Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 34/232

  • Oracle Business Intelligence integration with Oracle Open Office

    - by Harald Behnke
    A highlight of the latest Oracle Office product launches is the first set of Oracle application connectors, introduced with Oracle Open Office 3.3. The Oracle Open Office Connector for Oracle Business Intelligence perfectly demonstrates the advantages of enterprise and office productivity software engineered to work together. The connector enables you to access and run Oracle Business Intelligence Enterprise Edition requests directly within Oracle Open Office. The refreshable requests leverage not only native Open Office functionality but also the scalability and performance of the Oracle Business Intelligence server (R10.x). The requests reference a single source of information as defined in the Oracle Business Intelligence server data, thus ensuring consistent information across the enterprise. See how it works in the demo video. Beyond the dramatic license cost savings for Oracle Business Intelligence customers using Oracle Open Office, the joint engineering efforts result in usability and efficiency benefits not available with Microsoft Office:

      • Import styles and conditional formats defined in Business Intelligence answers
      • Apply customized styles, direct or conditional formats to Oracle Business Intelligence data - all changes are preserved during refresh
      • Change chart properties for Oracle Open Office charts - all changes are preserved during refresh

    Read more about the Oracle Open Office enterprise features.

    Read the article

  • Only 192.168.0.3 can request, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /srv/web/example.com/pub
            <Directory /srv/web/example.com/pub>
                Order Deny,Allow
                Deny from all
                Allow from 192.168.0.3
            </Directory>
        </VirtualHost>

    The Allow from 192.168.0.3 part is there to only allow requests from my workstation machine. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html How do I change this so that requests for /public/file.html get through from anyone? Note: /public/file.html doesn't actually exist as a file on the server. I redirect all incoming requests through a single index file using mod_rewrite.
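
    One possible direction (a sketch, not taken from the original post): because <Location> sections match the request URL rather than the filesystem, and are merged after <Directory> sections, a per-URL exception can be layered on top of the existing restriction. Assuming Apache 2.2-style access control as in the question, inside the same VirtualHost:

        <Directory /srv/web/example.com/pub>
            Order Deny,Allow
            Deny from all
            Allow from 192.168.0.3
        </Directory>
        # Location sections apply after Directory sections, so this
        # exception wins for this one URL only.
        <Location /public/file.html>
            Order Allow,Deny
            Allow from all
        </Location>

    Since the path is virtual (handled by mod_rewrite), matching on the URL space with <Location> also sidesteps the fact that no such file exists on disk.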

    Read the article

  • handling a GET error properly

    - by Andrew Heath
    I have a website that takes two primary GET strings: ?type=GAME&id=SomeGameID and ?type=SCENARIO&id=SomeScenarioID For reasons unknown, I have recently begun receiving requests for erroneous GET strings from both Yandex and Baidu. They are always in the form of ?type=GAME&id=SomeScenarioID None of my users are triggering these errors, so I am (sort of) confident that this is not due to an HTML template error somewhere on my part. There is also no HTTP_REFERER showing up in the $_SERVER array, so I'm guessing these are direct requests from bad database data on their part. I see two options for dealing with these bad requests, and would like to know which is recommended, or if there are other, better options I have not thought of:

      1. Simply 404 the request, since it is incorrect.
      2. Redirect the request to ?type=SCENARIO&id=SomeScenarioID - because the scenario IDs are always valid, the breakage is due to asking for the wrong type.
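
    To make the two options concrete, here is a small sketch (Python is used only for illustration since the real site is PHP; the ID set and handler shape are hypothetical):

        # Hypothetical set of known scenario IDs (would come from the database).
        SCENARIO_IDS = {"alpha", "bravo", "charlie"}

        def route(request_type, request_id):
            """Return an (HTTP status, payload) pair for an incoming request."""
            if request_type == "GAME" and request_id in SCENARIO_IDS:
                # Option 1: the request is simply wrong, refuse it.
                #   return (404, "Not Found")
                # Option 2: the id is a valid scenario id, so send a permanent
                # redirect to the corrected query string.
                return (301, "?type=SCENARIO&id=" + request_id)
            return (200, "serving %s %s" % (request_type, request_id))

        print(route("GAME", "bravo"))  # -> (301, '?type=SCENARIO&id=bravo')

    A 301 has the side benefit that well-behaved crawlers update their stored URLs over time, which may stop the bad requests at the source.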

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has two rotations: one happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against each service along with the number of requests and the average request time. This information can provide a good start in troubleshooting possible performance issues or tracking down a particular issue.

    Read the article

  • Application qos involving priority and bandwidth

    - by Steve Peng
    Our manager wants us to do application QoS, which is quite different from the well-known system QoS. We have many services of three types, and they have priorities. The manager wants to suspend low-priority service requests when there is not enough bandwidth for high-priority services. But if the high-priority service requests decrease, the bandwidth for low-priority services should increase and low-priority service requests should be allowed again. There should be an algorithm involving priority and bandwidth, but I don't know how to design it. Is there any example on the internet? Can somebody give a suggestion? Thanks. UPDATE: All these services are within the same process. We are setting the maximum bandwidth for the three types of services, keyed by their service ports, via tc (the Linux traffic control tool).
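
    One possible shape for such an algorithm, as a rough sketch (the capacity figure, class names, and demand values below are all illustrative, not from the question): allocate bandwidth strictly in priority order and suspend any class whose demand no longer fits, re-running the allocation whenever demand changes so suspended classes resume once headroom returns.

        # Sketch of priority-ordered bandwidth allocation (illustrative values).
        TOTAL_BANDWIDTH = 100_000_000  # assumed link capacity, bits/sec

        class ServiceClass:
            def __init__(self, name, priority, demand):
                self.name = name
                self.priority = priority   # lower number = higher priority
                self.demand = demand       # currently requested bits/sec
                self.allocated = 0
                self.suspended = False

        def allocate(classes):
            """Serve classes in priority order; suspend a class whose
            demand cannot be met from the remaining capacity."""
            remaining = TOTAL_BANDWIDTH
            for svc in sorted(classes, key=lambda s: s.priority):
                if svc.demand <= remaining:
                    svc.allocated, svc.suspended = svc.demand, False
                else:
                    svc.allocated, svc.suspended = 0, True
                remaining -= svc.allocated

        # Re-run allocate() whenever demand changes: when high-priority
        # demand drops, the freed capacity lets suspended classes resume.
        classes = [ServiceClass("high", 0, 70_000_000),
                   ServiceClass("mid", 1, 20_000_000),
                   ServiceClass("low", 2, 30_000_000)]
        allocate(classes)
        for svc in classes:
            print(svc.name, svc.allocated, "suspended" if svc.suspended else "ok")

    The computed per-class allocations could then be pushed down to tc as class rate limits; giving a lower class whatever is left over instead of suspending it outright is an easy variation.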

    Read the article

  • Remote Diagnostic Agent (RDA) version 4.30

    - by inowodwo
    posted by Maurice Bauhahn Remote Diagnostic Agent (RDA) version 4.30 was released on December 11th. A free download can be accessed via Knowledge Management article 314422.1 and installed in any Enterprise Performance Management 11.1.2.x environment. EPM-specific instructions are available in Knowledge Management article 1304885.1. This RDA version incorporates two new modules (EAS = Essbase Administration Services; HWA = Hyperion Web Analysis) and improvements in modules and profiles relating to twelve other Hyperion applications (EPM, EPMA, ESS, FCM, HFM, HFR, HIR, HPL, HPSV, HSS, PR, and HSV). To follow best practice, run related RDA profiles [for example: "perl rda.pl -vnSCRPp Hyperion1112_EAS"] and attach the output zip file [by default in \rda\output\] to your service requests. The comprehensive set of details provided in such output files should help technicians to avoid delays in handling service requests (by avoiding ping-pong communications resulting from repeated requests for additional values).

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes. So one of the first things I want to check there is how much CPU is getting used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server, where you can see a more consistent, lower set of CPU counters. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes. I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Comparing rg-sql01 and rg-sql02, clearly rg-sql01 has a lot of activity. Remember though, that's all this is a measure of: activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, where the number of requests drops as the CPU spikes? See, it's useful. Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is getting used within the system. None of these showed anything interesting on either server. One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs then drops due to a data load or something along those lines. That's not the case here: the spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so then dropping, but the rg-sql02 PLE looks like it might be all over the map. Instead of continuing to look at this high-level data-gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week. And now we begin to see where we might have an issue: memory on this system is getting flushed every half hour or so. I'm going to check another metric, scans. Whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02: the average disk queue length on the server, and the transfers. Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries. I'm modifying it to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results. OK, I need to look into these queries that are getting executed this much. They're generating a lot of reads, but which queries are generating the most reads? Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor.
    What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down that it is) and then start seeing what's up with these queries. Part 1 of the Series | Part 2 of the Series
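
    (As an aside, the counters discussed above are also exposed by SQL Server itself, so a quick cross-check is possible without the tool; a sketch using the standard performance-counter DMV:)

        -- Spot-check the counters discussed above straight from SQL Server.
        SELECT [object_name], counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN ('Batch Requests/sec',
                               'SQL Compilations/sec',
                               'SQL Re-Compilations/sec',
                               'Page life expectancy');

    Note that the */sec counters are cumulative since server start, so two samples taken a known interval apart are needed to derive a true per-second rate.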

    Read the article

  • Can't get simple Apache VHost up and running

    - by TK Kocheran
    Unfortunately, I can't seem to get a simple Apache VHost online. I used to simply have one VHost which bound to all (<VirtualHost *:80>), but this isn't appropriate for security anymore. I need to have one VHost for localhost requests (i.e. my dev server) and one for incoming requests via my domain name. Here's my new VHost:

        NameVirtualHost domain1.com
        <VirtualHost domain1.com:80>
            DocumentRoot /var/www
            ServerName domain1.com
        </VirtualHost>

        <VirtualHost domain2.com:80>
            DocumentRoot /var/www
            ServerName domain2.com
        </VirtualHost>

    After I restart my server, I see the following errors in my log:

        [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs
        [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs

    What am I doing wrong?

    EDIT: As per the answer given below, I have modified my configuration. Here are my configuration files.

    /etc/apache2/ports.conf:

        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Here are my actual defined sites.

    /etc/apache2/sites-enabled/000-localhost:

        NameVirtualHost 127.0.0.1:80
        <VirtualHost 127.0.0.1:80>
            ServerAdmin #########
            DocumentRoot /var/www

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>

            RewriteEngine On
            RewriteLog "/var/log/apache2/mod_rewrite.log"
            RewriteLogLevel 9

            <Location />
                <Limit GET POST PUT>
                    order allow,deny
                    allow from all
                    deny from 65.34.248.110
                    deny from 69.122.239.3
                    deny from 58.218.199.147
                    deny from 65.34.248.110
                </Limit>
            </Location>
        </VirtualHost>

    /etc/apache2/sites-enabled/001-rfkrocktk.dyndns.org:

        NameVirtualHost rfkrocktk.dyndns.org:80
        <VirtualHost rfkrocktk.dyndns.org:80>
            DocumentRoot /var/www
            ServerName rfkrocktk.dyndns.org
        </VirtualHost>

    And, just for kicks, my main file, /etc/apache2/apache2.conf:

        # Based upon the NCSA server configuration files originally by Rob McCool.
        #
        # This is the main Apache server configuration file. It contains the
        # configuration directives that give the server its instructions.
        # See http://httpd.apache.org/docs/2.2/ for detailed information about
        # the directives.
        #
        # Do NOT simply read the instructions in here without understanding
        # what they do. They're here only as hints or reminders. If you are unsure
        # consult the online docs. You have been warned.
        #
        # The configuration directives are grouped into three basic sections:
        #  1. Directives that control the operation of the Apache server process as a
        #     whole (the 'global environment').
        #  2. Directives that define the parameters of the 'main' or 'default' server,
        #     which responds to requests that aren't handled by a virtual host.
        #     These directives also provide default values for the settings
        #     of all virtual hosts.
        #  3. Settings for virtual hosts, which allow Web requests to be sent to
        #     different IP addresses or hostnames and have them handled by the
        #     same Apache server process.
        #
        # Configuration and logfile names: If the filenames you specify for many
        # of the server's control files begin with "/" (or "drive:/" for Win32), the
        # server will use that explicit path. If the filenames do *not* begin
        # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log"
        # with ServerRoot set to "" will be interpreted by the
        # server as "//var/log/apache2/foo.log".

        ### Section 1: Global Environment
        #
        # The directives in this section affect the overall operation of Apache,
        # such as the number of concurrent requests it can handle or where it
        # can find its configuration files.

        # ServerRoot: The top of the directory tree under which the server's
        # configuration, error, and log files are kept.
        #
        # NOTE! If you intend to place this on an NFS (or otherwise network)
        # mounted filesystem then please read the LockFile documentation (available
        # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>);
        # you will save yourself a lot of trouble.
        #
        # Do NOT add a slash at the end of the directory path.
        ServerRoot "/etc/apache2"

        # The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
        #<IfModule !mpm_winnt.c>
        #<IfModule !mpm_netware.c>
        LockFile /var/lock/apache2/accept.lock
        #</IfModule>
        #</IfModule>

        # PidFile: The file in which the server should record its process
        # identification number when it starts.
        # This needs to be set in /etc/apache2/envvars
        PidFile ${APACHE_PID_FILE}

        # Timeout: The number of seconds before receives and sends time out.
        Timeout 300

        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        KeepAlive On

        # MaxKeepAliveRequests: The maximum number of requests to allow
        # during a persistent connection. Set to 0 to allow an unlimited amount.
        # We recommend you leave this number high, for maximum performance.
        MaxKeepAliveRequests 100

        # KeepAliveTimeout: Number of seconds to wait for the next request from the
        # same client on the same connection.
        KeepAliveTimeout 15

        ##
        ## Server-Pool Size Regulation (MPM specific)
        ##

        # prefork MPM
        # StartServers: number of server processes to start
        # MinSpareServers: minimum number of server processes which are kept spare
        # MaxSpareServers: maximum number of server processes which are kept spare
        # MaxClients: maximum number of server processes allowed to start
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # worker MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

        # event MPM
        # StartServers: initial number of server processes to start
        # MaxClients: maximum number of simultaneous client connections
        # MinSpareThreads: minimum number of worker threads which are kept spare
        # MaxSpareThreads: maximum number of worker threads which are kept spare
        # ThreadsPerChild: constant number of worker threads in each server process
        # MaxRequestsPerChild: maximum number of requests a server process serves
        <IfModule mpm_event_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

        # These need to be set in /etc/apache2/envvars
        User ${APACHE_RUN_USER}
        Group ${APACHE_RUN_GROUP}

        # AccessFileName: The name of the file to look for in each directory
        # for additional configuration directives. See also the AllowOverride
        # directive.
        AccessFileName .htaccess

        # The following lines prevent .htaccess and .htpasswd files from being
        # viewed by Web clients.
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        # DefaultType is the default MIME type the server will use for a document
        # if it cannot otherwise determine one, such as from filename extensions.
        # If your server contains mostly text or HTML documents, "text/plain" is
        # a good value. If most of your content is binary, such as applications
        # or images, you may want to use "application/octet-stream" instead to
        # keep browsers from trying to display binary files as though they are
        # text.
        DefaultType text/plain

        # HostnameLookups: Log the names of clients or just their IP addresses
        # e.g., www.apache.org (on) or 204.62.129.132 (off).
        # The default is off because it'd be overall better for the net if people
        # had to knowingly turn this feature on, since enabling it means that
        # each client request will result in AT LEAST one lookup request to the
        # nameserver.
        HostnameLookups Off

        # ErrorLog: The location of the error log file.
        # If you do not specify an ErrorLog directive within a <VirtualHost>
        # container, error messages relating to that virtual host will be
        # logged here. If you *do* define an error logfile for a <VirtualHost>
        # container, that host's errors will be logged there and not here.
        ErrorLog /var/log/apache2/error.log

        # LogLevel: Control the number of messages logged to the error_log.
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        # Include module configuration:
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf

        # Include all the user configurations:
        Include /etc/apache2/httpd.conf

        # Include ports listing
        Include /etc/apache2/ports.conf

        # The following directives define some format nicknames for use with
        # a CustomLog directive (see below).
        # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i
        LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
        LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %O" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        # Define an access log for VirtualHosts that don't define their own logfile
        CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined

        # Include of directories ignores editors' and dpkg's backup files,
        # see README.Debian for details.

        # Include generic snippets of statements
        Include /etc/apache2/conf.d/

        # Include the virtual host configurations:
        Include /etc/apache2/sites-enabled/

    What else do I need to do to fix it? Should I be telling Apache to listen on 127.0.0.1:80, or isn't it already listening there?
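
    For what it's worth, one commonly suggested direction (a sketch under the assumption that both hostnames resolve to the same interface; it is not taken from the post itself): use a single wildcard NameVirtualHost declaration and let ServerName do the routing, rather than putting hostnames in the NameVirtualHost/VirtualHost arguments, which forces Apache to resolve them at startup:

        # Declared once (e.g. in ports.conf), alongside Listen 80:
        NameVirtualHost *:80

        # Each site then matches purely on the Host: header.
        <VirtualHost *:80>
            ServerName localhost
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName rfkrocktk.dyndns.org
            DocumentRoot /var/www
        </VirtualHost>

    As for the closing question: Listen 80 already makes Apache listen on all interfaces, 127.0.0.1 included, so a separate Listen 127.0.0.1:80 shouldn't be needed.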

    Read the article

  • Pull Request Changes, Multi-Selection in Advanced View, and Advertisement Changes

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website today. Pull Request Changes In this release, we have begun to re-focus on Pull Requests to ensure a productive experience between the project users and developers. We feel we made significant progress in this area for this release and look forward to using your feedback to drive future iterations. One of the biggest hurdles people have indicated is the inability to see what a pull request includes without pulling the source down from a Mercurial client. With today’s changes, any user has the ability to view a pull request, the changesets / changes included, and perform an inline diff of the file. When a pull request is made, the CodePlex website will query for all outgoing changes from the fork to the main repository for a point-in-time comparison. Because of this point-in-time comparison:

      • All existing pull requests created prior to this release will not have changesets associated with them.
      • If new commits are pushed to the fork while a pull request is active, they will not appear associated with the pull request. The pull request will need to be re-submitted for them to appear.

    Once a pull request is created, you can “View the Pull Request”, which takes you to its own page. As you may notice, we now display a lot more detailed information regarding that pull request, including who it was requested by and when, the associated changesets, the description, who it’s assigned to (we’ll come back to this) and the listing of summarized file changes. What you’ll also notice is that each modified file has the ability to view a diff of all changes made. When you click “(view diff)” for a file, an inline diff experience appears. This new experience allows you to quickly navigate through all of the modified files as well as viewing the various change blocks for each file. You’ll also notice, as you browse through each file’s changes, that we update the URL to include the file path so you can quickly send a direct link to a pull request’s file. Clicking “(close diff)” will bring you back to the original pull request view. View this pull request live on WikiPlex. Pull Request Review Assignment Another new feature we added for pull requests is the ability for project members to assign pull requests for review. Any project member has the ability to assign (and re-assign if needed) a pull request to a project member. Once the assignment has been made, that project member will be notified via email of the assignment. Once they complete the review of the pull request, they can either accept or deny it, similarly to the previous process. Multi-Selection in Advanced View Filters One of the more recent requests we have heard from users is the ability to multi-select advanced view filters for work items. We are happy to announce this is now possible. Simply control-click the multiple options for each filter item and your work item query will be refined as such. Should you happen to unselect all options for a given filter, it will automatically reset to the default option for that filter. Furthermore, the “Direct Link” URL will be updated to include the multi-selected options for each filter. Note: The “Direct Link” feature was released in our previous deployment, just never written about. It allows you to capture the current state of your query and send it to other individuals.
    Advertisement Changes Very recently, The Lounge, the advertiser we partnered with to provide advertising revenue for projects (or donations to charity), was acquired by Lake Quincy Media. There has been no change in the advertising platform offering, and all projects have been converted over to using the new infrastructure. Project owners should note the new contact information for getting paid. The CodePlex team values your feedback, and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you’ve not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • Configure a WinHTTP application to use Fiddler.

    - by ajit goel
    I need to see the actual requests being made from an ASP page to the web service (which calls another web service). All these requests happen on the same local box. I ran "proxycfg -p http=127.0.0.1:8888;https=127.0.0.1:8888" at the command prompt, based on http://www.fiddler2.com/fiddler/help/hookup.asp#Q-WinHTTP ("How can I configure a WinHTTP application to use Fiddler?"). I now see the web service WSDL requests in Fiddler, but not the actual requests. Would someone know why?

    Read the article

  • c# Network Programming - HTTPWebRequest Scraping

    - by masterguru
    Hi, I am building a web scraping application. It should scrape a complex web site with concurrent HttpWebRequests from a single host to a single target web server. The application should run on Windows Server 2008. A single HttpWebRequest for data could take from 1 minute to 4 minutes to complete (because of long-running db operations). I should have at least 100 parallel requests to the target web server, but I have noticed that when I use more than 2-3 long-running requests I have big performance issues (request timeouts/hanging). My questions:

      1. How many concurrent requests can I have in this scenario from a single host to a single target web server?
      2. Can I use thread pools in the application to run parallel HttpWebRequests to the server?
      3. Will I have any issues with the default outbound HTTP connection/request limits?
      4. What about request timeouts when I reach outbound connection limits?
      5. What would be the best setup for my scenario?

    Any help would be appreciated. Thanks
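
    On the default-limits question specifically: .NET's HTTP stack caps concurrent connections per host (the classic default is 2 for client applications), which matches the symptom of things stalling beyond 2-3 in-flight requests. A sketch of the usual knob, either ServicePointManager.DefaultConnectionLimit in code or app.config (the value of 100 below is only illustrative):

        <configuration>
          <system.net>
            <connectionManagement>
              <!-- Raise the per-host outbound connection cap;
                   address="*" applies the limit to all hosts. -->
              <add address="*" maxconnection="100" />
            </connectionManagement>
          </system.net>
        </configuration>

    Requests beyond the cap queue up inside the ServicePoint and can then time out while waiting, which would look exactly like hangs.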

    Read the article

  • Rails: show some examples of code from controllers, models and views

    - by Totty
    Hi, my controller example:

        class FriendsController < ApplicationController
          before_filter :authorize, :except => [:friends]

          ##############
          ##############
          ## REQUESTS ##
          ##############
          ##############

          ##################
          # GET MY FRIENDS #
          ##################
          # Get my friends.
          def friends
            @friends = @my_profile.friends.paginate({:page => params[:page], :per_page => 3})
            @profile = @my_profile
          end

          ###################
          # REMOVED FRIENDS #
          ###################
          # Get my deleted friends.
          def removed_friends
            @removed_friends = @my_profile.friends('removed_friends', params[:page])
          end

          ###################
          # PENDING FRIENDS #
          ###################
          # Friend requests made by other profiles to me.
          def pending_friends
            @pending_friends = @my_profile.friends('pending_friends', params[:page])
          end

          ############################
          # REJECTED PENDING FRIENDS #
          ############################
          # Rejected friend requests made by other profiles to me.
          def rejected_pending_friends
            @rejected_pending_friends = @my_profile.friends('rejected_pending_friends', params[:page])
          end

          #####################
          # REQUESTED FRIENDS #
          #####################
          # The friend requests I've sent to other profiles.
          def requested_friends
            @requested_friends = @my_profile.friends('requested_friends', params[:page])
          end

          #############################
          # DELETED REQUESTED FRIENDS #
          #############################
          # The requests I've sent to other profiles and then canceled.
          def deleted_requested_friends
            @deleted_requested_friends = @my_profile.friends('deleted_requested_friends', params[:page])
          end

          #############
          #############
          ## ACTIONS ##
          #############
          #############

          ##########################
          # ADD FRIENDSHIP REQUEST #
          ##########################
          # Add a friendship request.
          def add_friendship_request
            friendship = @my_profile.add_friendship_request(params[:profile_id])
            render :json => friendship
          end

          #############################
          # REMOVE FRIENDSHIP REQUEST #
          #############################
          # Removes a friendship request I've made.
          def remove_friendship_request
            friendship = @my_profile.remove_friendship_request(params[:profile_id])
            render :json => friendship
          end

          ######################
          # PROCESS FRIENDSHIP #
          ######################
          # Process friendship: accept or reject a friend.
          # This will make a new friend or a new rejected pending friend.
          def process_friendship
            friendship = @my_profile.process_friendship(params[:profile_id].to_i, params[:accepted].to_i)
            render :json => friendship
          end

          ###################
          # REMOVE A FRIEND #
          ###################
          # Remove a friend from my friends by id.
          def remove_friend
            friendship = @my_profile.remove_friend(params[:profile_id])
            render :json => friendship
          end
        end

    Read the article

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously, though some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.

    Read the article

  • IRequest / IResponse Pattern

    - by traderde
    I am trying to create an Interface-based Request/Response pattern for Web API requests to allow for asynchronous consumer/producer processing, but I am not sure how I would know what the underlying IResponse class is.

        public void Run()
        {
            List<IRequest> requests = new List<IRequest>();
            List<IResponse> responses = new List<IResponse>();

            requests.Add(new AmazonWebServiceRequest()); // should be object, trying to keep it simple
            requests.Add(new EBayWebRequest());          // should be object, trying to keep it simple

            foreach (IRequest req in requests)
            {
                responses.Add(req.GetResponse());
            }

            foreach (IResponse resp in responses)
            {
                // ???? - this is the question: how do I know the underlying type of resp here?
            }
        }

        interface IRequest
        {
            IResponse GetResponse();
        }

        interface IResponse
        {
        }

        public class AmazonWebServiceRequest : IRequest
        {
            public AmazonWebServiceRequest()
            {
                // get data;
            }

            public IResponse GetResponse()
            {
                AmazonWebServiceRequest request = new AmazonWebServiceRequest();
                return (IResponse)request; // note: as posted, this casts the request, not a response
            }
        }

        public class AmazonWebServiceResponse : IResponse
        {
            XmlDocument _xml;

            public AmazonWebServiceResponse(XmlDocument xml)
            {
                _xml = xml;
                _parseXml();
            }

            private void _parseXml()
            {
                // parse Xml into object;
            }
        }

        public class EBayWebRequest : IRequest
        {
            public EBayWebRequest()
            {
                // get data;
            }

            public IResponse GetResponse()
            {
                EBayWebRequest request = new EBayWebRequest();
                return (IResponse)request; // note: as posted, this casts the request, not a response
            }
        }

        public class EBayWebResponse : IResponse
        {
            XmlDocument _xml;

            public EBayWebResponse(XmlDocument xml)
            {
                _xml = xml;
                _parseXml();
            }

            private void _parseXml()
            {
                // parse Xml into object;
            }
        }

    Read the article

  • How do you prevent brute force attacks on RESTful data services

    - by Adrian Grigore
    Hi, I'm about to implement a RESTful API for our website (based on WCF Data Services, but that probably does not matter). All data offered via this API belongs to certain users of my server, so I need to make sure only those users have access to my resources. For this reason, all requests have to be performed with a login/password combination as part of the request. What's the recommended approach for preventing brute force attacks in this scenario? I was thinking of logging failed requests denied due to wrong credentials and ignoring requests originating from the same IP after a certain threshold of failed requests has been exceeded. Is this the standard approach, or am I missing something important? Thanks, Adrian
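
    The logging-plus-threshold idea described above is straightforward to express; here is a minimal illustration in Python (the window size and threshold are made-up numbers, and a real deployment would want shared storage such as a cache or database rather than in-process state):

        import time
        from collections import defaultdict, deque

        WINDOW_SECONDS = 300   # only count failures from the last 5 minutes
        MAX_FAILURES = 10      # block an IP after this many recent failures

        _failures = defaultdict(deque)  # ip -> timestamps of failed logins

        def is_blocked(ip):
            q = _failures[ip]
            now = time.time()
            while q and now - q[0] > WINDOW_SECONDS:
                q.popleft()                 # forget stale failures
            return len(q) >= MAX_FAILURES

        def record_failure(ip):
            _failures[ip].append(time.time())

        # In the request handler (pseudo-flow):
        #   if is_blocked(client_ip): reject before checking credentials
        #   if the credentials are wrong: record_failure(client_ip)

        # quick demo:
        for _ in range(10):
            record_failure("203.0.113.5")
        print(is_blocked("203.0.113.5"))  # True: this IP is now throttled

    The sliding window means a blocked IP recovers automatically once its failures age out; per-IP exponential back-off is a common refinement.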

    Read the article

  • Configure Apache to use different Unix User Accounts (www-data) per Site.

    - by BrainCore
    An Apache 2.x Webserver with default configurations from the ubuntu/debian repositories will use the www-data unix account for apache2 processes handling web requests. Assuming that apache is serving two different sites (domain1.com and domain2.com), is it possible for apache to use unix user www-data1 when handling requests to domain1.com, and use unix user www-data2 when handling requests to domain2.com? The motivation is to isolate the code for each domain name from one another.
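
    Stock Apache cannot switch the worker's unix user per virtual host (all children run as the single configured User), which is why this question usually comes up alongside third-party options such as the mpm-itk patch, suexec (for CGI), or mod_suphp. As a sketch of the mpm-itk route, assuming the apache2-mpm-itk package is installed (the DocumentRoot paths are illustrative, and www-data1/www-data2 are the hypothetical accounts from the question, which would need to exist on the system):

        <VirtualHost *:80>
            ServerName domain1.com
            DocumentRoot /var/www/domain1
            # mpm-itk directive: handle this vhost's requests as www-data1
            AssignUserID www-data1 www-data1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName domain2.com
            DocumentRoot /var/www/domain2
            AssignUserID www-data2 www-data2
        </VirtualHost>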

    Read the article

  • LINQ SELECT COUNT(*) AND EmployeeId

    - by Mahesh
    Hi, I have tables like below:

        EmployeeId, EmployeeName
        RequestId, RequestName
        EmployeeId, RequestId

    I need to assign requests in a sequential fashion to whoever has the minimum number of requests. How can I get the employee who has the minimum number of requests using LINQ? Thanks, Mahesh

    Read the article

  • What would you like to correct and/or improve in this java implementation of Chain Of Responsibility

    - by Maciek Kreft
    package design.pattern.behavioral;

        import design.pattern.behavioral.ChainOfResponsibility.*;

        public class ChainOfResponsibility {
            public static class Chain {
                private Request[] requests = null;
                private Handler[] handlers = null;

                public Chain(Handler[] handlers, Request[] requests) {
                    this.handlers = handlers;
                    this.requests = requests;
                }

                public void start() {
                    for (Request r : requests)
                        for (Handler h : handlers)
                            if (h.handle(r)) break;
                }
            }

            public static class Request {
                private int value;

                public Request setValue(int value) {
                    this.value = value;
                    return this;
                }

                public int getValue() {
                    return value;
                }
            }

            public static class Handler<T1> {
                private Lambda<T1> lambda = null;
                private Lambda<T1> command = null;

                public Handler(Lambda<T1> condition, Lambda<T1> command) {
                    this.lambda = condition;
                    this.command = command;
                }

                public boolean handle(T1 request) {
                    if (lambda.lambda(request)) command.lambda(request);
                    return lambda.lambda(request);
                }
            }

            public static abstract class Lambda<T1> {
                public abstract Boolean lambda(T1 request);
            }
        }

        class TestChainOfResponsibility {
            public static void main(String[] args) {
                new TestChainOfResponsibility().test();
            }

            private void test() {
                new Chain(new Handler[] { // chain of responsibility
                    new Handler<Request>(
                        new Lambda<Request>() { // command
                            public Boolean lambda(Request condition) {
                                return condition.getValue() >= 600;
                            }
                        },
                        new Lambda<Request>() {
                            public Boolean lambda(Request command) {
                                System.out.println("You are rich: " + command.getValue()
                                    + " (id: " + command.hashCode() + ")");
                                return true;
                            }
                        }
                    ),
                    new Handler<Request>(
                        new Lambda<Request>() {
                            public Boolean lambda(Request condition) {
                                return condition.getValue() >= 100;
                            }
                        },
                        new Lambda<Request>() {
                            public Boolean lambda(Request command) {
                                System.out.println("You are poor: " + command.getValue()
                                    + " (id: " + command.hashCode() + ")");
                                return true;
                            }
                        }
                    ),
                }, new Request[] {
                    new Request().setValue(600), // chaining method
                    new Request().setValue(100),
                }).start();
            }
        }

    Read the article

  • Is this code thread safe?

    - by Shawn Simon
    ''' <summary>
        ''' Returns true if a submission by the same IP address has not been submitted in the past n minutes.
        ''' </summary>
        Protected Function EnforceMinTimeBetweenSubmissions() As Boolean
            Dim minTimeBetweenRequestsMinutes As Integer = 0
            Dim configuredTime = ConfigurationManager.AppSettings("MinTimeBetweenSchedulingRequestsMinutes")

            If String.IsNullOrEmpty(configuredTime) Then Return True

            If (Not Integer.TryParse(configuredTime, minTimeBetweenRequestsMinutes)) _
                OrElse minTimeBetweenRequestsMinutes > 1440 _
                OrElse minTimeBetweenRequestsMinutes < 0 Then
                Throw New ApplicationException("Invalid configuration setting for AppSetting 'MinTimeBetweenSchedulingRequestsMinutes'")
            End If

            If minTimeBetweenRequestsMinutes = 0 Then
                Return True
            End If

            If Cache("submitted-requests") Is Nothing Then
                Cache("submitted-requests") = New Dictionary(Of String, Date)
            End If

            ' Remove old requests.
            Dim submittedRequests As Dictionary(Of String, Date) = CType(Cache("submitted-requests"), Dictionary(Of String, Date))
            Dim itemsToRemove = submittedRequests.Where(Function(s) s.Value < Now).Select(Function(s) s.Key).ToList

            For Each key As String In itemsToRemove
                submittedRequests.Remove(key)
            Next

            If submittedRequests.ContainsKey(Request.UserHostAddress) Then
                ' User has submitted a request in the past n minutes.
                Return False
            Else
                submittedRequests.Add(Request.UserHostAddress, Now.AddMinutes(minTimeBetweenRequestsMinutes))
            End If

            Return True
        End Function

    Read the article

  • HTTP HEAD Request and System.Web.Mvc.FileResult

    - by mnero0429
    I'm using BITS to make requests to an ASP.NET MVC controller method named Source that returns a FileResult. I know the type FilePathResult uses HttpResponse.TransmitFile, but I don't know whether HttpResponse.TransmitFile actually writes the file to the response stream regardless of the request type. My question is: does FileResult only include the header information on HEAD requests, or does it transmit the file regardless of the request type? Or do I have to account for HEAD requests myself?

    Read the article

  • Qhttp request and response debugging.

    - by William Wilson
    OS: Windows XP/Vista. Qt version: 4.6.1, using OpenSSL. I need to watch the actual requests and responses that are going through the wire for QHttp requests and responses, and in some cases I need to interrupt the request. I tried a few of the HTTP debuggers available in the market, but they seem to work only for requests that are using the WinInet functions. Unfortunately, the openssldump utility is not present on Windows platforms. Thank you.

    Read the article

  • Website stress test in Python - Django

    - by RadiantHex
    Hi folks, I'm trying to build a small stress-test script to test how quickly a set of requests gets done. I need to measure the speed for 100 requests. The problem is that I don't know how to implement it, as it would require parallel URL requests to be called. Any ideas?
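
    A minimal sketch of one way to do it with the Python standard library (the URL, request count, and concurrency below are placeholders): use a thread pool so the requests overlap, and time the whole batch.

        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost:8000/"   # hypothetical endpoint under test
        NUM_REQUESTS = 100
        CONCURRENCY = 10

        def fetch(_):
            # Issue one GET and read the whole body.
            with urllib.request.urlopen(URL) as resp:
                resp.read()

        start = time.time()
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            list(pool.map(fetch, range(NUM_REQUESTS)))
        elapsed = time.time() - start
        print(f"{NUM_REQUESTS} requests in {elapsed:.2f}s "
              f"({NUM_REQUESTS / elapsed:.1f} req/s)")

    Varying CONCURRENCY shows how the Django app behaves as parallelism grows; dedicated tools like ab or siege do the same thing with more bells and whistles.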

    Read the article

  • sendmail and MX records when mail server is not on web host

    - by Jim Nelson
    This is a problem I'm sure is easy to fix, but I've been banging my head on it all day. I'm developing a new web site for a client. The web site resides at (this is an example) website.com. I have a PHP form script to email visitors' requests to requests@website.com. When I coded this on a staging server on a different domain, all worked fine. When I moved it to website.com, the mail messages never arrived. The web server is on a virtual host with a major ISP.

    Here's what I've learned since then: my client's mail server is Microsoft Exchange on a box physically in their office. Whenever someone in the outside world emails requests@website.com, the mail arrives. But if the web server sends to the same email address, it fails every time.

    This is not a PHP problem. I secure shell in to the web server and have tested this both with sendmail and the UNIX mail application. I've also tested it by emailing various email accounts from the shell. I can email myself, for example, just nobody at the website.com domain. In short, when I'm logged in to website.com, mail to requests@website.com, or any other address at that domain, fails. All other addresses work fine.

    What I've discovered is those dropped emails are routed to the web server's "catchall" account, where they sit in its inbox. I've done an MX lookup on website.com. The MX record points to mailsec.website.com. I can telnet to mailsec.website.com port 25 and see the SMTP server.

    It appears to me that website.com isn't doing an MX lookup when it's sending mail to requests@website.com. My theory is that it recognizes the domain as local, sees that there's no "requests" user account to deliver it to, and drops the mail into the catchall account. What I want is to force sendmail to do the MX lookup and send the message on to the Exchange server. I'm at wit's end here. I can't figure out how to do this. For that matter, I may be way off base here and have misdiagnosed this entirely. Internet mail and MX have always seemed a black art to me, and my ignorance is certainly showing in this question.
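
    The diagnosis in the question matches how sendmail treats domains it considers local: if the domain is in its local delivery list, it never consults MX records. One commonly suggested place to look (an assumption about this setup, not something stated in the post) is sendmail's local-host-names file:

        # /etc/mail/local-host-names: domains sendmail treats as local.
        # If website.com is listed here, mail to requests@website.com is
        # delivered to local accounts (falling through to the catchall)
        # instead of being routed via the MX record. Removing the entry
        # and restarting sendmail makes it perform the MX lookup instead.
        website.com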

    Read the article

  • Real time embeddable http server library required

    - by Howard May
    Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements. I need a library which presents an API which is 'pipelined'. Pipelining is used to describe an HTTP feature where multiple HTTP requests can be sent across a TCP link at a time without waiting for a response. I want a similar feature on the library API, where my application can receive all of those requests without having to send a response (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow:

      1. HTTP Client transmits http request 1
      2. HTTP Client transmits http request 2 ...
      3. Web Server Library receives request 1 and passes it to My Web Server App
      4. My Web Server App receives request 1 and dispatches it to My System
      5. Web Server receives request 2 and passes it to My Web Server App
      6. My Web Server App receives request 2 and dispatches it to My System
      7. My Web Server App receives response to request 1 from My System and passes it to Web Server
      8. Web Server transmits HTTP response 1 to HTTP Client
      9. My Web Server App receives response to request 2 from My System and passes it to Web Server
      10. Web Server transmits HTTP response 2 to HTTP Client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding. Additional requirements are:

      • Embeddable into an existing 'C' application
      • Small footprint; I don't need all the functionality available in Apache etc.
      • Efficient; will need to support thousands of requests a second
      • Allows asynchronous responses to requests; there is a small latency to responses, and given the required request throughput a synchronous architecture is not going to work for me
      • Support persistent TCP connections
      • Support use with Server-Push Comet connections
      • Open Source / GPL
      • Support for HTTPS
      • Portable across linux, windows; preferably more

    I will be very grateful for any recommendation. Best Regards

    Read the article
