Search Results

Search found 8990 results on 360 pages for 'response transmitfile'.


  • World Record Oracle Business Intelligence Benchmark on SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server configured with four SPARC T4 3.0 GHz processors delivered the first and best result of 25,000 concurrent users on the Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark, using Oracle Database 11g Release 2 running on Oracle Solaris 10. A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with the Oracle BI server cache set to ON. The benchmark data clearly shows that the underlying hardware, the SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scale within a single system, supporting 25,000 concurrent users while executing 415 transactions/sec. The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server. Oracle Internet Directory configured on a SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time. A SPARC T4-4 with internal Solid State Drives (SSD) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32,767 sub-directories in a Web Catalog directory. The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and the SPARC T4-4 server proved to be a successful combination, providing sub-second response times for end-user transactions while consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of headroom for increased load. The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability and performance. Oracle BI EE 11g is a newer version of the Business Intelligence Suite with richer and superior functionality. Results produced with the Oracle BI EE 11g benchmark are not comparable to results from the Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI.

    Performance Landscape

    Results for the Oracle BI EE 11g version of the benchmark (not comparable to the Oracle BI EE 10g version):
    System: 1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz) - Number of Users: 25,000 - Response Time: 0.36 sec

    Results for the Oracle BI EE 10g version of the benchmark (not comparable to the Oracle BI EE 11g version):
    System: 2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz) - Number of Users: 50,000
    System: 1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz) - Number of Users: 28,000

    Configuration Summary

    Hardware Configuration: SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 128 GB memory; 4 x 300 GB internal SSD
    Storage Configuration: Sun ZFS Storage 7120 with 16 x 146 GB disks
    Software Configuration: Oracle Solaris 10 8/11; Oracle Solaris Studio 12.1; Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0); Oracle WebLogic Server 10.3.5; Oracle Internet Directory 11.1.1.6.0; Oracle Database 11g Release 2

    Benchmark Description

    Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more.
    The Oracle BI EE benchmark test used five different business user roles - Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports in the form of a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard. The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill-down. For example, a Service Manager logs into the system and navigates to his own set of dashboards using the Service Manager role. The BI user selects the Service Effectiveness dashboard, which shows him four distinct reports - Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call - spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of four related reports, and drills down on some of the reports to see the detail data. The BI user continues to view more dashboards - Customer Satisfaction and Service Request Overview, for example. After navigating through those dashboards, the user logs out of the application. The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real-world customer scenario.

    See Also: SPARC T4-4 Server (oracle.com, OTN) | Oracle Business Intelligence (oracle.com, OTN) | Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN) | WebLogic Suite (oracle.com, OTN) | Oracle Solaris (oracle.com, OTN)

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Convert an Enum to String

    - by Aamir Hasan
    Enum.GetName() retrieves the name of the constant in the specified enumeration that has the specified value. If you have used an enum before you will know that it can represent numbers (usually int, but also byte, sbyte, short, ushort, uint, long, and ulong) but not strings. I created my enum and was in the process of coding up a lookup table to convert my enum parameter back into a string when I found this handy method, Enum.GetName().

    using System;

    public class GetNameTest
    {
        enum Colors { Red, Green, Blue, Yellow };
        enum Styles { Plaid, Striped, Tartan, Corduroy };

        public static void Main()
        {
            // Console.WriteLine is used here so the sample compiles as a console app;
            // in an ASP.NET page the same strings could be written with Response.Write.
            Console.WriteLine("The 4th value of the Colors Enum is " + Enum.GetName(typeof(Colors), 3));
            Console.WriteLine("The 4th value of the Styles Enum is " + Enum.GetName(typeof(Styles), 3));
        }
    }

    Reference:
    http://msdn.microsoft.com/en-us/library/system.enum.getname.aspx
    http://www.studentacad.com/post/2010/03/31/Convert-an-Enum-to-String.aspx
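    As a companion to Enum.GetName(), the short sketch below (my addition, not from the original post) shows the two other conversions you usually need: ToString() to get the name from a value, and Enum.Parse() to get the value back from a string.

    using System;

    public class EnumStringRoundTrip
    {
        enum Colors { Red, Green, Blue, Yellow };

        public static void Main()
        {
            // Enum value -> string: ToString() also returns the constant's name.
            Colors c = Colors.Blue;
            Console.WriteLine(c.ToString());                    // prints "Blue"

            // String -> enum value: Enum.Parse goes the other way.
            Colors parsed = (Colors)Enum.Parse(typeof(Colors), "Yellow");
            Console.WriteLine((int)parsed);                     // prints 3
        }
    }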

    Read the article

  • SQL Server Intellisense VS. Red Gate SQL Prompt

    Fabiano Amorim is hooked on today's Integrated Development Environments with built-in IntelliSense, so he looked forward keenly to SQL Server 2008's native IntelliSense. He was disappointed with how it turned out, so he turned instead to SQL Prompt. Fabiano explains why he prefers SQL Prompt, why he reckons it fits in with the way that database developers work, and goes on to describe some of the features he'd like to see in it.

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: The Client

    - by Your DisplayName here!
    If you wondered what a client has to look like to work with the authentication framework, it is pretty straightforward: request a token, put that token on the Authorization header (along with a registered scheme) and make the service call, e.g.:

    var oauth2 = new OAuth2Client(_oauth2Address);
    var swt = oauth2.RequestAccessToken(
        "username",
        "password",
        _baseAddress.AbsoluteUri);

    var client = new HttpClient { BaseAddress = _baseAddress };
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", swt);

    var response = client.Get("identity");
    response.EnsureSuccessStatusCode();

    HTH
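    For readers without the OAuth2Client helper used above, here is a minimal sketch of the same pattern using only the standard HttpClient API; how the bearer token string is obtained is assumed and stubbed out (GetTokenSomehow is a placeholder, not a real API).

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class BearerTokenClientSketch
    {
        static async Task Main()
        {
            // Assumption: some token endpoint has already issued this string.
            string token = GetTokenSomehow();

            var client = new HttpClient { BaseAddress = new Uri("https://example.com/api/") };

            // Attach the token as "Authorization: Bearer <token>" on every request.
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            // client.Get(...) in the post comes from an older HttpClient preview;
            // the shipped HttpClient exposes GetAsync instead.
            HttpResponseMessage response = await client.GetAsync("identity");
            response.EnsureSuccessStatusCode();
        }

        static string GetTokenSomehow() => "dummy-token"; // placeholder for the real token request
    }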

    Read the article

  • Force SSL using 301 Redirect on IIS7 gets 401.1 Error

    - by user2879305
    I've got a site that is using an Execute URL in the 403.4 error page slot that calls a page named forcessl.aspx. Here's the contents of the file:

        strWork = Replace(strQUERY_STRING, "http", "https")
        strWork = Replace(strWork, "403;", "")
        strWork = Replace(strWork, "80", "")
        strSecureURL = strWork
        Response.Write(strSecureURL)
        Response.Redirect(strSecureURL)
        Catch ex As Exception
        End Try
        End If
    %>

    This particular site gets a 401.1 error if https:// is not added to the URL. I have several other sites using the same method that work fine, and this one mirrors those in all ways that I can tell (folder permissions, etc.). This new site is just a subdomain of the same domain that the other sites are using. The main domain has a wildcard SSL cert. What else should I check?
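    For comparison, here is a minimal C# sketch of the same forcessl idea (not the poster's actual file); it assumes the usual "403;http://host:80/path" query-string format that IIS custom errors pass along, rewrites the scheme to https and redirects.

    using System;
    using System.Web.UI;

    public partial class ForceSsl : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // IIS custom errors pass the original URL in the query string,
            // e.g. "403;http://www.example.com:80/somepage.aspx" (assumed format).
            string raw = Request.ServerVariables["QUERY_STRING"] ?? string.Empty;

            string work = raw.Replace("403;", string.Empty)
                             .Replace(":80", string.Empty)
                             .Replace("http://", "https://");

            if (work.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
            {
                Response.Redirect(work, endResponse: true);
            }
        }
    }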

    Read the article

  • Keep website and webservices warm with zero coding

    - by oazabir
    If you want to keep your websites or webservices warm and save users from seeing the long warm-up time after an application pool recycle, an IIS restart, a new code deployment or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the site and services and keep them warm. Here's how:

    First get tinyget from here. Download and install the IIS 6.0 Resource Kit on some PC. Then copy tinyget.exe from "c:\program files…\IIS 6.0 ResourceKit\Tools\tinyget" to the server where your IIS 6.0 or IIS 7 is running.

    Then create a batch file that will hit the pages and webservices. Something like this:

    SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
    "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
    "%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

    First I am hitting the homepage to keep the webpage warm. Then I am hitting the webservice URL with the ?WSDL parameter, which makes ASP.NET compile the service if it is not already compiled, walk through all the operations and reflect on them, thus loading all related DLLs into memory and reducing the warm-up time when the service is hit. Tinyget takes the server's name or IP in the -srv parameter and the actual URI in the -uri parameter. I have specified the HTTP response code to expect in the -status parameter. It ensures the site is alive and is returning an HTTP 200 code.

    Besides just warming up a site, you can do some load testing on the site. Tinyget can run in multiple threads and run loops to hit some URL. You can literally blow up a site with commands like this:

    "%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

    Tinyget is also pretty useful for running automated tests. You can record HTTP posts in a text file and then use it to make HTTP posts to some page. Then you can add a matching clause to check for a certain string in the output to ensure the correct response is given. Thus with some simple command line commands, you can warm up a site, do some transactions, validate that the site is giving the correct response, and run a load test to ensure the server is performing well. A very cheap way to get a lot done.
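    If tinyget isn't available, the same warm-up hit can be approximated with a few lines of C#; this is a minimal sketch using the URLs from the post, not a replacement for tinyget's threading and looping options.

    using System;
    using System.Net;

    class WarmUp
    {
        static void Main()
        {
            string[] urls =
            {
                "http://dropthings.omaralzabir.com/",
                "http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL"
            };

            foreach (string url in urls)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    // Fail loudly if the site does not answer with HTTP 200.
                    if (response.StatusCode != HttpStatusCode.OK)
                        throw new Exception(url + " returned " + response.StatusCode);

                    Console.WriteLine(url + " -> " + (int)response.StatusCode);
                }
            }
        }
    }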

    Read the article

  • UnicodeEncodeError when uploading files in Django admin

    - by Samuel Linde
    Note: I asked this question on StackOverflow, but I realize this might be a more proper place to ask this kind of question. I'm trying to upload a file called 'Testaråäö.txt' via the Django admin app. I'm running Django 1.3.1 with Gunicorn 0.13.4 and Nginx 0.7.6.7 on a Debian 6 server. Database is PostgreSQL 8.4.9. Other Unicode data is saved to the database with no problem, so I guess the problem must be with the filesystem somehow. I've set http { charset utf-8; } in my nginx.conf. LC_ALL and LANG is set to 'sv_SE.UTF-8'. Running 'locale' verifies this. I even tried setting LC_ALL and LANG in my nginx init script just to make sure locale is set properly. Here's the traceback: Traceback (most recent call last): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/handlers/base.py", line 111, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 307, in wrapper return self.admin_site.admin_view(view)(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 197, in inner return view(request, *args, **kwargs) File "/srv/django/letebo/app/cms/admin.py", line 81, in change_view return super(PageAdmin, self).change_view(request, obj_id) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 28, in _wrapper return bound_func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 24, in bound_func return func(self, *args2, **kwargs2) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/transaction.py", line 217, in inner res = func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 985, in change_view self.save_formset(request, form, formset, change=True) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 677, in save_formset formset.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 482, in save return self.save_existing_objects(commit) + self.save_new_objects(commit) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 613, in save_new_objects self.new_objects.append(self.save_new(form, commit=commit)) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 717, in save_new obj.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 460, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 504, in save_base self.save_base(cls=parent, origin=org, using=using) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 543, in save_base for f in meta.local_fields if not isinstance(f, 
AutoField)] File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 255, in pre_save file.save(file.name, file, save=False) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 92, in save self.name = self.storage.save(name, content) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 48, in save name = self.get_available_name(name) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 74, in get_available_name while self.exists(name): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 218, in exists return os.path.exists(self.path(name)) File "/srv/.virtualenvs/letebo/lib/python2.6/genericpath.py", line 18, in exists st = os.stat(path) UnicodeEncodeError: 'ascii' codec can't encode characters in position 52-54: ordinal not in range(128) I tried running Gunicorn with debugging turned on, and the file uploads without any problem at all. I suppose this must mean that the issue is with Nginx. Still beats me where to look, though. Here are the raw response headers from Gunicorn and Nginx, if it makes any sense: Gunicorn: HTTP/1.1 302 FOUND Server: gunicorn/0.13.4 Date: Thu, 09 Feb 2012 14:50:27 GMT Connection: close Transfer-Encoding: chunked Expires: Thu, 09 Feb 2012 14:50:27 GMT Vary: Cookie Last-Modified: Thu, 09 Feb 2012 14:50:27 GMT Location: http://my-server.se:8000/admin/cms/page/15/ Cache-Control: max-age=0 Content-Type: text/html; charset=utf-8 Set-Cookie: messages="yada yada yada"; Path=/ Nginx: HTTP/1.1 500 INTERNAL SERVER ERROR Server: nginx/0.7.67 Date: Thu, 09 Feb 2012 14:50:57 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding: chunked Connection: close Vary: Cookie 500 UPDATE: Both locale.getpreferredencoding() and sys.getfilesystemencoding() outputs 'UTF-8'. locale.getdefaultlocale() outputs ('sv_SE', 'UTF8'). This seem correct to me, so I'm still not sure why I keep getting these errors.

    Read the article

  • Making your WCF Web Apis to speak in multiple languages

    - by cibrax
    One of the key aspects of how the web works today is content negotiation. The idea of content negotiation is based on the fact that a single resource can have multiple representations, so user agents (or clients) and servers can work together to choose one of them. The HTTP specification defines several "Accept" headers that a client can use to negotiate content with a server, and among all those, there is one for restricting the set of natural languages that are preferred as a response to a request, "Accept-Language". For example, a client can specify "es" in this header to say that it prefers to receive the content in Spanish, or "en" for English. However, there are certain scenarios where the "Accept-Language" header is just not enough, and you might want to have a way to pass the "accepted" language as part of the resource URL as an extension. For example, "http://localhost/ProductCatalog/Products/1.es" returns all the descriptions for the product with id "1" in Spanish. This is useful for scenarios in which you want to embed the link somewhere, such as a document, an email or a page.

    Supporting both scenarios, the header and the URL extension, is really simple in the new WCF programming model. You only need to provide a processor implementation for either of them. Let's say I have a resource implementation as part of a product catalog I want to expose with the WCF Web APIs.

    [ServiceContract]
    [Export]
    public class ProductResource
    {
        IProductRepository repository;

        [ImportingConstructor]
        public ProductResource(IProductRepository repository)
        {
            this.repository = repository;
        }

        [WebGet(UriTemplate = "{id}")]
        public Product Get(string id, HttpResponseMessage response)
        {
            var product = repository.GetById(int.Parse(id));
            if (product == null)
            {
                response.StatusCode = HttpStatusCode.NotFound;
                response.Content = new StringContent(Messages.OrderNotFound);
            }

            return product;
        }
    }

    The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.CurrentCulture). Another option is to pass the desired culture as an additional argument to the method, so my processor implementation will handle both options. This method is also using an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example, Messages.resx contains "OrderNotFound": "Order Not Found", and Messages.es.resx contains "OrderNotFound": "No se encontro orden". The processor implementation below tackles the first scenario, in which the desired language is passed as part of the "Accept-Language" header.
    public class CultureProcessor : Processor<HttpRequestMessage, CultureInfo>
    {
        string defaultLanguage = null;

        public CultureProcessor(string defaultLanguage = "en")
        {
            this.defaultLanguage = defaultLanguage;
            this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage;
            this.OutArguments[0].Name = "culture";
        }

        public override ProcessorResult<CultureInfo> OnExecute(HttpRequestMessage request)
        {
            CultureInfo culture = null;
            if (request.Headers.AcceptLanguage.Count > 0)
            {
                var language = request.Headers.AcceptLanguage.First().Value;
                culture = new CultureInfo(language);
            }
            else
            {
                culture = new CultureInfo(defaultLanguage);
            }

            Thread.CurrentThread.CurrentCulture = culture;
            Messages.Culture = culture;

            return new ProcessorResult<CultureInfo> { Output = culture };
        }
    }

    As you can see, the processor initializes a new CultureInfo instance with the value provided in the "Accept-Language" header, and sets that instance on the current thread and on the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called "culture", making it possible to receive that argument in any method implementation.

    The following code shows the implementation of the processor for handling languages as URL extensions.

    public class CultureExtensionProcessor : Processor<HttpRequestMessage, Uri>
    {
        public CultureExtensionProcessor()
        {
            this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri;
        }

        public override ProcessorResult<Uri> OnExecute(HttpRequestMessage httpRequestMessage)
        {
            var requestUri = httpRequestMessage.RequestUri.OriginalString;
            var extensionPosition = requestUri.LastIndexOf(".");

            if (extensionPosition > -1)
            {
                var extension = requestUri.Substring(extensionPosition + 1);
                var query = httpRequestMessage.RequestUri.Query;

                requestUri = string.Format("{0}?{1}", requestUri.Substring(0, extensionPosition), query);

                var uri = new Uri(requestUri);

                httpRequestMessage.Headers.AcceptLanguage.Clear();
                httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));

                var result = new ProcessorResult<Uri>();
                result.Output = uri;

                return result;
            }

            return new ProcessorResult<Uri>();
        }
    }

    The last step is to inject both processors as part of the service configuration, as shown below:

    public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList<Processor> processors, MediaTypeProcessorMode mode)
    {
        processors.Insert(0, new CultureExtensionProcessor());
        processors.Add(new CultureProcessor());
    }

    Once you have configured the two processors in the pipeline, your service will start speaking different languages :).

    Note: URL extensions don't seem to be working in the current bits when you use a URL extension on a base address. As far as I could see, ASP.NET intercepts the request first and tries to route it to a registered ASP.NET HTTP handler with that extension. For example, "http://localhost/ProductCatalog/products.es" does not work, but "http://localhost/ProductCatalog/products/1.es" does.
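    Independently of the WCF Web API preview bits used above, the core idea - mapping an Accept-Language value to a CultureInfo and attaching it to the current thread - can be sketched with standard .NET APIs alone; this is an illustration, not part of the original pipeline.

    using System;
    using System.Globalization;
    using System.Threading;

    class AcceptLanguageDemo
    {
        static void Main()
        {
            // Pretend this value came from the request's Accept-Language header.
            string language = "es";

            CultureInfo culture;
            try
            {
                culture = new CultureInfo(language);
            }
            catch (CultureNotFoundException)
            {
                culture = new CultureInfo("en"); // fall back to a default language
            }

            // Attach the culture to the current thread so resource lookups
            // (e.g. a Messages.resx / Messages.es.resx pair) pick the right language.
            Thread.CurrentThread.CurrentCulture = culture;
            Thread.CurrentThread.CurrentUICulture = culture;

            Console.WriteLine(Thread.CurrentThread.CurrentUICulture.Name); // "es"
        }
    }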

    Read the article

  • SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

    - by Brian
    Oracle's SPARC T4-4 servers running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record 18,000 concurrent users while executing a PeopleSoft Payroll batch job of 500,000 employees in 43.32 minutes and maintaining online user response times at under 2 seconds. This world record is the first to run online and batch workloads concurrently. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 35% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices. This is the first three-tier mixed workload (online and batch) PeopleSoft benchmark that also processes the PeopleSoft payroll batch workload.

    Performance Landscape

    PeopleSoft HR Self-Service and Payroll Benchmark
    Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
    Users: 18,000 - Avg Response, Search: 0.944 sec - Avg Response, Save: 0.503 sec - Batch Time: 43.32 min - Streams: 64

    Configuration Summary

    Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 512 GB memory; 5 x 300 GB SAS internal disks; 1 x 100 GB and 2 x 300 GB internal SSDs; 2 x 10 GbE HBA; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32

    Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz; 256 GB memory; 3 x 300 GB SAS internal disks; Oracle Solaris 11 11/11; Oracle Database 11g Release 2

    Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 2 x 300 GB SAS internal disks; 1 x 100 GB internal SSD; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32

    Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz; 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)

    Benchmark Description

    This benchmark combines PeopleSoft HCM 9.1 HR Self-Service online and PeopleSoft Payroll batch workloads to run against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark where the database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators.
    The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing a paycheck, promoting and hiring employees, updating an employee profile and other typical HCM application transactions. All these transactions are well-defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response (search/save) time for all the transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously against the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two Oracle PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (1 core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency through use of the physical memory closest to the processors, and to offload I/O interrupt handling to the default set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads.

    See Also: Oracle PeopleSoft Benchmark White Papers (oracle.com) | SPARC T4-2 Server (oracle.com, OTN) | SPARC T4-4 Server (oracle.com, OTN) | PeopleSoft Enterprise Human Capital Management (oracle.com, OTN) | PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN) | Oracle Solaris (oracle.com, OTN) | Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement: Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.

    Read the article

  • Error pages in ASP.NET

    - by koevoeter
    In ASP.NET you can retrieve the last unhandled exception via:(HttpContext.Current.)Server.GetLastError() // Server object is available as a property in Page and UserControl context This obviously only works in the same roundtrip. If you want to retrieve this information in your error page, you got a problem because the error page is not returned in the same roundtrip. The server responds with a redirect response and a new request to the error page is automatically sent. A common workaround would be to store the exception in your Session state from the Application_Error event in Global.asax. From ASP.NET 3.5 you can configure the redirect mode for error pages: <customErrors mode="On" defaultRedirect="~/Error.aspx" redirectMode="ResponseRewrite" /> This way the redirect response is not sent and the error page is returned right away. That implies that the browser is not aware of a page change and cannot reflect it in the address bar, so your original URL is not replaced with the URL of the error page, which might be what you actually want…
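    As an aside, the Session-based workaround mentioned above can be sketched roughly as follows; the "LastError" key and the ErrorPage class name are illustrative choices, not anything prescribed by ASP.NET.

    using System;
    using System.Web;

    // Global.asax.cs - stash the exception before the redirect to the error page.
    public class Global : HttpApplication
    {
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            var session = HttpContext.Current != null ? HttpContext.Current.Session : null;

            // Session state may not be available for every error (e.g. errors raised
            // before session acquisition), so guard before storing.
            if (session != null && ex != null)
            {
                session["LastError"] = ex;
            }
        }
    }

    // Error.aspx.cs - read the stored exception back on the error page.
    public partial class ErrorPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            var ex = Session["LastError"] as Exception;
            if (ex != null)
            {
                // Show or log whatever detail is appropriate for the environment.
                Response.Write(Server.HtmlEncode(ex.Message));
            }
        }
    }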

    Read the article

  • JQGrdi PDF Export

    - by thanigai
    Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspxJQGrid PDF Export The aim of this article is to address the PDF export from client side grid frameworks. The solution is done using the ASP.Net MVC 4 and VisualStudio 2012. The article assumes the developer to have a fair amount of knowledge on ASP.Net MVC and C#. Tools Used Visual Studio 2012 ASP.Net MVC 4 Nuget Package Manager JQGrid  is one of the client grid framework built on top of the JQuery framework. It helps in building a beautiful grid with paging, sorting and exiting options. There are also other features available as extension plugins and developers can write their own if needed. You can download the JQgrid from the  JQGrid  homepage or as NUget package. I have given below the command to download the JQGrid through the package manager console. From the tools menu select “Library Package Manager” and then select “Package Manager Console”. I have given the screenshot below. This command will pull down the latest JQGrid package and adds them in the script folder. Once the script is downloaded and referenced in the project update the bundleconfig file to add the script reference in the pages. Bundleconfig can be found in the  App_Start  folder in the project structure. bundles .Add (newStyleBundle(“~/Content/jqgrid”).Include (“~/Content/ui.jqgrid.css”)); bundles.Add( newScriptBundle( “~/bundles/jquerygrid”) .Include( “~/Scripts/jqGrid/jquery.jqGrid*”)); Once added the config’s refer the bundles to the Views/Shared/LayoutPage.cshtml. Add the following lines to the head section of the page. @Styles.Render(“~/Content/jqgrid”) Add the following lines to the end of the page before html close tags. @Scripts.Render(“~/bundles/jquery”) @Scripts.Render(“~/bundles/jqueryui”) @Scripts.Render(“ ~/bundles/jquerygrid”)              That’s all to be done from the view perspective. Once these steps are done the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The index action will be the default action. We will add an argument for this index action. Let it be nullable bool. It’s just to mark the pdf request. In the Index.cshtml we will add a table tag with an id “ gridTable “. We will use this table for making the grid. Since JQGrid is an extension for the JQUery we will initialize the grid setting at the  script  section of the page. This script section is marked at the end of the page to improve performance. The script section is placed just below the bundle reference for JQuery and JQueryUI. This is the one of improvement factors from “ why slow” provided by yahoo. < tableid=“gridTable”class=“scroll”></ table> < inputtype=“button”value=“Export PDF”onclick=“exportPDF();“/>  @section scripts { <scripttype=“text/javascript”> $(document).ready(function(){$(“#gridTable”).jqGrid({datatype:“json”,url:‘@Url.Action(“GetCustomerDetails”)‘,mtype:‘GET’,colNames:["CustomerID","CustomerName","Location","PrimaryBusiness"],colModel:[{name:"CustomerID",width:40,index:"CustomerID",align:"center"},{name:"CustomerName",width:40,index:"CustomerName",align:"center"},{name:"Location",width:40,index:"Location",align:"center"},{name:"PrimaryBusiness",width:40,index:"PrimaryBusiness",align:"center"},],height:250,autowidth:true,sortorder:“asc”,rowNum:10,rowList:[5,10,15,20],sortname:“CustomerID”,viewrecords:true});});  function exportPDF (){ document . location = ‘ @ Url . 
Action ( “Index” ) ?pdf=true’ ; } </ script >  } The exportPDF methos just sets the document location to the Index action method with PDF Boolean as true just to mark for download PDF. An inmemory list collection is used for demo purpose. The  GetCustomerDetailsmethod is the server side action method that will provide the data as JSON list. We will see the method explanation below. [ HttpGet] publicJsonResultGetCustomerDetails(){ varresult=new { total=1, page=1, records=customerList.Count(), rows=( customerList.Select( e=>new { id=e.CustomerID, cell=newstring[]{ e.CustomerID.ToString(), e.CustomerName, e.Location, e.PrimaryBusiness}})) .ToArray()}; returnJson( result,  JsonRequestBehavior.AllowGet); }   JQGrid can understand the response data from server in certain format. The server method shown above is taking care of formatting the response so that JQGrid understand the data properly. The response data should contain totalpages, current page, full record count, rows of data with id and remaining columns as string array. The response is built using an anonymous object and will be sent as a MVC JsonResult. Since we are using HttpGet it’s better to mark the attribute as HttpGet and also the JSON requestbehavious as AllowGet. The inmemory list is initialized in the homecontroller constructor for reference. Public class HomeController : Controller{ private readonly Ilist < CustomerViewModel > customerList ; public HomeController (){ customerList=newList<CustomerViewModel>() { newCustomerViewModel{ CustomerID=100, CustomerName=“Sundar”, Location=“Chennai”, PrimaryBusiness=“Teacing”}, newCustomerViewModel{ CustomerID=101, CustomerName=“Sudhagar”, Location=“Chennai”, PrimaryBusiness=“Software”}, newCustomerViewModel{ CustomerID=102, CustomerName=“Thivagar”, Location=“China”, PrimaryBusiness=“SAP”}, }; }  publicActionResultIndex( bool?pdf){ if ( !pdf.HasValue){ returnView( customerList);} else{ stringfilePath=Server.MapPath( “Content”)  +“Sample.pdf”; ExportPDF( customerList,  new string[]{  “CustomerID”,  “CustomerName”,  “Location”,  “PrimaryBusiness” },  filePath); return File ( filePath ,  “application/pdf” , “list.pdf” ); }}   The index actionmethod has a Boolean argument named “pdf”. It’s used to indicate for PDF download. When the application starts this method is first hit for initial page request. For PDF operation a filename is generated and then sent to the  ExportPDF  method which will take care of generating the PDF from the datasource. The  ExportPDF method is listed below.  Private static void ExportPDF<TSource>(IList<TSource>customerList,string [] columns, string filePath){ FontheaderFont=FontFactory.GetFont( “Verdana”,  10,  Color.WHITE); Fontrowfont=FontFactory.GetFont( “Verdana”,  10,  Color.BLUE); Documentdocument=newDocument( PageSize.A4);  PdfWriter writer = PdfWriter . GetInstance ( document ,  new FileStream ( filePath ,  FileMode . OpenOrCreate )); document.Open(); PdfPTabletable=newPdfPTable( columns.Length); foreach ( varcolumnincolumns){ PdfPCellcell=newPdfPCell( newPhrase( column,  headerFont)); cell.BackgroundColor=Color.BLACK; table.AddCell( cell); }  foreach  ( var item in customerList ) { foreach ( varcolumnincolumns){ stringvalue=item.GetType() .GetProperty( column) .GetValue( item) .ToString(); PdfPCellcell5=newPdfPCell( newPhrase( value,  rowfont)); table.AddCell( cell5); } }  document.Add( table); document.Close(); }   iTextSharp is one of the pioneer in PDF export. It’s an opensource library readily available as NUget library. 
This command will pulldown latest available library. I am using the version 4.1.2.0. The latest version may have changed. There are three main things in this library. Document This is the document class which takes care of creating the document sheet with particular size. We have used A4 size. There is also an option to define the rectangle size. This document instance will be further used in next methods for reference. PdfWriter PdfWriter takes the filename and the document as the reference. This class enables the document class to generate the PDF content and save them in a file. Font Using the FONT class the developer can control the font features. Since I need a nice looking font I am giving the Verdana font. Following this PdfPTable and PdfPCell are used for generating the normal table layout. We have created two set of fonts for header and footer. Font headerFont=FontFactory .GetFont(“Verdana”, 10, Color .WHITE); Font rowfont=FontFactory .GetFont(“Verdana”, 10, Color .BLUE);   We are getting the header columns as string array. Columns argument array is looped and header is generated. We are using the headerfont for this purpose. PdfWriter writer=PdfWriter .GetInstance(document, newFileStream (filePath, FileMode.OpenOrCreate)); document.Open(); PdfPTabletable=newPdfPTable( columns.Length); foreach ( varcolumnincolumns){ PdfPCellcell=newPdfPCell( newPhrase( column,  headerFont)); cell.BackgroundColor=Color.BLACK; table.AddCell( cell); }   Then reflection is used to generate the row wise details and form the grid. foreach  (var item in customerList){ foreach ( varcolumnincolumns) { stringvalue=item.GetType() .GetProperty( column) .GetValue( item) .ToString(); PdfPCellcell5=newPdfPCell( newPhrase( value,  rowfont)); table.AddCell( cell5); } } document . Add ( table ); document . Close ();   Once the process id done the pdf table is added to the document and document is closed to write all the changes to the filepath given. Then the control moves to the controller which will take care of sending the response as a JSON result with a filename. If the file name is not given then the PDF will open in the same page otherwise a popup will open up asking whether to save the file or open file. Return File(filePath, “application/pdf”,“list.pdf”);   The final result screen is shown below. PDF file opened below to show the output. Conclusion: This is how the export pdf is done for JQGrid. The problem area that is addressed here is the clientside grid frameworks won’t support PDF’s export. In that time it’s better to have a fine grained control over the data and generated PDF. iTextSharp has helped us to achieve our goal.
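    As a condensed, self-contained reference for the iTextSharp pattern the article walks through (Document + PdfWriter + PdfPTable + PdfPCell), here is a minimal sketch; it mirrors the article's calls, but the file path and sample data are invented for illustration.

    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    class PdfExportSketch
    {
        static void Main()
        {
            string filePath = "sample.pdf"; // made-up path for this sketch
            string[] columns = { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" };
            string[][] rows =
            {
                new[] { "100", "Sundar",   "Chennai", "Teaching" },
                new[] { "101", "Sudhagar", "Chennai", "Software" }
            };

            Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
            Font rowFont    = FontFactory.GetFont("Verdana", 10, Color.BLUE);

            var document = new Document(PageSize.A4);
            PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.Create));
            document.Open();

            var table = new PdfPTable(columns.Length);

            // Header row: one black cell per column.
            foreach (var column in columns)
            {
                var cell = new PdfPCell(new Phrase(column, headerFont));
                cell.BackgroundColor = Color.BLACK;
                table.AddCell(cell);
            }

            // Data rows.
            foreach (var row in rows)
                foreach (var value in row)
                    table.AddCell(new PdfPCell(new Phrase(value, rowFont)));

            document.Add(table);
            document.Close();
        }
    }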

    Read the article

  • What's your advice on a potential legal suit? [closed]

    - by ohho
    I [xxx app developer] received an email from Apple that a developer [of yyy app] believes I am "infringing their copyright." Description of Issue: [xxx developer] copied my application (my application is [yyy]) feature by feature. Even their donation model is completely copied from my application. Their first release was significantly later than mine, which implies copying of the application rather than parallel development. I suffered significant financial losses because of their actions, in additional to promotion problems as many people are confused with their product. My advertising was based around the idea of a "free [yyy] application for an iPhone" and they have just taken that as a title for their application. I would appreciate if someone takes a look at their release schedule and compare it to my releases. Additionally, please take a look at their functionality and how it point by point copies the functionality of my older releases. I am asking Apple to remove their application from the App Store, and ban them from resubmitting it. Thank you for your time! [yyy developer], the developer of the [yyy] application. My response was: The code of [xxx] is written by myself, using Apple public API. The graphics elements are designed by myself. The user interface and app control are independently designed and different from other [similar type] apps (please judge yourself). In-app Purchase is iOS Apple standard API. iAd is Apple iOS standard API. I don't think features can be owned exclusively. In fact, my app comes with fewer features, as I prefer minimalist design. I don't think idea can be owned exclusively. Apple responded: Thank you for your response. Unfortunately, Apple cannot serve as arbiter for disputes among third parties. Please contact [yyy developer] directly regarding your actions. You can reach [yyy developer] through: [...]. We look forward to confirmation from both parties that this issue has been resolved. If this issue is not resolved shortly, Apple may be forced to pull your application(s) from the App Store. Then I sent my response above to [yyy developer]. [yyy developer] then asked me "to provide (my) legal address and contact details that (his) lawyer requires to file a copyright infringement suit." IMO, I don't think the [yyy developer]'s claim on "feature by feature" copy is valid. I have fewer features, completely different user interface design. However, I don't think I can afford a legal action for an app of so little financial return. So what's your advise on this? Should I just let Apple pull my app? Or is there any alternative I can consider? FYI ... UI of [xxx app]: and UI of [yyy app]:

    Read the article

  • Announcing: Oracle's Sun Flash Accelerator F80 PCIe Card

    - by uwes
    Ramp up your server performance with Oracle's Sun Flash Accelerator F80 PCIe Card! Oracle's Sun Flash Accelerator F80 PCIe Card accelerates I/O-starved applications and server performance by reducing storage latencies and increasing I/O throughput for greater productivity and business response. The Sun Flash Accelerator F80 PCIe Card offers the following:

    Helps servers and their applications run faster and more efficiently, while reducing power and space
    With 800 GB capacity, delivers 2x the capacity of the previous F40 Flash Card for less than half the $/GB
    Accelerates I/O-constrained databases with increased IOPS and consistent low-latency response times

    Current and planned server support includes: the F80 is currently supported in Oracle's SPARC T4-1, T4-2 and X4-2L servers. SPARC T5, M5, M6 and Fujitsu M10 server support is planned for December 2013 (preliminary only).

    Please read the Sales Bulletin on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.)

    For More Information Go To: Oracle.com Flash Page | Oracle Technology Network Flash Page

    Read the article

  • From Zero To Deployed Contest&ndash;Prizes Announced

    - by Robz / Fervent Coder
    Do you have what it takes to meet the challenge? We’ll make it worth it. You may have noticed at the end of my last post I threw down the community challenge to get from zero to deployed faster than me. The Challenge My time was 13:48 to be from zero to deployed. Beat my time and show it in a video response. The person with the best time by March 15th @ 11:59PM CST will receive a prize. Here are the links to the videos: #1 - http://www.youtube.com/watch?v=cZIUVfHWsbc #2 - http://www.youtube.com/watch?v=l7WluaXIya0 #3 - http://www.youtube.com/watch?v=IqPh7wbWsLc The Rules Let’s revisit those ground rules before I tell you what the prizes will be: Ground rules: .NET Application with a valid database connection Start from Zero Deployed with AppHarbor or an alternative A timer displayed in the video that runs during the entire process Video response published on YouTube or acceptable alternative Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th) The Prizes The prize package for the best time is:                      $50 Gift Card or equivalent – Provided by yours truly. AppHarbor $100 service credit – AppHarbor will provide a $100 credit for their services once they launch payments. Thank you to the folks at AppHarbor! ReSharper - Jetbrains will provide a FULL license of ReSharper Personal. This license is a $199 value. Thank you to the folks at Jetbrains! Telerik Ultimate Collection for .NET – Telerik will provide a license to pretty much every .NET tool they offer. This license is a $1999 value. A big thank you to the folks at Telerik!! This is a total value of $2348!!! The prize package for the person that has the most creative video(s) with a time better than mine (if there are at least 5 responses):           $20 Gift card or equivalent – Provided by this guy. AppHarbor $50 service credit – same deal as above. Thank you AppHarbor! Twilio T-Shirt - Twilio has donated a shirt and will ship your size to you (this may be subject to US residents only). This is a $25 value. Thank you to the folks at Twilio!

    Read the article

  • fsockopen() error : Network is unreachable port 43 in php [closed]

    - by hamid
    I've written some PHP code that looks up a domain (whois), but it fails. This is some of my code:

    function checkdomain($server, $domain) {
        global $response;
        $connection = fsockopen($server, 43);
        fputs($connection, "domain " . $domain . "\r\n");
        while (!feof($connection)) {
            $response .= fgets($connection, 4096);
        }
        fclose($connection);
    }

    checkdomain("whois.crsnic.net", "www.example.com");

    The code works on my localhost (Apache, PHP, MySQL, OS: Windows XP), but when I uploaded it to my host (Linux) it failed, and I always see the error message below:

    Warning: fsockopen() [function.fsockopen]: unable to connect to whois.crsnic.net:43 (Network is unreachable) in /home/hamid0011/public_html/whois/whois.php on line 37

    What should I do? Is this my host's problem, the whois server (but it works on localhost), or my code? TNX

    Read the article

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object -> MVC framework -> HTTP response object -> check the response is correct. Because this is all doable without any state or special tools (browser automation etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question: where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same testing project as my unit testing project?
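    For illustration only, an integration test of the kind described might look like the NUnit sketch below; FakeHttpRequest, FakeHttpResponse, MvcApplication and Process are hypothetical stand-ins for the framework's own types, not real APIs.

    using NUnit.Framework;

    [TestFixture]
    public class RoutingIntegrationTests
    {
        [Test]
        public void Get_root_returns_200_and_home_view()
        {
            // Arrange: build a fake request the framework can consume.
            var request = new FakeHttpRequest("GET", "/");     // hypothetical type
            var app = new MvcApplication();                    // hypothetical framework entry point

            // Act: push the request through the whole pipeline.
            FakeHttpResponse response = app.Process(request);  // hypothetical method

            // Assert: check the response the pipeline produced.
            Assert.AreEqual(200, response.StatusCode);
            StringAssert.Contains("<title>Home</title>", response.Body);
        }
    }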

    Read the article

  • Problem with Bash script: 'declare: not found'

    - by Ashfame
    I had a script which was running fine, but when I ran it today it says declare: not found. I am using the bash shell and the path at the start of the script is correct. The two flagged lines in my script are as follows:

    declare -a RESPONSE
    RESPONSE=($RESULT)

    It also says ( is unexpected, but I guess that is coming up because of the first error. It's worth mentioning that when I type declare directly it works fine; declare | grep USER shows:

    USER=ashfame
    USERNAME=ashfame
    values="$SVN_BASH_USERNAME";

    So, what's wrong here?

    Read the article

  • Azure Mobile Services: available modules

    - by svdoever
    Azure Mobile Services has documented a set of objects available in your Azure Mobile Services server side scripts at their documentation page Mobile Services server script reference. Although the documented list is a nice list of objects for the common things you want to do, it will be sooner than later that you will look for more functionality to be included in your script, especially with the new provided feature that you can now create your custom API’s. If you use GIT it is now possible to add any NPM module (node package manager module, say the NuGet of the node world), but why include a module if it is already available out of the box. And you can only use GIT with Azure Mobile Services if you are an administrator on your Azure Mobile Service, not if you are a co-administrator (will be solved in the future). Until now I did some trial and error experimentation to test if a certain module was available. This is easiest to do as follows:   Create a custom API, for example named experiment. In this API use the following code: exports.get = function (request, response) { var module = "nonexistingmodule"; var m = require(module); response.send(200, "Module '%s' found.", module); }; You can now test your service with the following request in your browser: https://yourservice.azure-mobile.net/api/experiment If you get the result: {"code":500,"error":"Error: Internal Server Error"} you know that the module does not exist. In your logs you will find the following error: Error in script '/api/experiment.json'. Error: Cannot find module 'nonexistingmodule' [external code] atC:\DWASFiles\Sites\yourservice\VirtualDirectory0\site\wwwroot\App_Data\config\scripts\api\experiment.js:3:13[external code] If you require an existing (undocumented) module like the OAuth module in the following code, you will get success as a result: exports.get = function (request, response) { var module = "oauth"; var m = require(module); response.send(200, "Module '" + module + "' found."); }; If we look at the standard node.js documentation we see an extensive list of modules that can be used from your code. If we look at the list of files available in the Azure Mobile Services platform as documented in the blog post Azure Mobile Services: what files does it consist of? we see a folder node_modules with many more modules are used to build the Azure Mobile Services functionality on, but that can also be utilized from your server side node script code: apn - An interface to the Apple Push Notification service for Node.js. dpush - Send push notifications to Android devices using GCM. mpns - A Node.js interface to the Microsoft Push Notification Service (MPNS) for Windows Phone. wns - Send push notifications to Windows 8 devices using WNS. pusher - Node library for the Pusher server API (see also: http://pusher.com/) azure - Windows Azure Client Library for node. express - Sinatra inspired web development framework. oauth - Library for interacting with OAuth 1.0, 1.0A, 2 and Echo. Provides simplified client access and allows for construction of more complex apis and OAuth providers. request - Simplified HTTP request client. sax - An evented streaming XML parser in JavaScript sendgrid - A NodeJS implementation of the SendGrid Api. sqlserver – In node repository known as msnodesql - Microsoft Driver for Node.js for SQL Server. tripwire - Break out from scripts blocking node.js event loop. underscore - JavaScript's functional programming helper library. 
underscore.string - String manipulation extensions for Underscore.js javascript library. xml2js - Simple XML to JavaScript object converter. xmlbuilder - An XML builder for node.js. As stated before, many of these modules are used to provide the functionality of Azure Mobile Services platform, and in general should not be used directly. On the other hand, I needed OAuth badly to authenticate to the new v1.1 services of Twitter, and was very happy that a require('oauth') and a few lines of code did the job. Based on the above modules and a lot of code in the other javascript files in the Azure Mobile Services platform a set of global objects is provided that can be used from your server side node.js script code. In future blog posts I will go into more details with respect to how this code is built-up, all starting at the node.js express entry point app.js.

    Read the article

  • Podcast: Dell Perot Systems Relies on Oracle In-Memory Database Cache

    - by john.brust
    Recently we spoke with Bill Binko, Technology Consultant at Dell Perot Systems, about a high-volume web-based content delivery system they implemented for a client with Oracle In-Memory Database Cache. Their client needed to respond to ~1 billion hits (web requests) per day, but hadn't been able to support this load. Oracle In-Memory Database Cache allowed multiple, complicated queries to take place without ever hitting the disk, providing sub-millisecond response times and the ability to manage much higher volumes of data.

    Old system: old SQL Server database, over 300 servers, difficult to maintain.
    New system: one Oracle Database 11g instance, multiple Oracle RAC nodes, backed up by Oracle Data Guard, and Oracle In-Memory Database Cache to cut query response times by 10x.

    Listen to the podcast.

    Read the article

  • Scan problem -Fujitu Scansnap S1300i

    - by user214312
    I'm trying to install a ScanSnap S1300i scanner on Ubuntu 13.10. It is still not working. My question: with a "sudo scanimage -L" command, I receive the response

    device `hpaio:/net/Photosmart_C5100_series?zc=HPBA915D' is a Hewlett-Packard Photosmart_C5100_series all-in-one
    device `epjitsu:libusb:002:011' is a FUJITSU ScanSnap S1300i scanner

    while with a "scanimage -L" command I receive the response

    device `hpaio:/net/Photosmart_C5100_series?zc=HPBA915D' is a Hewlett-Packard Photosmart_C5100_series all-in-one

    What can go wrong? This is probably a part of the problem.

    Update: "simple-scan" does not work while "sudo simple-scan" works.

    Thanks.

    Read the article

  • what is need for a handler in general

    - by nish
    I have been searching for a definition of "handler". The basics I've understood so far: "A handler is a piece of code that is called when something happens, and usually takes some action, like generating a response" (from http://stackoverflow.com/questions/3246200/what-is-an-handler). But that could also describe a trigger or a callback. More specifically, an event handler at a low level often works by polling a device and waiting for a hardware response. So, what is the specific role of a handler (what makes it distinct from a trigger, a callback or any other such function)? Do all handlers have a similar role (event handler, file handler, exception handler, error handler)?
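    As a small illustration of the most common .NET sense of the word - a method registered with an event and invoked only when that event fires - here is a minimal C# sketch (my own example, not from the question):

    using System;

    class Downloader
    {
        // The event a caller can subscribe a handler to.
        public event Action<string> Completed;

        public void Run()
        {
            // ... do the work, then notify whoever registered a handler.
            if (Completed != null)
                Completed("42 KB downloaded");
        }
    }

    class Program
    {
        static void Main()
        {
            var d = new Downloader();

            // This lambda is the event handler: it is only called
            // when the Completed event is raised.
            d.Completed += message => Console.WriteLine("Done: " + message);

            d.Run();
        }
    }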

    Read the article

  • Comunicate NodeJS and PHP

    - by Zenth
    I need ideas to solve this: I have an entire website in PHP (5.2) on a shared PHP server; I can only use Apache+PHP, CGI and NodeJS - no memcached, Redis or other software. And I need to communicate between the PHP and the NodeJS script. My first approach is to use a socket connection: create a socket listener in NodeJS, connect to it with PHP, then send commands, wait for the response, and close the connection (and end the PHP script). For the other direction, I can call the PHP script via an HTTP request, or using sockets again? The problem with using sockets from Node to PHP is that I CAN'T leave a PHP script running with set_time_limit(0) because of the fuc... server, so I need to "call" PHP another way. NodeJS and Apache + PHP are on the same machine, and I need the code to have the fastest response time (sockets better than web calls). Better ideas or other solutions? Thanks!

    Read the article

  • How to enable SSIS as data source type on SQL Server Reporing Services SSRS 2008 R2

    When you create a data source in SSRS 2008 R2 (Nov CTP), you won't be able to get SSIS listed as a data source type. Therefore, applications that are already using it as a data source, or that require it as a data source, get stuck. Let's learn how to enable SSIS and get it listed back as a data source type in SSRS 2008 R2.

    Read the article
