Search Results

Search found 259 results on 11 pages for 'graceful degradation'.

Page 6/11 | < Previous Page | 2 3 4 5 6 7 8 9 10 11  | Next Page >

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that. First question: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and search engine friendliness, but I don't believe these are an issue for this particular app.) Second question: What's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I were doing server-side generation of HTML, it would be easy to have my web framework generate different development versus release HTML that includes either the multiple verbose files or the concatenated, minified code. But given that I'm only using static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.) In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?
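
    In the absence of a standard tool, one workable pattern is to fence the development tags between marker comments and have a small build script concatenate the files and swap the block for the release tags. The sketch below assumes that convention; the marker names, paths, and the script itself are illustrative, not an established solution:

        # build.py -- minimal sketch: produce a release copy of index.html.
        # Assumes the dev script tags sit between <!-- BUILD:JS --> markers
        # (a made-up convention for this example).
        import re

        RELEASE_TAGS = (
            '<script src="http://ajax.googleapis.com/ajax/libs/jquery/'
            '1.8.2/jquery.min.js"></script>\n'
            '<script src="js/entire_app.min.js"></script>')

        html = open('index.html').read()
        block = re.search(r'<!-- BUILD:JS -->(.*?)<!-- /BUILD:JS -->',
                          html, re.DOTALL).group(1)

        # Concatenate the app's own files (the CDN serves jQuery in release).
        paths = [p for p in re.findall(r'<script src="([^"]+)"></script>', block)
                 if 'vendors' not in p]
        with open('js/entire_app.js', 'w') as out:
            for p in paths:
                out.write(open(p).read() + '\n')
        # A real build would minify js/entire_app.js here.

        release = re.sub(r'<!-- BUILD:JS -->.*?<!-- /BUILD:JS -->',
                         RELEASE_TAGS, html, flags=re.DOTALL)
        open('release_index.html', 'w').write(release)

    Asset pipelines such as Sprockets solve the same problem for server-backed apps; for pure static HTML, a script along these lines is often enough.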

    Read the article

  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused. The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation?

    For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, although the site generally seemed to work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. OTOH, I wonder how many users have been alienated by this approach.

    We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? I mean, like really, not figuratively or presumptuously.

    Read the article

  • How to Kill an Alternate X Session via CLI

    - by L. D. James
    Can someone tell me how to remove dormant X sessions? This question is similar to "Logging out other users from the command line", but more specific to controlling X displays, which I find hard to kill. I used the command who -u to get the sessions of the other screens:

        $ who -u
        user1  :0      2014-08-18 12:08   ?      2891 (:0)
        user1  pts/26  2014-08-18 16:11  17:18   3984 (:0)
        user2  :1      2014-08-18 18:21   ?     25745 (:1)
        user1  pts/27  2014-08-18 23:10  00:27   3984 (:0)
        user1  pts/32  2014-08-18 23:10  10:42   3984 (:0)
        user1  pts/46  2014-08-18 23:14  00:04   3984 (:0)
        user1  pts/48  2014-08-19 04:10    .     3984 (:0)

    kill -9 25745 doesn't appear to do anything. I have a workshop where a number of users will use the computer under their own login. After the workshop is over, there are a number of logins that are left open. I would prefer to kill the open sessions rather than try to log into each user's screen. Again, this question isn't just about logging users out; I'm also hoping to get clarity on killing/removing stuck processes that are hard to kill.

    New info: while still pondering how to kill the process, I wrote the following script, which did it:

        #!/bin/bash
        results=1
        while [[ $results > 0 ]]
        do
            sudo kill -9 25745
            results=$?
            echo -ne "Response:$results..."
            sleep 20
        done

    After a graceful waiting period, if there isn't a better answer I'll mark this as answered with this resolution. This may resolve problems I have had with other stuck processes in the past.

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from inputting incorrect data, we don't want users to provide free-text information but instead choose from predefined values as much as possible. We believe there are two ways of providing those values: use an API to an external service provider, or create your own local database.

    APIs. Some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros:
    - accuracy and completeness of data.
    - no maintenance related to updates of the data, as this is taken care of by the API provider.
    - easier/faster to get started (no need to create a local database, just implement the API).
    Cons:
    - degradation of performance when there are availability issues with the external API.
    - outages due to changes to the external API (until your code is updated to reflect those changes).
    - lock-in with the external provider.

    Local database. Some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros:
    - no external dependency: improved stability and performance.
    Cons:
    - more work to get started (you need to create the database and the code to interact with it).
    - risk of inaccurate/incomplete data, either initially or over time.
    - more maintenance work to keep the database up to date.

    Assuming the depth of information requested from users is as follows:
    - country: interested in the value; also used to narrow down the list of regions.
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities.
    - city: interested in the value (which can be used to work out the related region should we need regional statistics).
    - address: interested in the value, although OPTIONAL.

    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
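
    To make the local-database narrowing flow (country → regions → cities) concrete, here is a minimal sketch. The schema, column names, and data layout are assumptions for illustration; they do not correspond to the GeoNames or GeoLite file formats:

        import sqlite3

        conn = sqlite3.connect('locations.db')
        cur = conn.cursor()

        # Hypothetical denormalized schema: one row per city.
        cur.execute("""CREATE TABLE IF NOT EXISTS cities (
                           country_code TEXT,   -- e.g. 'US'
                           region       TEXT,   -- state, county, ...
                           city         TEXT)""")

        def regions(country_code):
            """Values for the region dropdown, narrowed by country."""
            cur.execute("SELECT DISTINCT region FROM cities "
                        "WHERE country_code = ? ORDER BY region",
                        (country_code,))
            return [row[0] for row in cur.fetchall()]

        def cities(country_code, region):
            """Values for the city dropdown, narrowed by region."""
            cur.execute("SELECT city FROM cities "
                        "WHERE country_code = ? AND region = ? "
                        "ORDER BY city", (country_code, region))
            return [row[0] for row in cur.fetchall()]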

    Read the article

  • What calls trigger a new batch?

    - by sebf
    I am finding that my project is starting to show performance degradation, and I need to optimize it. The answer to my previous question and this presentation from NVIDIA have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear that I need to know in order to optimize my drawing. Specifically, which calls mark the boundary between batches? I know that any state change causes a new batch, so that includes:
    - render state changes
    - buffer changes
    - shader changes
    - render target changes
    Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch, even if I issue the same call twice with no state changes, or call it once on one part of the buffer, then again on another? If I update a buffer, but don't change the bindings, is that a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works - surely to do this you would need to change the buffer bindings, and that would count as a state change? (Or is it a case of you do, but it saves 16 calls, so it's OK?)

    Read the article

  • Oracle Fusion Applications Design Patterns Now Available For Developers

    - by ultan o'broin
    The Oracle Fusion Applications user experience design patterns are published! These new, reusable usability solutions and best practices, which join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. The design patterns are based on Oracle ADF components and are easily implemented in Oracle JDeveloper. These Oracle Fusion Applications UX design patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when: tailoring an Oracle Fusion application; creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road; or designing exciting, new, highly usable applications in the cloud or on-premise. Because the patterns and guidelines are based on the Oracle Application Development Framework (ADF) components and are proven with real users in the Applications UX usability labs, you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What's the best way to get started? We've made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you're on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments!

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable, failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository, you need a specific configuration in place to support such a setup. This post explains the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS).

    In the previous CMSDK version, 9.0.4.2, a RAC-enabled connect string looked like this:

        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
          (LOAD_BALANCE = NO)
          (FAILOVER = ON)
          (CONNECT_DATA =
            (SERVICE_NAME = rac)
            (failover_mode = (type=select)(method=basic))))

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation was introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. XA affinity is supported at the global transaction ID level. The new feature is called WebLogic Active GridLink for RAC, which is implemented as the GridLink data source within WebLogic Server.

    This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle Database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster.

    The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document, which describes in detail how to create a GridLink data source in WLS.

    Read the article

  • So what is Active GridLink for RAC?

    - by Ruma Sanyal
    I had referred to Active GridLink for RAC in my blog yesterday and have since gotten several questions on this topic, so I decided to revisit Active GridLink. With the release of version 11g, Oracle WebLogic Server started to provide strong support for the Real Application Clusters (RAC) features in Oracle Database 11g, minimizing database access time while allowing transparent access to rich pooling management functions that maximize both connection performance and availability. WebLogic is the only application server in the marketplace which has been fully integrated and certified with Oracle Database RAC 11g without losing any rich functionality. Active GridLink provides Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and graceful shutdown of RAC instances. With this key foundation for deeper integration with Oracle RAC, this single data source implementation in Oracle WebLogic Server supports the full and unrestricted use of database services as the connection target for a data source. For more details, and to understand how our customer NEC leverages this capability, read the whitepapers on this topic. Get in-depth 'how-to' details from this YouTube video from our resident expert, Frances Zhao.

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA clusters.

    Test cases for each component - Oracle Application Server 10G

    General Application Server test cases

    This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.

    Test Case 1: Check if you can see the AS instances in the console
    Implementation: Log on to the AS Console and check whether you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result: All the instances in the AS cluster should be listed in the EM console. If the instances are not listed, check these files to see whether OPMN joined the cluster properly:
        $ORACLE_HOME\opmn\logs\opmn.log
        $ORACLE_HOME\opmn\logs\opmn.dbg
    If OPMN did not join the cluster properly, check the opmn.xml file to make sure the discovery multicast address and port are correct (see the OPMN documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall, then log on to the AS Console to see if the instance is listed as part of the cluster.

    Test Case 2: Check to see if you can start/stop each component
    Implementation: Check each OC4J component on each AS instance. Start and stop each and every component through the AS Console. Do that for each and every instance.
    Result: Each component should start and stop through the AS Console. You can also verify that a component started by running opmnctl status after logging onto each box associated with the cluster.

    Test Case 3: Add/modify a data-source entry through the AS Console on a remote AS instance (not on the instance where EM is physically running)
    Implementation: Pick an OC4J instance. Create a new data source through the AS Console. Optionally, modify an existing data source or connection pool.
    Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or modified) connection details and data source exist. If they do, then the AS Console has successfully updated a remote file and the MBeans are communicating correctly.

    Test Case 4: Start and stop AS instances using the opmnctl @cluster command
    Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result: Use opmnctl @cluster status to check the start and stop statuses.

    HTTP server test cases

    This section deals with use cases to test HTTP server failover scenarios.
    In these examples the HTTP server will be talking to the BPEL Console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole.

    Test Case 1: Shut down one of the HTTP servers while accessing the BPEL Console and see the request routed to the second HTTP server in the cluster
    Implementation: Access the BPELConsole. Check $ORACLE_HOME\Apache\Apache\logs\access_log for the timestamp and the URL that was accessed by the user; the entry will look like this:
        1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15
    After you have figured out which HTTP server this is running on, shut down that HTTP server using opmnctl stopproc - this is a graceful shutdown. Access the BPELConsole again (note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for steps). Check the access_log again for the timestamp and the URL that was accessed by the user, as above.
    Result: Even though you shut down one HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.

    Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers, so that the LBR routes the request to the surviving HTTP node - this simulates a network failure.

    Test Case 3: In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
    Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down. On Linux, use kill -9 <PID> to kill the HTTP server. Access the BPEL Console.
    Result: As you kill the HTTP server, OPMN will restart it. The restart may be so quick that the LBR may still route the request to the same server. One way to check whether the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL Console.

    BPEL test cases

    This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and BPEL failover.

    Test Case 1: Verify that jGroups has initialized correctly
    There is no real testing in this use case, just visual verification, by looking at log files, that jGroups has initialized correctly. Check the OPMN log for the BPEL container for all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups-related information during startup and steady-state operation.
    Soon after startup you should find log entries for UDP or TCP.

    Example jGroups log entries for UDP:

        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: sockets will use interface 144.25.142.172
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
        INFO: socket information:
        local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
        sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
        mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
        mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
        Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:1127
        -------------------------------------------------------

    Example jGroups log entries for TCP:

        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.172:7900
        Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.172:7900
        -------------------------------------------------------

    In the log below, "server socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the log above by IP address and port:

        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
        INFO: server socket created on 144.25.142.173:7901
        Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
        -------------------------------------------------------
        GMS: address is 144.25.142.173:7901
        -------------------------------------------------------
        Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
        INFO: created socket to 144.25.142.172:7900

    Result: By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and the jGroups channel is communicating.

    Test Case 2: Test connectivity between BPEL nodes
    Implementation: Test connections between the different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1, "How to Test Whether Multicast is Enabled on the Network."
    Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.

    Test Case 3: Test deployment of a BPEL suitcase to one BPEL node
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance, using ant, JDeveloper, or the BPEL Console. Then log on to the second BPEL Console to check whether the BPEL suitcase has been deployed there too.
    Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment is "deployed" to each node while only deploying to a single BPEL instance in the BPEL cluster.

    Test Case 4: Test whether the BPEL server fails over and all asynch processes are picked up by the secondary BPEL instance
    Implementation: Deploy two asynch processes: a ParentAsynch process which calls a ChildAsynch process with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynch process that loops, sleeps, or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server, call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one; wait until the first BPEL instance shuts down fully before starting the second. Log on to the BPEL Console and verify that the instances were picked up by the second BPEL node and completed.
    Result: The BPEL instances will fail over to the secondary node and complete the flow.

    ESB test cases

    This section covers the use cases involved in testing an ESB cluster. For this section, please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

  • Spring MVC Controller redirect using URL parameters instead of in response.

    - by predhme
    I am trying to implement RESTful URLs in my Spring MVC application. All is well except for handling form submissions; I need to redirect either back to the original form or to a "success" page.

        @Controller
        @RequestMapping("/form")
        public class MyController {

            @RequestMapping(method = RequestMethod.GET)
            public String setupForm() {
                // do my stuff
                return "myform";
            }

            @RequestMapping(method = RequestMethod.POST)
            public String processForm(ModelMap model) {
                // process form data
                model.addAttribute("notification", "Successfully did it!");
                return "redirect:/form";
            }
        }

    However, as I read in the Spring documentation, if you redirect, any model attributes will be put into the URL as parameters, and that doesn't work for me. What would be the most graceful way around this?

    Read the article

  • Twitter oauth authorization in a pop-up instead of in main browser window

    - by niyogi
    I feel incredibly stupid for even asking this, since the answer might already be under my nose, but here it goes: TweetMeme has a retweet Twitter widget that publishers can place on their blogs. When a user clicks on the widget, it pops open a window which allows the user to authenticate with Twitter and then retweet. This seems to use some special Twitter OAuth popup form factor - unless there is something fancier happening under the surface to authenticate the user. The pop-up window looks like this: http://twitpic.com/1kepcr. I'd rather handle authentication via a pop-up than send the user to a brand-new page (for the app I'm working on), and they seem to have the most graceful solution. Thoughts on how they did this?

    Read the article

  • Conditional OrderBy

    - by jeriley
    So, right now I've got a number of columns the user can sort by (Name, County, Active), and that's easy but messy. It looks something like this...

        Select Case e.SortExpression
            Case "Name"
                If (isDescending) Then
                    resultsList.OrderByDescending(Function(a) a.Name).ToList()
                Else
                    resultsList.OrderBy(Function(a) a.Name).ToList()
                End If
            Case "County"
                ' ... and so on

    What I would LIKE to do is something more... graceful, like this:

        Private Function SortThatList(ByVal listOfStuff As List(Of Stuff),
                                      ByVal isDescending As Boolean,
                                      ByVal expression As Func(Of Stuff)) As List(Of Stuff)
            If (isDescending) Then
                Return listOfStuff.OrderByDescending(expression)
            Else : Return listOfStuff.OrderBy(expression)
            End If
        End Function

    But it doesn't like the datatype (Of TKey)... I've tried Func(Of Stuff, Boolean) (I've got something in C# that works nicely like that) but can't seem to get this one to do what I want. Ideas? What's the magic syntax?

    Read the article

  • Apache/passenger/ree doesn't interpret .rb files

    - by Sergey
    I'm trying to get Apache + Passenger + REE to work. I think I did everything described here (except for setting up the Rails env - for now I want to run just pure Ruby): http://rvm.beginrescueend.com/integration/passenger/ But when I try to go to localhost/test.rb, it doesn't interpret that file and just downloads it. I don't know where I should look for mistakes, so here are a few files I think could be relevant.

    /var/log/apache2/error.log (these 2 lines keep repeating):

        [Mon May 31 23:12:47 2010] [notice] Graceful restart requested, doing restart
        [Mon May 31 23:12:48 2010] [notice] Apache/2.2.14 (Ubuntu) PHP/5.3.2-1ubuntu4.2 with Suhosin-Patch Phusion_Passenger/2.2.11 configured -- resuming normal operations

    /etc/apache2/httpd.conf:

        LoadModule passenger_module /home/sergey/.rvm/gems/ree-1.8.7-2010.01/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
        PassengerRoot /home/sergey/.rvm/gems/ree-1.8.7-2010.01/gems/passenger-2.2.11
        PassengerRuby /home/sergey/.rvm/bin/passenger_ruby

    /var/www/test.rb:

        puts "test"

    Read the article

  • Large file upload into WSS v3

    - by Rubens Farias
    I built a WSSv3 application which uploads files in small chunks; as each piece of data arrives, I temporarily keep it in a SQL 2005 image data type field, for performance reasons.** The problem comes when the upload ends: I need to move the data from SQL Server to the SharePoint document library through the WSSv3 object model. Right now, I can think of two approaches:

        SPFileCollection.Add(string, (byte[])reader[0]); // OutOfMemoryException

    and

        SPFile file = folder.Files.Add("filename", new byte[]{ });
        using (Stream stream = file.OpenBinaryStream())
        {
            // ... init vars and stuff ...
            while ((bytes = reader.GetBytes(0, offset, buffer, 0, BUFFER_SIZE)) > 0)
            {
                stream.Write(buffer, 0, (int)bytes); // Timeout issues
            }
            file.SaveBinary(stream);
        }

    Is there any other way to complete this task successfully?

    ** Performance reasons: if you try to write every chunk directly to SharePoint, you'll notice performance degradation as the file grows (100 MB and beyond).

    Read the article

  • 32-bit oracle 10g client to 64-bit oracle 10g server

    - by Dakshin
    Due to a third-party application's requirement, I may be forced to use the 32-bit client of Oracle 10gR2 on the application server to connect to a 64-bit Oracle 10gR2 DB server (10.2.0.4.0 - 64-bit; another box). The OS is SUSE Linux version 10 and the platform is x86. There are no problems connecting to the 64-bit DB server via the 32-bit client; I have tested this. Does this result in performance degradation? Does Oracle, or anyone else, have any recommendations about this kind of scenario? I searched the net without much gain. Please help. Thanks.

    Read the article

  • Java Array Comparison

    - by BlairHippo
    Working within Java, let's say I have two objects that, thanks to obj.getClass().isArray(), I know are both arrays. Let's further say that I want to compare those two arrays to each other -- possibly by using Arrays.equals. Is there a graceful way to do this without resorting to a big exhaustive if/else tree to figure out which flavor of Arrays.equals needs to be used? I'm looking for something that's less of an eyesore than this:

        if (obj1 instanceof byte[] && obj2 instanceof byte[]) {
            return Arrays.equals((byte[])obj1, (byte[])obj2);
        } else if (obj1 instanceof boolean[] && obj2 instanceof boolean[]) {
            ...

    Read the article

  • How do I access variables with hyphenated names in Smarty?

    - by abeger
    I've got a PHP page that parses an XML file with SimpleXML, then passes that object to a Smarty template. My problem is that the XML file has hyphens in its tag names, e.g. video-player. In PHP this is no problem; I just use $xml->{'video-player'} and everything's fine. Smarty, on the other hand, throws a fit when I try to use that syntax. The only solution I've come up with so far is to use a variable to store the name, e.g.:

        {assign var=name value="video-player"}
        {$xml->$name}

    But this isn't terribly graceful, to say the least. Is there another, better approach to referring to a hyphenated variable name in Smarty?

    Read the article

  • Approaches for memcached sessions

    - by Industrial
    Hi everybody, I was thinking about using memcached to store sessions instead of MySQL, which seemed like a good idea, at first. When it comes to the failover part of utilizing memcached servers, it's a bit worrying that my sessions will stop working if a memcached server goes offline. That will certainly affect my users. There are a few techniques we already utilize to reduce the impact of failures, including having a pool of servers available to compensate in the event of downtime, utilizing sharding/consistent hashing across the server pool, and so on. We would also do some sort of graceful degradation that tells the users that something has gone wrong and that they are welcome to log in again, in the event of them being kicked out due to a memcached server failure. So how do people generally deal with these issues when storing sessions on memcached servers?
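
    As a minimal sketch of the graceful-degradation idea described above - the store class here is a stand-in, not any particular memcached library's API, and the key scheme and messages are made up:

        # Sketch: degrade to a forced re-login when the session store fails.

        class FlakyStore(object):
            """Stand-in for a memcached client; raises when 'offline'."""
            def __init__(self):
                self.data, self.online = {}, True
            def get(self, key):
                if not self.online:
                    raise IOError('memcached server unreachable')
                return self.data.get(key)

        def load_session(store, session_id):
            """Return the session dict, or None if the user must log in again."""
            try:
                return store.get('session:' + session_id)
            except Exception:          # server down, timeout, ...
                return None            # degrade: treat the user as logged out

        def handle_request(store, session_id):
            session = load_session(store, session_id)
            if session is None:
                # Graceful degradation: apologize and point the user at the
                # login page instead of returning a server error.
                return 'Something went wrong on our end - please log in again.'
            return 'Hello, %s!' % session['username']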

    Read the article

  • How can I plot NaN values as a special color with imshow in matplotlib?

    - by Adam Fraser
    Example:

        import numpy as np
        import matplotlib.pyplot as plt

        f = plt.figure()
        ax = f.add_subplot(111)
        a = np.arange(25).reshape((5,5)).astype(float)
        a[3,:] = np.nan
        ax.imshow(a, interpolation='nearest')
        f.canvas.draw()

    The resultant image is unexpectedly all blue (the lowest color in the jet colormap). However, if I do the plotting like this:

        ax.imshow(a, interpolation='nearest', vmin=0, vmax=24)

    --then I get something better, but the NaN values are drawn the same color as vmin... Is there a graceful way that I can set NaNs to be drawn with a special color (e.g. gray or transparent)?
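
    One approach, sketched below: mask the invalid entries and set the colormap's "bad" color, which Matplotlib colormaps support via set_bad (gray is just an example choice here; alpha=0 would give transparency):

        import numpy as np
        import matplotlib.pyplot as plt
        import matplotlib.cm as cm

        a = np.arange(25).reshape((5, 5)).astype(float)
        a[3, :] = np.nan

        # Mask NaN/inf cells so imshow knows which entries are "bad".
        masked = np.ma.masked_invalid(a)

        cmap = cm.jet
        cmap.set_bad(color='gray')   # color used for masked (NaN) cells

        f = plt.figure()
        ax = f.add_subplot(111)
        ax.imshow(masked, interpolation='nearest', cmap=cmap, vmin=0, vmax=24)
        f.canvas.draw()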

    Read the article

  • Parametrize the WHERE clause?

    - by ControlFlow
    Hi, Stack Overflow! I need to write a stored procedure for SQL Server 2008 that performs a huge select query, and I need to filter its results by a filtering type specified via the procedure's parameters (a parameterized WHERE clause). I found some solutions like this:

        create table Foo(
            id bigint,
            code char,
            name nvarchar(max))
        go

        insert into Foo values (1,'a','aaa'), (2,'b','bbb'), (3,'c','ccc')
        go

        create procedure Bar
            @FilterType nvarchar(max),
            @FilterValue nvarchar(max)
        as
        begin
            select * from Foo as f
            where case @FilterType
                      when 'by_id' then f.id
                      when 'by_code' then f.code
                      when 'by_name' then f.name
                  end =
                  case @FilterType
                      when 'by_id' then cast(@FilterValue as bigint)
                      when 'by_code' then cast(@FilterValue as char)
                      when 'by_name' then @FilterValue
                  end
        end
        go

        exec Bar 'by_id', '1';
        exec Bar 'by_code', 'b';
        exec Bar 'by_name', 'ccc';

    But it doesn't work when the columns have different data types... It's possible to cast all the columns to nvarchar(max) and compare them as strings, but I think that would cause performance degradation... Is it possible to parameterize the WHERE clause in a stored procedure without using things like EXEC or sp_executesql (dynamic SQL, etc.)?

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe), and the second executable first checks if such a file exists; if it does not exist it checks again, and when it finds the file it goes on to read its contents. This is how information is transferred between the two executables. (The way the code is structured, the second executable succeeds on the first try.) Now I have to clean this code up, and I was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than using pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
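
    To make the difference concrete, here is a minimal sketch of the two patterns (the file name and poll interval are made up). The file version has to poll, pays for filesystem traffic, and can catch a partially written file; the pipe version simply blocks until data arrives:

        import os
        import subprocess
        import time

        # File-based handoff: the reader polls until the file appears.
        def read_via_file(path='handoff.dat', interval=0.1):
            while not os.path.exists(path):   # busy-wait: wasted wakeups
                time.sleep(interval)
            with open(path, 'rb') as f:       # may still see a partial write
                return f.read()

        # Pipe-based handoff: the reader blocks until the writer produces data.
        def read_via_pipe(cmd=('echo', 'hello')):
            proc = subprocess.Popen(list(cmd), stdout=subprocess.PIPE)
            data = proc.stdout.read()         # no polling, nothing on disk
            proc.wait()
            return data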

    Read the article

  • Python: using a regular expression to match one line of HTML

    - by skylarking
    This simple Python method I put together just checks to see if Tomcat is running on one of our servers:

        import urllib2
        import re
        import sys

        def tomcat_check():
            tomcat_status = urllib2.urlopen('http://10.1.1.20:7880')
            results = tomcat_status.read()
            pattern = re.compile('<body>Tomcat is running...</body>', re.M|re.DOTALL)
            q = pattern.search(results)
            if q == []:
                notify_us()
            else:
                print ("Tomcat appears to be running")
                sys.exit()

    If this line is not found:

        <body>Tomcat is running...</body>

    it calls notify_us(), which uses SMTP to send an email message to myself and another admin saying that Tomcat is no longer running on the server. I have not used the re module in Python before, so I am assuming there is a better way to do this. I am also open to a more graceful solution with Beautiful Soup, but I haven't used that either. Just trying to keep this as simple as possible...
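
    Two observations worth noting, shown in the sketch below: re.search returns None on no match (never []), so the q == [] branch above can never fire; and for a fixed marker string, a plain substring test avoids regular expressions entirely. notify_us() stands for the poster's existing SMTP helper; the URL is kept from the original:

        import urllib2

        def notify_us():
            pass   # the poster's existing SMTP alert, stubbed here

        def tomcat_check():
            try:
                results = urllib2.urlopen('http://10.1.1.20:7880').read()
            except urllib2.URLError:
                notify_us()        # Tomcat isn't even answering
                return
            # Fixed string, so a substring test is enough -- no regex needed.
            if 'Tomcat is running...' not in results:
                notify_us()
            else:
                print("Tomcat appears to be running")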

    Read the article

  • Checking COM Port Availability in C#

    - by Jim Fell
    Hello. My C# application populates a comboBox with the COM ports found on the system. I would like to mark the COM ports that are in use as such. I know that I can use try/catch blocks to attempt to open every COM port, but I was wondering if there is a more graceful way to do this - perhaps a WMI query? I am using Microsoft Visual C# 2008 Express Edition (.NET 2.0). Any thoughts or suggestions you may have would be appreciated. Thanks.

    Read the article

  • What to monitor on MSSQL Server

    - by user361434
    Hi all. I have been asked to monitor MSSQL Server (2005 & 2008) and am wondering what good metrics to look at are. I can access WMI counters but am slightly lost as to how much depth is going to be useful. Currently I have on my list:

    - user connections
    - logins per second
    - latch waits per second
    - total latch wait time
    - deadlocks per second
    - errors per second
    - log and data file sizes

    I am looking to monitor values that will indicate a degradation of performance on the machine or a potential serious issue. To this end I am also wondering what values for some of these things would be considered normal versus problematic. As I reckon this would probably be a really good question to have answered for the general community, I thought I'd court some of you DBA experts out there (I am certainly not one of them!). Apologies if it's a rather open-ended question. Ry

    Read the article

  • azure performance

    - by Dave K
    I've moved my app from a dedicated server to Azure (and SQL Azure) and have noticed substantial performance degradation. Obviously, not having the database and web server on the same piece of hardware accounts for much of it, but I'm curious what other people have found in migrating to Azure, and whether there is anything any of you would suggest I do to improve it. Right now I'm considering moving back to my dedicated server... So, in summary: are there any rules of thumb for this, existing research (I wasn't able to find much), or other pieces of advice on improving the performance of the app? Has anyone else found the same to be true and improved their site's performance in some way? It's built in C# on ASP.NET MVC 2. Thanks!

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11  | Next Page >