Search Results

Search found 7391 results on 296 pages for 'record locking'.


  • ColdFusion 8 Application Crashes Under Heavy Load

    - by KM01
    We have a CF8 app that runs for 20-25 minutes before crashing under heavy load (~1200 users). This load is generated by our load-testing tool: 1200 users ramped up in 5 minutes (the approximate behavior of our real users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g; the Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks involving sessions:

      "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"
      "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167"
      "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1"

    We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps and saw threads locking a monitor, e.g.:

      "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000]
         java.lang.Thread.State: RUNNABLE
          at java.util.HashMap.getEntry(HashMap.java:347)
          at java.util.HashMap.containsKey(HashMap.java:335)
          at java.util.HashSet.contains(HashSet.java:184)
          at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124)
          at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598)
          at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510)
          at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250)
          at coldfusion.util.FastHashtable.put(FastHashtable.java:43)
          - locked <0x6f7e1a78> (a coldfusion.runtime.Struct)
          at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027)
          at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117)
          at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377)

    As the last frame above shows, the traces referenced several CFMs and CFCs, and those lines contain cflock tags scoped to the "application". We (the dev team) then changed them to be scoped to a "name". After more load tests there is no locking going on and there are no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and none of those tiers is being taxed; we even watched the server stats with Server Monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or admin; I am just running the load tests, analyzing the reports and threads, sharing the results with the dev and admin teams, trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM
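
    For readers less familiar with that first dump, here is a minimal, language-agnostic sketch of the deadlock shape in Python (all names invented for illustration): two threads taking the same pair of locks in opposite orders, plus the two standard cures -- a single global acquisition order, or per-name locks so unrelated work stops contending, which is essentially what the cflock scope="application" to name= change does.

      import threading

      # Two coarse locks taken in opposite orders by two threads -- the same
      # shape as the jrpp-173 / scheduler-1 cycle in the thread dump above.
      session_table = threading.Lock()   # stands in for the shared Hashtable
      memory_stats  = threading.Lock()   # stands in for TopMemoryUsedSessions

      def request_worker():
          with session_table:
              with memory_stats:         # waits here while the scheduler...
                  pass

      def scheduler_worker():
          with memory_stats:
              with session_table:        # ...waits there: a classic deadlock
                  pass

      # Cure 1: always acquire the locks in one global order.
      # Cure 2: replace one application-wide lock with per-name locks so
      # unrelated requests never contend (the analogue of the cflock change).
      _locks, _registry_guard = {}, threading.Lock()

      def lock_for(name):
          with _registry_guard:
              return _locks.setdefault(name, threading.Lock())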

    Read the article

  • Can I use ext3 as a shared fs if I add a lock manager?

    - by edomaur
    I need a cluster filesystem for an iSCSI device. The problem is that the servers it is connected to generate data files which must be readable by every other server. Except for writing and deleting such files, I do not need a full locking scheme like OCFS2 or GFS2 provide. So, can I use a distributed lock manager (DLM) on top of an ext3 filesystem, or must I use a specialized cluster filesystem?
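
    As a point of reference for what "locking" does and does not buy here: plain advisory file locking, sketched below in Python, lives in a single host's kernel, and that per-host scope is exactly the gap a DLM exists to fill across nodes (the path is made up).

      import fcntl

      # Advisory locking coordinates processes on ONE host only. Two servers
      # attached to the same iSCSI LUN each keep their own kernel lock
      # table, so flock() alone cannot arbitrate between them -- that
      # cross-node arbitration is the job of a distributed lock manager.
      with open("/shared/datafile", "rb") as f:
          fcntl.flock(f, fcntl.LOCK_SH)   # shared lock while reading
          data = f.read()
          fcntl.flock(f, fcntl.LOCK_UN)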

    Read the article

  • Clicking sound on MacBook Pro 13" 2010 model when turning the laptop on its horizontal axis

    - by roosteronacid
    When I leave my MacBook Pro 13" level for a minute or two (sometimes I only need to keep it level for a few seconds) and then pick it up and turn it on its horizontal axis, I hear a single click coming from either the hard drive or the SuperDrive. Is the click I am hearing some sort of locking mechanism in the hard drive or the SuperDrive, and therefore nothing to worry about? Or is one of the two faulty?

    Read the article

  • Is there any way to execute something when closing the laptop's lid?

    - by Matias
    I'm wondering if there is any way to execute a program, run a command, or do anything else when closing the laptop's lid. My question aims to be generic, covering both Windows and Linux. Useful examples would be locking the laptop when the lid closes (Win+L on Windows), or kicking off expensive jobs such as an antivirus scan without having to start them manually before closing the lid.
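
    On the Linux side, one crude way to experiment is to watch the ACPI lid state and react to the transition. A rough sketch follows; the /proc path and the lock command vary by distro and are assumptions here (hooking acpid or the desktop power manager is the cleaner route), and the Windows equivalent lives in the power-button/lid policy settings rather than a script:

      import glob
      import subprocess
      import time

      # ACPI exposes the lid position on many (older) kernels; path varies.
      lid_files = glob.glob("/proc/acpi/button/lid/*/state")
      if not lid_files:
          raise SystemExit("no ACPI lid state file found")

      def lid_closed():
          with open(lid_files[0]) as f:
              return "closed" in f.read()

      was_closed = False
      while True:
          now_closed = lid_closed()
          if now_closed and not was_closed:
              # Swap in the locker of your choice (xscreensaver, etc.).
              subprocess.call(["gnome-screensaver-command", "--lock"])
          was_closed = now_closed
          time.sleep(1)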

    Read the article

  • VPN Router drops connections

    - by Kathleen
    We are using a Netgear FVS318v3 VPN firewall router. We have a DSL connection here at the office, and my users complain of the VPN connection dropping or locking up on their end. The latest firmware is installed, rebooting the router doesn't help, and no other internet connectivity issues are reported anywhere else (i.e., no other internet connection problems here or at their end, except when connecting to the VPN). Is there anything else I can check to pinpoint this problem? TIA!

    Read the article

  • Clients' auto-lock feature for inactivity timeout not working

    - by Swaminathan Shanmugam
    In our SBS 2003 domain environment, client PCs idle for the default period would lock automatically, and only Ctrl+Alt+Del plus the user's password would unlock them. About nine months ago all our client PCs joined a new SBS 2011 domain, and the auto-lock feature has not worked since then (everyone has been locking manually with Win+L). Clients have only now brought the issue to me. Please help me re-enable that option!
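
    Not an SBS-specific fix, but a quick way to check whether the policy ever reached a client: the inactivity auto-lock ultimately comes down to a handful of per-user registry values that a short script can read. A diagnostic sketch (run as the affected user):

      import winreg

      # The screensaver-driven auto-lock is controlled by these values:
      # ScreenSaveActive ("1" = on), ScreenSaveTimeOut (idle seconds), and
      # ScreenSaverIsSecure ("1" = require a password on resume).
      key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Desktop")
      for name in ("ScreenSaveActive", "ScreenSaveTimeOut", "ScreenSaverIsSecure"):
          try:
              value, _ = winreg.QueryValueEx(key, name)
              print(name, "=", value)
          except FileNotFoundError:
              print(name, "is not set")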

    Read the article

  • How do I remotely enable the firewall on Server 2008 to exclude specific IP addresses?

    - by Guy
    Previously I was working with Server 2003 and managed to lock myself out of the server (I was accessing it remotely) by enabling the firewall. I want to remotely enable the firewall on Server 2008 without locking myself out of the server (I access it via RDP), and then selectively add IP addresses for the firewall to block. Are there any step-by-step instructions on how to do this safely?
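
    The safe ordering is to add your own allow rule first and only then flip the firewall on, so the RDP session you are sitting in survives the switch. A sketch of that sequence driving netsh advfirewall (the built-in Server 2008 interface) from Python -- all addresses are placeholders:

      import subprocess

      MY_IP = "203.0.113.10"  # the address you administer from (example)

      # 1) Allow RDP from your own address BEFORE enabling the firewall.
      subprocess.check_call([
          "netsh", "advfirewall", "firewall", "add", "rule",
          "name=Admin RDP", "dir=in", "action=allow",
          "protocol=TCP", "localport=3389", f"remoteip={MY_IP}"])

      # 2) Explicitly block a specific troublesome address.
      subprocess.check_call([
          "netsh", "advfirewall", "firewall", "add", "rule",
          "name=Blocked host", "dir=in", "action=block",
          "remoteip=198.51.100.7"])

      # 3) Only now turn the firewall on for all profiles.
      subprocess.check_call(
          ["netsh", "advfirewall", "set", "allprofiles", "state", "on"])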

    Read the article

  • Windows XP: How to delete files that cannot be deleted?

    - by glenneroo
    I have a backup copy of my previous Documents and Settings folder which only contains my original user and, within that, two directories (Favorites and Local Settings) which are visible in a cmd shell. When I try to delete them, Windows gives me an error, and if I try to delete the whole Documents and Settings folder I receive a similar warning. I tried doing this in a cmd shell:

      attrib *.* -r -a -s -h /s

    But it did not help, nor did it return any errors or warnings. Unlocker 1.8.5 reports "No Locking handle found." Any ideas?
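
    If it is the file attributes rather than an open handle that is blocking the delete, the usual scripted approach is to clear the read-only bit on failure and retry; a Python sketch (the path is an example):

      import os
      import shutil
      import stat

      def clear_and_retry(func, path, exc_info):
          """rmtree error hook: drop the read-only attribute and retry."""
          os.chmod(path, stat.S_IWRITE)
          func(path)   # retry the unlink/rmdir that just failed

      shutil.rmtree(r"C:\backup\Documents and Settings",
                    onerror=clear_and_retry)
      # This does not help when another process holds an open handle --
      # that is the case tools like Unlocker are for.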

    Read the article

  • Are there any well-known anti-patterns in the field of system administration?

    - by ojblass
    I know a few common patterns that seem to bedevil nearly every project at some point in its life cycle:

      • Inability to take outages
      • Third-party components locking out upgrades
      • Non-uniform environments
      • Lack of monitoring and alerting
      • Missing redundancy
      • Lack of capacity
      • Poor change management
      • Too-liberal or too-tight access policies
      • Organizational changes that adversely blur infrastructure ownership

    I was hoping there is some well-articulated library of these anti-patterns summarized in a book or website. I am almost positive that many organizations are learning through trial-by-fire methods. If not, let's start one.

    Read the article

  • Is it safe to run two instances of svnserve on one repository, or only one?

    - by fredden
    We have two nodes running Heartbeat/DRBD, and one of the services we're using is Subversion. What I want to know is: is it safe to run svnserve on both nodes all the time, or should it only run on the active node? Does svnserve use file-level locking, or is it all in memory? What are the implications of running svnserve while its repositories are inaccessible? Please let me know if this isn't clear, and I'll try my best to rephrase/clarify. :)

    Read the article

  • Excel freezes when copying / cutting to paste elsewhere

    - by Barry
    When cutting/copying some cells to paste them into another sheet/page, Excel sometimes freezes/locks up and fades out, and the title bar says "(Not Responding)". Eventually I must click the 'X' to close the program. Windows offers to wait for the program to respond, but it never recovers – it does nothing until I finally close it, at which point it offers to recover my files etc. Is there an issue with memory here? What can I do to stop it locking up?

    Read the article

  • Compare Windows servers for patch/update/hotfix installs

    - by user12002221
    Are there any tools that can connect to Windows 2008 servers and produce a comparison of the installed patches/updates, showing what is installed on one server but not the other? This is to help isolate an issue we are seeing on a specific Windows server in a load-balanced setup: a certain performance/locking issue is mitigated whenever one of the servers is disabled. Please share any suggestions. Thanks in advance!
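
    Absent a dedicated comparison tool, the diff itself is easy to script: pull the hotfix list from each server and compare the sets. A sketch using wmic's QFE (Win32_QuickFixEngineering) query -- the host names are placeholders, and remote WMI access with admin rights is assumed:

      import subprocess

      def hotfixes(host):
          """Return the set of installed hotfix IDs reported by a server."""
          out = subprocess.check_output(
              ["wmic", f"/node:{host}", "qfe", "get", "HotFixID"], text=True)
          return {line.strip() for line in out.splitlines()[1:] if line.strip()}

      a, b = hotfixes("SERVER-A"), hotfixes("SERVER-B")
      print("Only on SERVER-A:", sorted(a - b))
      print("Only on SERVER-B:", sorted(b - a))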

    Read the article

  • Prevent being locked out [duplicate]

    - by Nick
    This question already has an answer here: "How do you test iptables rules to prevent remote lockout and check matches?" When you are reconfiguring iptables or SSH over an SSH connection, and the data center is thousands of kilometers away (so getting someone there to plug in a KVM is hard), what are some standard practices to prevent locking yourself out?
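
    One widely used pattern for this is a dead-man switch: save the known-good rules, apply the candidate rules, and roll back automatically unless you confirm within a timeout, so that a rule set which cuts off your SSH session undoes itself. A Unix-only Python sketch (file paths are examples; Debian's iptables-apply packages the same idea):

      import select
      import subprocess
      import sys

      # Save the rules that are known to work, then apply the candidates.
      subprocess.check_call("iptables-save > /tmp/rules.good", shell=True)
      subprocess.check_call("iptables-restore < /tmp/rules.new", shell=True)

      print("New rules applied. Press Enter within 60s to keep them.")
      ready, _, _ = select.select([sys.stdin], [], [], 60)
      if ready:
          print("Confirmed; keeping the new rules.")
      else:
          # No confirmation -- assume we are locked out and roll back.
          subprocess.check_call("iptables-restore < /tmp/rules.good", shell=True)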

    Read the article

  • SimpleXMLElement empty object

    - by Mike
    I am trying to parse an XML file using XMLReader, but although I get a value back from the <commission> node, for some reason I am also getting an empty SimpleXMLElement object. I don't know if it's something to do with the while loop, the switch, or something I missed in the parser setup. This is the XML file I am trying to read from; as you can see, only one result is returned:

      <?xml version="1.0" encoding="UTF-8"?>
      <cj-api>
        <commissions total-matched="1">
          <commission>
            <action-status>new</action-status>
            <action-type>lead</action-type>
            <aid>10730981</aid>
            <commission-id>1021015513</commission-id>
            <country></country>
            <event-date>2010-05-08T08:08:55-0700</event-date>
            <locking-date>2010-06-10</locking-date>
            <order-id>345007</order-id>
            <original>true</original>
            <original-action-id>787692438</original-action-id>
            <posting-date>2010-05-08T10:01:22-0700</posting-date>
            <website-id>3201921</website-id>
            <cid>2815954</cid>
            <advertiser-name>SPS EurosportBET</advertiser-name>
            <commission-amount>0</commission-amount>
            <order-discount>0</order-discount>
            <sid>0</sid>
            <sale-amount>0</sale-amount>
          </commission>
        </commissions>
      </cj-api>

    This is my parser:

      <?php
      // read $response (xml feed)
      $file = "datafeed.xml";
      $xml = new XMLReader;
      $xml->open($file);

      // loop to read in data
      while ($xml->read()) {
          switch ($xml->name) {
              // find the parent node for each commission payment
              case 'commission':
                  // expand the current node into a DOM tree, then wrap it
                  // in SimpleXML for convenient property access
                  $dom = new DomDocument();
                  $dom_node = $xml->expand();
                  $element = $dom->appendChild($dom_node);
                  $dom_string = $dom->saveXML($element);
                  $commission = new SimpleXMLElement($dom_string);

                  // read in data
                  $action_status      = $commission->{'action-status'};
                  $action_type        = $commission->{'action-type'};
                  $aid                = $commission->{'aid'};
                  $commission_id      = $commission->{'commission-id'};
                  $country            = $commission->{'country'};
                  $event_date         = $commission->{'event-date'};
                  $locking_date       = $commission->{'locking-date'};
                  $order_id           = $commission->{'order-id'};
                  $original           = $commission->{'original'};
                  $original_action_id = $commission->{'original_action-id'};
                  $posting_date       = $commission->{'posting-date'};
                  $website_id         = $commission->{'website-id'};
                  $cid                = $commission->{'cid'};
                  $advertiser_name    = $commission->{'advertiser-name'};
                  $commission_amount  = $commission->{'commission-amount'};
                  $order_discount     = $commission->{'order-discount'};
                  $sid                = $commission->{'sid'};
                  $sale_amount        = $commission->{'sale-amount'};

                  print_r($aid);
                  break;
          }
      }
      ?>

    The result is:

      SimpleXMLElement Object ( [0] => 10730981 )
      SimpleXMLElement Object ( )

    Why is it returning the second, empty object, and what do I need to do to correct it? Thanks.
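
    For comparison, the double-visit behavior of pull parsers is easy to see in a few lines of Python: the loop receives the commission element twice, once at its start tag (before any children exist) and once at its end tag. A guard on the event/node type is most likely what the XMLReader loop above is missing (in PHP terms, checking $xml->nodeType == XMLReader::ELEMENT before expanding).

      import io
      import xml.etree.ElementTree as ET

      # A pull parser reports each element twice when asked for both event
      # types: at the start tag (no children parsed yet) and at the end tag.
      doc = io.StringIO(
          "<commissions><commission><aid>10730981</aid></commission></commissions>")
      for event, elem in ET.iterparse(doc, events=("start", "end")):
          if elem.tag == "commission":
              print(event, elem.findtext("aid"))
      # prints "start None", then "end 10730981" -- filter on the event type
      # to process each commission exactly once.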

    Read the article

  • Oracle Data Mining a Star Schema: Telco Churn Case Study

    - by charlie.berger
    There is a complete and detailed Telco Churn case study "How to" blog series just posted by Ari Mozes, ODM Dev. Manager. In it, Ari provides detailed guidance on how to leverage various strengths of Oracle Data Mining, including the ability to:

      • mine star schemas, joining tables and views together to obtain a complete 360-degree view of a customer
      • combine transactional data, e.g. call detail record (CDR) data
      • define complex data transformation, model build, and model deploy analytical methodologies inside the database

    His blog is posted as a multi-part series. Below are some opening excerpts from the first three entries. This is an excellent resource for any novice-to-skilled data miner who wants to gain competitive advantage by mining their data inside the Oracle Database. Many thanks, Ari!

    Mining a Star Schema: Telco Churn Case Study (1 of 3)

    One of the strengths of Oracle Data Mining is the ability to mine star schemas with minimal effort. Star schemas are commonly used in relational databases, and they often contain rich data with interesting patterns. While dimension tables may contain interesting demographics, fact tables will often contain user behavior, such as phone usage or purchase patterns. Both of these aspects - demographics and usage patterns - can provide insight into behavior. Churn is a critical problem in the telecommunications industry, and companies go to great lengths to reduce the churn of their customer base. One case study describes a telecommunications scenario involving the understanding and identification of churn, where the underlying data is present in a star schema. That case study is a good example for demonstrating just how natural it is for Oracle Data Mining to analyze a star schema, so it will be used as the basis for this series of posts...

    Mining a Star Schema: Telco Churn Case Study (2 of 3)

    This post will follow the transformation steps as described in the case study, but will use Oracle SQL as the means for preparing data. Please see the previous post for background material, including links to the case study and to scripts that can be used to replicate the stages in these posts.

    1) Handling missing values for call data records

    The CDR_T table records the number of phone minutes used by a customer per month and per call type (tariff). For example, the table may contain one record corresponding to the number of peak (call type) minutes in January for a specific customer, and another record associated with international calls in March for the same customer. This table is likely to be fairly dense (most type-month combinations for a given customer will be present) due to the coarse level of aggregation, but there may be some missing values. Missing entries may occur for a number of reasons: the customer made no calls of a particular type in a particular month, the customer switched providers during the timeframe, or perhaps there is a data entry problem. In the first situation, the correct interpretation of a missing entry would be to assume that the number of minutes for the type-month combination is zero. In the other situations, it is not appropriate to assume zero, but rather to derive some representative value to replace the missing entries. The referenced case study takes the latter approach.
    The data is segmented by customer and call type, and within a given customer-call type combination, an average number of minutes is computed and used as a replacement value. In SQL, we need to generate additional rows for the missing entries and populate those rows with appropriate values. To generate the missing rows, Oracle's partition outer join feature is a perfect fit.

      select cust_id, cdre.tariff, cdre.month, mins
      from cdr_t cdr partition by (cust_id) right outer join
           (select distinct tariff, month from cdr_t) cdre
           on (cdr.month = cdre.month and cdr.tariff = cdre.tariff);

    ...

    Mining a Star Schema: Telco Churn Case Study (3 of 3)

    Now that the "difficult" work is complete - preparing the data - we can move on to building a predictive model to help identify and understand churn. The case study suggests that separate models be built for different customer segments (high, medium, low, and very low value customer groups). To reduce the data to a single segment, a filter can be applied:

      create or replace view churn_data_high as
      select * from churn_prep where value_band = 'HIGH';

    It is simple to take a quick look at the predictive aspects of the data on a univariate basis. While this does not capture the more complex multivariate effects as would occur with the full-blown data mining algorithms, it can give a quick feel for the predictive aspects of the data as well as validate the data preparation steps. Oracle Data Mining includes a predictive analytics package which enables quick analysis.

      begin
        dbms_predictive_analytics.explain(
          'churn_data_high', 'churn_m6', 'expl_churn_tab');
      end;
      /
      select * from expl_churn_tab where rank <= 5 order by rank;

      ATTRIBUTE_NAME       ATTRIBUTE_SUBNAME EXPLANATORY_VALUE RANK
      -------------------- ----------------- ----------------- ----
      LOS_BAND                                      .069167052    1
      MINS_PER_TARIFF_MON  PEAK-5                   .034881648    2
      REV_PER_MON          REV-5                    .034527798    3
      DROPPED_CALLS                                 .028110322    4
      MINS_PER_TARIFF_MON  PEAK-4                   .024698149    5

    From the above results, it is clear that some predictors do contain information to help identify churn (explanatory value > 0). The strongest univariate predictor of churn appears to be the customer's (binned) length of service. The second strongest churn indicator appears to be the number of peak minutes used in the most recent month. The subname column contains the interior piece of the DM_NESTED_NUMERICALS column described in the previous post. By using the object-relational approach, many related predictors are included within a single top-level column.

    NOTE: These are just excerpts. Click here to start reading the Oracle Data Mining a Star Schema: Telco Churn Case Study from the beginning.

    Read the article

  • Slides and Code from my Silverlight MVVM Talk at DevConnections

    - by dwahlin
    I had a great time at the DevConnections conference in Las Vegas this year, where Visual Studio 2010 and Silverlight 4 were launched. While at the conference I had the opportunity to give a full-day Silverlight workshop as well as four different talks, and met a lot of people developing applications in Silverlight. I also had a chance to appear on a live broadcast of Channel 9 with John Papa, Ward Bell and Shawn Wildermuth, record a video with Rick Strahl covering jQuery versus Silverlight, and record a few podcasts on Silverlight and ASP.NET MVC 2. It was a really busy four days, but I had a lot of fun chatting with people and hearing about the different business problems they were solving with ASP.NET and/or Silverlight. Thanks to everyone who attended my sessions and took the time to ask questions and stop by to talk one-on-one.

    One of the talks I gave covered the Model-View-ViewModel pattern and how it can be used to build architecturally sound applications. Topics covered in the talk included:

      • Understanding the MVVM pattern
      • Benefits of the MVVM pattern
      • Creating a ViewModel class
      • Implementing INotifyPropertyChanged in a ViewModelBase class
      • Binding a ViewModel declaratively in XAML
      • Binding a ViewModel with code
      • ICommand and ButtonBase commanding support in Silverlight 4
      • Using InvokeCommandBehavior to handle additional commanding needs
      • Working with ViewModels and sample data in Blend
      • Messaging support with EventBus classes, EventAggregator and Messenger
      • My personal take on code in a code-behind file (I'm all in favor of it when used appropriately for message boxes, child windows, animations, etc.)

    One of the samples I showed in the talk was intended to teach all of the concepts mentioned above while keeping things as simple as possible. The sample demonstrates quite a few things you can do with Silverlight and the MVVM pattern, so check it out and feel free to leave feedback about things you like, things you'd do differently, or anything else. MVVM is simply a pattern, not a way of life, so there are many different ways to implement it. If you're new to MVVM, check out the following resources. I wish this talk had been recorded (especially since my live and canned demos all worked :-)), but these will help get you going quickly:

      • Getting Started with the MVVM Pattern in Silverlight Applications
      • Model-View-ViewModel (MVVM) Explained
      • Laurent Bugnion's excellent talk at MIX10
      • Download sample code and slides from my DevConnections talk

    For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight, please visit http://www.thewahlingroup.com.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It’s no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it’s the responsibility of the person sending you the data. While that may be true, in practice it will rarely hold up. It doesn’t matter how many times you ask; you will get the data however they decide to provide it. So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example:

      • Invalid date values
      • Inappropriate characters
      • Wrong data
      • Values that exceed a pre-set threshold

    While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

    Tip 1: Build reusable data objects with embedded cleansing rules

    One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic, such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. For example, I can define a constraint on zip code and save the constraint rules as a shared atomic type called zip_type. The next time I get a different data source whose schema also contains a zip code field, I can simply apply the shared atomic type and the previously defined constraints will be automatically applied.

    Tip 2: Unlock the power of regular expressions in Semantic Types

    Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as constraints. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex, such as a street address. For example, I could define that a valid IP address should be made up of four numbers, each 0 to 255, separated by periods. So 192.168.23.123 might be a valid IP address, whereas 888.777.0.123 would not be. How can I account for this using regular expressions? A very simple regular expression that looks for any four groups of one to three digits separated by periods would be:

      ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$

    Alternatively, the following is an exact check for truly valid IP addresses as defined above:

      ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$

    In expressor, we would enter this regular expression as a constraint and select the corrective action 'Escalate', meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

    Tip 3: Email pattern expressions that might come in handy

    In the example schema that I am using, there's a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating them. This one is short and sweet:

      \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b

    (Source: http://www.regular-expressions.info/)

    This one is more specific about which characters are allowed:

      ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$

    (Source: http://regexlib.com/REDetails.aspx?regexp_id=26)

    Tip 4: Reject "dirty data" for analysis or further processing

    Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture rejected records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, configure the Write File operator in a similar way to the Read File operator; the key difference is that its schema needs to be derived from the upstream operator. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that is helpful toward identifying why each record was rejected. This makes diagnosing errors much easier!

    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job

    Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data for males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation. It also supports the use of conditional logic, and data can be rejected based on constraint violations. However, the best tip I can leave you with is to not constrain your solution design approach - expressor operators can be combined in many different ways to achieve the desired results. For example, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.

    I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
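
    Since these patterns are plain regular expressions, they are easy to sanity-check outside expressor before embedding them in a Semantic Type. A quick Python sketch reusing the two IP patterns from Tip 2:

      import re

      # The loose pattern accepts any four 1-3 digit groups; the strict one
      # enforces 0-255 per octet, exactly as quoted in Tip 2 above.
      loose = re.compile(r"^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$")
      octet = r"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])"
      strict = re.compile("^" + r"\.".join([octet] * 4) + "$")

      for ip in ("192.168.23.123", "888.777.0.123"):
          print(ip, bool(loose.match(ip)), bool(strict.match(ip)))
      # 192.168.23.123 -> True True; 888.777.0.123 -> True False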

    Read the article
