Search Results

Search found 8125 results on 325 pages for 'metadata cache'.


  • Error when I try to install the desktop integration features for OpenOffice

    - by PENG TENG
    peng@peng-ThinkPad-SL410:~$ cd '/home/peng/Downloads/en-US/DEBS/desktop-integration'
    peng@peng-ThinkPad-SL410:~/Downloads/en-US/DEBS/desktop-integration$ sudo dpkg -i *.deb
    (Reading database ... 357248 files and directories currently installed.)
    Unpacking openoffice.org-debian-menus (from openoffice.org3.4-debian-menus_3.4-9593_all.deb) ...
    dpkg: error processing openoffice.org3.4-debian-menus_3.4-9593_all.deb (--install):
     trying to overwrite '/usr/bin/soffice', which is also in package libreoffice-common 1:3.6.2~rc2-0ubuntu3
    /usr/bin/gtk-update-icon-cache
    gtk-update-icon-cache: Cache file created successfully.
    /usr/bin/gtk-update-icon-cache
    gtk-update-icon-cache: Cache file created successfully.
    Processing triggers for menu ...
    Processing triggers for hicolor-icon-theme ...
    Processing triggers for gnome-icon-theme ...
    Processing triggers for shared-mime-info ...
    Unknown media type in type 'all/all'
    Unknown media type in type 'all/allfiles'
    Unknown media type in type 'uri/mms'
    Unknown media type in type 'uri/mmst'
    Unknown media type in type 'uri/mmsu'
    Unknown media type in type 'uri/pnm'
    Unknown media type in type 'uri/rtspt'
    Unknown media type in type 'uri/rtspu'
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for desktop-file-utils ...
    Processing triggers for gnome-menus ...
    Errors were encountered while processing:
     openoffice.org3.4-debian-menus_3.4-9593_all.deb

    Can anyone solve the problem?
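    The key line is the conflict: the OpenOffice menu package tries to install /usr/bin/soffice, which the preinstalled libreoffice-common already owns. Two hedged ways out, with package names taken from the error above (removing LibreOffice first is the cleaner option):

        # remove the conflicting LibreOffice packages, then retry the install
        sudo apt-get remove libreoffice-common
        sudo dpkg -i *.deb

        # or, at your own risk, let dpkg overwrite the contested file
        sudo dpkg -i --force-overwrite openoffice.org3.4-debian-menus_3.4-9593_all.deb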

    Read the article

  • Inside the DLR – Invoking methods

    - by Simon Cooper
    So, we've looked at how a dynamic call is represented in a compiled assembly, and how the dynamic lookup is performed at runtime. The last piece of the puzzle is how the resolved method gets invoked, and that is the subject of this post.

    Invoking methods

    As discussed in my previous posts, doing a full lookup and bind at runtime each and every single time the callsite gets invoked would be far too slow to be usable. The results obtained from the callsite binder must be cached, along with a series of conditions to determine whether the cached result can be reused. So, firstly, how are the conditions represented? These conditions can be anything; they are determined entirely by the semantics of the language the binder is representing. The binder has to be able to return arbitrary code that is then executed to determine whether the conditions apply or not. Fortunately, .NET 4 has a neat way of representing arbitrary code that can be easily combined with other code - expression trees. All the callsite binder has to return is an expression (called a 'restriction') that evaluates to a boolean, returning true when the restriction passes (indicating the corresponding method invocation can be used) and false when it doesn't. If the bind result is also represented in an expression tree, these can be combined easily like so:

    if ([restriction is true]) { [invoke cached method] }

    Take my example from my previous post:

    public class ClassA {
        public static void TestDynamic() {
            CallDynamic(new ClassA(), 10);
            CallDynamic(new ClassA(), "foo");
        }
        public static void CallDynamic(dynamic d, object o) { d.Method(o); }
        public void Method(int i) {}
        public void Method(string s) {}
    }

    When the Method(int) method is first bound, along with an expression representing the result of the bind lookup, the C# binder will return the restrictions under which that bind can be reused. In this case, it can be reused if the types of the parameters are the same:

    if (thisArg.GetType() == typeof(ClassA) && arg1.GetType() == typeof(int)) {
        thisClassA.Method(i);
    }

    Caching callsite results

    So, now, it's up to the callsite to link these expressions returned from the binder together in such a way that it can determine which one from the many it has cached it should use. This caching logic is all located in the System.Dynamic.UpdateDelegates class. It'll help if you've got this type open in a decompiler to have a look yourself. For each callsite, there are 3 layers of caching involved:

    1. The last method invoked on the callsite.
    2. All methods that have ever been invoked on the callsite.
    3. All methods that have ever been invoked on any callsite of the same type.

    We'll cover each of these layers in order.

    Level 1 cache: the last method called on the callsite

    When a CallSite<T> object is first instantiated, the Target delegate field (containing the delegate that is called when the callsite is invoked) is set to one of the UpdateAndExecute generic methods in UpdateDelegates, corresponding to the number of parameters to the callsite, and the existence of any return value. These methods contain most of the caching, invoke, and binding logic for the callsite. The first time this method is invoked, the UpdateAndExecute method finds there aren't any entries in the caches to reuse, and invokes the binder to resolve a new method.

    Once the callsite has the result from the binder, along with any restrictions, it stitches some extra expressions in, and replaces the Target field in the callsite with a compiled expression tree similar to this (in this example I'm assuming there's no return value):

    if ([restriction is true]) {
        [invoke cached method]
        return;
    }
    if (callSite._match) {
        _match = false;
        return;
    } else {
        UpdateAndExecute(callSite, arg0, arg1, ...);
    }

    Woah. What's going on here? Well, this resulting expression tree is actually the first level of caching. The Target field in the callsite, which contains the delegate to call when the callsite is invoked, is set to the above code compiled from the expression tree into IL, and then into native code by the JIT. This code checks whether the restrictions of the last method that was invoked on the callsite (the 'primary' method) match, and if so, executes that method straight away. This means that, the next time the callsite is invoked, the first code that executes is the restriction check, executing as native code! This makes this restriction check on the primary cached delegate very fast. But what if the restrictions don't match? In that case, the second part of the stitched expression tree is executed. What this section should be doing is calling back into the UpdateAndExecute method again to resolve a new method. But it's slightly more complicated than that. To understand why, we need to understand the second and third level caches.

    Level 2 cache: all methods that have ever been invoked on the callsite

    When a binder has returned the result of a lookup, as well as updating the Target field with a compiled expression tree, stitched together as above, the callsite puts the same compiled expression tree in an internal list of delegates, called the rules list. This list acts as the level 2 cache. Why use the same delegate? Stitching together expression trees is an expensive operation. You don't want to do it every time the callsite is invoked. Ideally, you would create one expression tree from the binder's result, compile it, and then use the resulting delegate everywhere in the callsite. But, if the same delegate is used to invoke the callsite in the first place, and in the caches, that means each delegate needs two modes of operation. An 'invoke' mode, for when the delegate is set as the value of the Target field, and a 'match' mode, used when UpdateAndExecute is searching for a method in the callsite's cache. Only in the invoke mode would the delegate call back into UpdateAndExecute. In match mode, it would simply return without doing anything. This mode is controlled by the _match field in CallSite<T>. The first time the callsite is invoked, _match is false, and so the Target delegate is called in invoke mode. Then, if the initial restriction check fails, the Target delegate calls back into UpdateAndExecute. This method sets _match to true, then calls all the cached delegates in the rules list in match mode to try and find one that passes its restrictions, and invokes it. However, there needs to be some way for each cached delegate to inform UpdateAndExecute whether it passed its restrictions or not. To do this, as you can see above, it simply re-uses _match, and sets it to false if it did not pass the restrictions.

    This allows the code within each UpdateAndExecute method to check for cache matches like so:

    foreach (T cachedDelegate in Rules) {
        callSite._match = true;
        cachedDelegate(); // sets _match to false if restrictions do not pass
        if (callSite._match) {
            // passed restrictions, and the cached method was invoked
            // set this delegate as the primary target to invoke next time
            callSite.Target = cachedDelegate;
            return;
        }
        // no luck, try the next one...
    }

    Level 3 cache: all methods that have ever been invoked on any callsite with the same signature

    The reason for this cache should be clear - if a method has been invoked through a callsite in one place, then it is likely to be invoked on other callsites in the codebase with the same signature. Rather than living in the callsite, the 'global' cache for callsite delegates lives in the CallSiteBinder class, in the Cache field. This is a dictionary, typed on the callsite delegate signature, providing a RuleCache<T> instance for each delegate signature. This is accessed in the same way as the level 2 callsite cache, by the UpdateAndExecute methods. When a method is matched in the global cache, it is copied into the callsite and Target cache before being executed.

    Putting it all together

    So, how does this all fit together? Like so (the original post has a diagram here; I've omitted some implementation & performance details). That, in essence, is how the DLR performs its dynamic calls nearly as fast as statically compiled IL code. Extensive use of expression trees, compiled to IL and then into native code. Multiple levels of caching, the first of which executes immediately when the dynamic callsite is invoked. And a clever re-use of compiled expression trees that can be used in completely different contexts without being recompiled. All in all, a very fast and very clever reflection caching mechanism.
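    The three-level scheme is easier to see in miniature. Below is a rough Python analogue of the same guard-plus-cache pattern (a sketch with made-up names, not DLR code): each rule pairs a 'restriction' predicate with a bound target, and the callsite tries its last-used rule first, then its own rules list, then a global per-signature cache, before falling back to the binder.

        class CallSite:
            global_cache = {}  # level 3: shared by all callsites with the same signature

            def __init__(self, signature, binder):
                self.signature = signature
                self.binder = binder  # binder(*args) -> (restriction, target)
                self.rules = []       # level 2: every rule this callsite has bound
                self.target = None    # level 1: the last rule that matched here

            def invoke(self, *args):
                # Level 1: re-check the last restriction. In the DLR this guard is
                # compiled to native code, so the hot path is just this test.
                if self.target is not None and self.target[0](*args):
                    return self.target[1](*args)
                # Level 2, then level 3: scan previously bound rules.
                for rule in self.rules + CallSite.global_cache.get(self.signature, []):
                    if rule[0](*args):
                        self.target = rule
                        return rule[1](*args)
                # Miss everywhere: bind once, then populate all three levels.
                rule = self.binder(*args)
                self.target = rule
                self.rules.append(rule)
                CallSite.global_cache.setdefault(self.signature, []).append(rule)
                return rule[1](*args)

    A binder for the ClassA example above would return a restriction like lambda d, o: type(d) is ClassA and type(o) is int, together with a target that calls the resolved overload.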

    Read the article

  • How to cache authentication in Linux using PAM/Kerberos authentication (for CVS)?

    - by Calonthar
    We have several Linux servers that authenticate Linux user passwords against our Windows Active Directory server using PAM and Kerberos 5. The Linux distro we use is CentOS 6. On one system, we have several version control systems like CVS and Subversion, both of which authenticate users through PAM, such that users can use their normal Unix resp. Windows AD accounts. Since we started using Kerberos for password authentication, we have experienced that CVS on a client machine is often much slower in establishing a connection. CVS authenticates the user on every request (e.g. cvs diff, log, update...). Is it possible to cache the credentials that Kerberos uses, such that it does not need to ask the Windows AD server every time a user executes a cvs action? Our PAM config /etc/pam.d/system-auth looks like the following:

    auth        required      pam_env.so
    auth        sufficient    pam_unix.so nullok try_first_pass
    auth        requisite     pam_succeed_if.so uid >= 500 quiet
    auth        sufficient    pam_krb5.so use_first_pass
    auth        required      pam_deny.so
    account     required      pam_unix.so broken_shadow
    account     sufficient    pam_succeed_if.so uid < 500 quiet
    account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
    account     required      pam_permit.so
    password    requisite     pam_cracklib.so try_first_pass retry=3
    password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
    password    sufficient    pam_krb5.so use_authtok
    password    required      pam_deny.so
    session     optional      pam_keyinit.so revoke
    session     required      pam_limits.so
    session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
    session     required      pam_unix.so
    session     optional      pam_krb5.so

    Read the article

  • Strange issue! Local network cache of PHP and Apache2 on Win Server 2008 R2

    - by Ahmed Benlahsen
    Software configuration: I have a new server with Windows Server 2008 R2 installed via VMware. I have installed Apache 2.2, PHP 5.2 and MySQL 5.5 as separate packages. Issue: on my first installation of my application everything works great. But when I update some JS and CSS files and then access my application again from a PC on the local network, I get the old JS and CSS versions! When I access the same application on the local server itself, I get the latest versions of those files! The link to my application on the local server is: http://localhost/BADIL The link to my application from the local network is: http://LOCAL_SERVER_IP/BADIL I never had this kind of issue! I think there is some cache, but I don't know where! Maybe on Win Server 2008 R2 or on VMware! The question is: why does everything work fine when I access my application on the server, but when I access the same application from the local network I get the old versions of the JS and CSS files?? Can anyone help me please?! Regards.
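    A hedged first check, given that only remote clients see stale files: the browser on the network PC may simply be reusing copies it cached earlier from http://LOCAL_SERVER_IP/BADIL, while http://localhost/BADIL is a separate cache entry on the server itself. A hard refresh (Ctrl+F5) on the remote PC, or temporarily forcing revalidation for JS/CSS with mod_headers (assuming that module is enabled in Apache), makes the test unambiguous:

        <FilesMatch "\.(js|css)$">
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>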

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:

    • admin enables write-back cache with BBU
    • writes are cached to the RAID controller's RAM (major performance benefit)
    • the battery saves uncommitted and cached data in the event of a power loss (reliability)

    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside to this is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance will suffer. What's more, the relearn cycles are usually automated on a schedule which may or may not happen in the middle of big traffic. So, that has to be manually disabled and manually scheduled for off-hours if it's a concern. Annoying either way. NV caches have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great to me is the prospect of that flash module having an issue, though. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, the card itself can fail - that's common territory, though. In case you didn't guess, yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

    Read the article

  • Manually filling the opcode cache for an entire app using apc_compile_file, then switching to the new release

    - by Ben
    Does anyone have a great system, or any ideas, for doing as the title says? I want to switch the production version of a web app -- written in PHP and served by Apache -- from release 1234 to release 1235, but before that happens, have all files already in the opcode cache (APC). Then after the switch, remove the old cache entries for files from release 1234. As far as I can think of there are three easy ways of atomically switching from one version to the next:

    1. Have a symbolic link, for example /live, that is always the document root but is changed to point from one version to the next.
    2. Similarly, have a directory /live that is always the document root, but use mv live oldversion && mv newversion live to switch to the new version.
    3. Edit the Apache configuration to change the document root to newversion, then restart Apache.

    I think it is preferable not to have to do 3, but I can't think of any way to precompile all PHP files AND use 1 or 2 to switch releases. So can someone either convince me it's okay to rely on option 3, or tell me how to work with 1 or 2, or reveal some other option I am not thinking of?
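    On the atomicity point, a hedged aside: the double mv in option 2 leaves a brief window with no live directory at all, and ln -sfn is not guaranteed atomic either. The usual atomic variant of option 1 is to create the new symlink under a temporary name and rename it over the old one (rename(2) is atomic; -T is GNU mv):

        ln -s newversion live.tmp && mv -T live.tmp live

    With that in place, a warm-up script can walk the newversion tree and call apc_compile_file() on each .php file before the swap, since APC keys entries by the resolved file path, which lines up with what Apache will execute afterwards.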

    Read the article

  • Does IE completely ignore cache control headers for AJAX requests?

    - by Joshua Hayworth
    Hello there, I've got what I would consider a simple test web site. A single page with a single button. Here is a copy of the source I'm working with if you would like to download it and play with it. When that button is clicked, it creates a JavaScript timer that executes once a second. When the timer function is executed, an AJAX call is made to retrieve a text value. That text value is then placed into the DOM. What's my problem? IE caching. Crack open Task Manager and watch what happens to the iexplorer.exe process (IE 8.0.7600.16385 for me) while the timer in that page is executing. See the memory and handle count getting larger? Why is that happening when, by all accounts, I have caching turned off? I've got the jQuery cache option set to false in $.ajaxSetup. I've got the Cache-Control header set to no-cache and no-store. The Expires header is set to DateTime.Now.AddDays(-1). The headers are set in both the page code-behind as well as the HTTP handler's response. Anybody got any ideas as to how I could prevent IE from caching the results of the AJAX call? Here is what the iexplorer.exe process looks like in ProcessMonitor. I believe that the activity shown in this picture is exactly what I'm attempting to prevent.

    Read the article

  • SUSE EC2 Problem - zypper - Permission denied

    - by phuu
    I'm trying to use zypper to install gcc on my Amazon EC2 instance running SUSE. When I try zypper in gcc I get:

    Retrieving repository 'SLE11-SDK-SP1' metadata []
    Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLE11-SDK-SP1/sle-11-i586/media.1/media' denied.
    Abort, retry, ignore? [a/r/i/?] (a): i
    Retrieving repository 'SLE11-SDK-SP1' metadata [error]
    Repository 'SLE11-SDK-SP1' is invalid. Can't provide /media.1/media : User-requested skipping of a file
    Please check if the URIs defined for this repository are pointing to a valid repository.
    Warning: Disabling repository 'SLE11-SDK-SP1' because of the above error.
    Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [|]
    Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLE11-SDK-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied.
    Abort, retry, ignore? [a/r/i/?] (a): i
    Retrieving repository 'SLE11-SDK-SP1-Updates' metadata [error]
    Repository 'SLE11-SDK-SP1-Updates' is invalid. Can't provide /repodata/repomd.xml : User-requested skipping of a file
    Please check if the URIs defined for this repository are pointing to a valid repository.
    Warning: Disabling repository 'SLE11-SDK-SP1-Updates' because of the above error.
    Retrieving repository 'SLES11-Extras' metadata [/]
    Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied.
    Abort, retry, ignore? [a/r/i/?] (a): r
    Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-Extras/sle-11-i586/repodata/repomd.xml' denied.
    Abort, retry, ignore? [a/r/i/?] (a): zypper in gcc
    Invalid answer 'zypper in gcc'. [a/r/i/?] (a): a
    Retrieving repository 'SLES11-Extras' metadata [error]
    Repository 'SLES11-Extras' is invalid. Can't provide /repodata/repomd.xml :
    Please check if the URIs defined for this repository are pointing to a valid repository.
    Warning: Disabling repository 'SLES11-Extras' because of the above error.
    Retrieving repository 'SLES11-SP1' metadata [-]
    Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/install/SLES11-SP1/sle-11-i586/media.1/media' denied.
    Abort, retry, ignore? [a/r/i/?] (a): a
    Retrieving repository 'SLES11-SP1' metadata [error]
    Repository 'SLES11-SP1' is invalid. Can't provide /media.1/media :
    Please check if the URIs defined for this repository are pointing to a valid repository.
    Warning: Disabling repository 'SLES11-SP1' because of the above error.
    Retrieving repository 'SLES11-SP1-Updates' metadata []
    Permission to access 'http://eu-west-1-ec2-update.susecloud.net/repo/update/SLES11-SP1-Updates/sle-11-i586/repodata/repomd.xml' denied.

    I've searched for the problem and this thread came up, but offered no solutions. I've tried sces-activate. Am I doing something wrong? I should say I'm very new to this, and I admit I don't really know what I'm doing, but I'm trying to learn about setting up and running a server and so I thought I'd throw myself in at the deep(ish) end. Thanks for reading.

    Read the article

  • Squid proxy not serving modified html content

    - by Matthew
    I'm trying to use squid to modify the page content of web page requests. I followed the upside-down-ternet tutorial which showed instructions for how to flip images on pages. I need to change the actual html of the page. I've been trying to do the same thing as in the tutorial, but instead of editing the image I'm trying to edit the html page. Below is a php script I'm using to try to do it. All jpg images get flipped, but the content on the page does not get edited. The edited index.html files written contain the edited content, but the pages the users receive don't contain the edited content.

    #!/usr/bin/php
    <?php
    $temp = array();
    while ( $input = fgets(STDIN) ) {
        $micro_time = microtime();
        // Split the output (space delimited) from squid into an array.
        $temp = split(' ', $input);
        // Flip jpg images, this works correctly
        if (preg_match("/.*\.jpg/i", $temp[0])) {
            system("/usr/bin/wget -q -O /var/www/cache/$micro_time.jpg ". $temp[0]);
            system("/usr/bin/mogrify -flip /var/www/cache/$micro_time.jpg");
            echo "http://127.0.0.1/cache/$micro_time.jpg\n";
        }
        // Don't edit files that are obviously not html. $temp[0] contains url of file to get
        elseif (preg_match("/(jpg|png|gif|css|js|\(|\))/i", $temp[0], $matches)) {
            echo $input;
        }
        // Otherwise, could be html (e.g. `wget http://www.google.com` downloads index.html)
        else {
            $time = time() . microtime();           // For unique directory names
            $time = preg_replace("/ /", "", $time); // Simplify things by removing the spaces
            mkdir("/var/www/cache/". $time);        // Create unique folder
            system("/usr/bin/wget -q --directory-prefix=\"/var/www/cache/$time/\" ". $temp[0]);
            $filename = system("ls /var/www/cache/$time/"); // Get filename of downloaded file
            // File is html, edit the content (this does not work)
            if (preg_match("/.*\.html/", $filename)) {
                // Get the html file contents
                $contentfh = fopen("/var/www/cache/$time/". $filename, 'r');
                $content = fread($contentfh, filesize("/var/www/cache/$time/". $filename));
                fclose($contentfh);
                // Edit the html file contents
                $content = preg_replace("/<\/body>/i", "<!-- content served by proxy --></body>", $content);
                // Write the edited file
                $contentfh = fopen("/var/www/cache/$time/". $filename, 'w');
                fwrite($contentfh, $content);
                fclose($contentfh);
                // Return the edited page
                echo "http://127.0.0.1/cache/$time/$filename\n";
            }
            // Otherwise file is not html, don't edit
            else {
                echo $input;
            }
        }
    }
    ?>

    Read the article

  • systemstate dump

    - by JaneZhang
    When a database hangs or runs extremely slowly, Oracle Support often asks for a systemstate dump, which records what every process in the instance is doing; a typical trigger is the alert log message "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK". Before taking one, note that on a busy system the trace file can be large (possibly hundreds of MB).

    1. Connect as sysdba and dump the systemstate three times:
    $sqlplus / as sysdba
    or
    $sqlplus -prelim / as sysdba  <== use -prelim if the database is already hung
    SQL>oradebug setmypid
    SQL>oradebug unlimit;
    SQL>oradebug dump systemstate 266;
    wait 1~2 minutes
    SQL>oradebug dump systemstate 266;
    wait 1~2 minutes
    SQL>oradebug dump systemstate 266;
    SQL>oradebug tracefile_name; ==> prints the name of the trace file

    2. Along with the systemstate dump, also take a hanganalyze, which shows the wait chains between processes:
    $sqlplus / as sysdba
    or
    $sqlplus -prelim / as sysdba  <== use -prelim if the database is already hung
    SQL>oradebug setmypid
    SQL>oradebug unlimit;
    SQL>oradebug dump hanganalyze 3
    wait 1~2 minutes
    SQL>oradebug dump hanganalyze 3
    wait 1~2 minutes
    SQL>oradebug dump hanganalyze 3
    SQL>oradebug tracefile_name; ==> prints the name of the trace file

    On RAC, dump all instances at once with the -g all option, so that the global picture is captured:
    $sqlplus / as sysdba
    or
    $sqlplus -prelim / as sysdba  <== use -prelim if the database is already hung
    SQL>oradebug setmypid
    SQL>oradebug unlimit
    SQL>oradebug -g all dump systemstate 266  <== -g all dumps every instance
    wait 1~2 minutes
    SQL>oradebug -g all dump systemstate 266
    wait 1~2 minutes
    SQL>oradebug -g all dump systemstate 266

    RAC hanganalyze:
    SQL>oradebug setmypid
    SQL>oradebug unlimit
    SQL>oradebug -g all hanganalyze 3
    wait 1~2 minutes
    SQL>oradebug -g all hanganalyze 3
    wait 1~2 minutes
    SQL>oradebug -g all hanganalyze 3

    The resulting trace files are written to the background_dump_dest (diag trace) directory. Commonly used systemstate dump levels:

    10:  dump
    11:  dump + global cache of RAC
    256: short stack (the call stack of each process)
    258: dump (with less lock element data) + short stack
    266: 256+10 --> short stack + dump
    267: 256+11 --> short stack + dump + global cache of RAC

    Levels 11 and 267 dump the global cache, which produces very large trace files; they are normally only needed when Support explicitly asks for them, so level 266 is usually the right choice. Collecting short stacks takes time: with around 2000 processes it can take more than 30 minutes, in which case consider level 10 or level 258 (level 258 adds short stacks compared to level 10, but contains less lock element data).

    For comparison, trace file sizes from a test instance with about 37 processes:
    -rw-r----- 1 oracle oinstall    72721 Aug 31 21:50 rac10g2_ora_31092.trc ==> 256 (short stack, about 2K per process)
    -rw-r----- 1 oracle oinstall  2724863 Aug 31 21:52 rac10g2_ora_31654.trc ==> 10  (dump, about 72K per process)
    -rw-r----- 1 oracle oinstall  2731935 Aug 31 21:53 rac10g2_ora_32214.trc ==> 266 (dump + short stack, about 72K per process)
    RAC:
    -rw-r----- 1 oracle oinstall 55873057 Aug 31 21:49 rac10g2_ora_30658.trc ==> 11  (dump + global cache, about 1.4M per process)
    -rw-r----- 1 oracle oinstall 55879249 Aug 31 21:48 rac10g2_ora_28615.trc ==> 267 (dump + global cache + short stack, about 1.4M per process)

    In short: unless Support asks for the global cache (levels 11 and 267), level 266 is usually enough for a systemstate dump.

    Read the article

  • SSAS DMVs: useful links

    - by Davide Mauri
    From time to time it happens that I need to extract metadata information from Analysis Services DMVs in order to quickly get an overview of the entire situation and/or drill down to detail level. As a memo I post the links I use most when I need documentation on SSAS object data DMVs:

    SSAS: Using DMV Queries to get Cube Metadata
    http://bennyaustin.wordpress.com/2011/03/01/ssas-dmv-queries-cube-metadata/
    SSAS DMV (Dynamic Management View)
    http://dwbi1.wordpress.com/2010/01/01/ssas-dmv-dynamic-management-view/
    Use Dynamic Management Views (DMVs) to Monitor Analysis Services
    http://msdn.microsoft.com/en-us/library/hh230820.aspx
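    For quick orientation before diving into those posts: DMVs are queried with SQL-like statements against $SYSTEM rowsets in an MDX window in SSMS. Two small examples (any schema rowset name from the linked references can be substituted):

        SELECT * FROM $SYSTEM.MDSCHEMA_CUBES
        SELECT * FROM $SYSTEM.DISCOVER_CONNECTIONS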

    Read the article

  • How to create an adhoc workflow in UCM

    - by vijaykumar.yenne
    UCM has an inbuilt workflow engine that can handle a document-centric workflow approval/rejection process to ensure the right set of assets go into the repository. Anybody who has gone through the documentation is aware that there are two types of workflows that can be defined using the Workflow Admin applet in UCM, namely Criteria and Basic. While a criteria workflow is an automatic process based on certain metadata attributes (Security Group and one of the metadata fields), a basic workflow is a manual workflow that needs to be initiated by the admin. Any workflow that can be put on the whiteboard can be translated into a UCM workflow process, and there are concepts like sub-workflows, tokens, events and idoc scripting that can be introduced to handle any kind of complex workflow. There is a specific Workflow Implementation guide that explains the concepts in detail. One of the standard queries I come across is how to handle adhoc workflows, where at the time of contributing the content the contributors would like to decide on the workflow to be initiated and the users to be picked for approval in each step, hence this post. This is what I want to achieve: I would like to display on my Checkin screen the kind of workflows that a contributor could choose from. Based on the workflow the contributor chooses, the other metadata fields (Step One, Step Two and Step Three) need to be filled in, and these fields decide who the approvers are going to be.

    1. Create a criteria workflow called One_Step_Review.
    2. Create two tokens: StepOne <$wfAddUser(xWorkflowStepOne, "user")$> and OriginalAuthor <$wfAddUser(wfGet("OriginalAuthor"), "user")$>.
    3. Create two steps in the workflow created (One_Step_Review).
    4. Edit Step1 of the workflow, add the StepOne token and select the review permission.
    5. In the exit conditions tab, require at least one reviewer.
    6. In the events tab, add an entry event <$wfSet("OriginalAuthor",dDocAuthor)$> to capture the contributor, who shall be notified in the second step of the workflow.
    7. Add the second step, Notify_Author, to the workflow.
    8. Add the OriginalAuthor token to the above step.
    9. Enable the workflow.
    10. Open the Configuration Manager applet and create a metadata field Workflow with the option list enabled, and add the list of values.
    11. Create another metadata field WorkflowStepOne with its option list configured to the Users view. This displays all the users registered with UCM, which, when selected, are associated with the tokens defined for the workflow (refer to the StepOne token above).

    As indicated in the above steps, you could create multiple workflows and associate the custom metadata field values to the tokens, so that contributors can decide who approves their content.

    Read the article

  • Ubuntu-one syncs single files, but not directories [closed]

    - by Luiz Cláudio Duarte
    I'm using Ubuntu 10.10, fully updated. I have tried to sync my ~/Documents and ~/Pictures folders; U1 replicates the directory structure, but no files are uploaded. Next I tried to sync a single file inside ~/Ubuntu One and it was synced. Then I tried to put a directory inside ~/Ubuntu One and, again, the directory structure was replicated, but no files were synced. All the files have the "syncing" icon, however. The latest syncdaemon.log is below: 2011-03-30 07:41:50,752 - ubuntuone.SyncDaemon.fsm - INFO - loading updated metadata 2011-03-30 07:41:55,081 - ubuntuone.SyncDaemon.fsm - INFO - initialized: idx_path: 266, idx_node_id: 266, shares: 1 2011-03-30 07:41:55,082 - ubuntuone.SyncDaemon.GeneralINotProc - INFO - Ignoring files: ['\\A#.*\\Z', '\\A.*~\\Z', '\\A.*\\.py[oc]\\Z', '\\A.*\\.sw[nopx]\\Z', '\\A.*\\.swpx\\Z', '\\A\\..*\\.tmp\\Z'] 2011-03-30 07:41:55,083 - ubuntuone.SyncDaemon.HQ - INFO - HashQueue: _hasher started 2011-03-30 07:41:55,902 - ubuntuone.SyncDaemon.DBus - INFO - DBusInterface initialized. 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/Ubuntu One' as root dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/.local/share/ubuntuone/syncdaemon' as data dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - INFO - Using '/home/l_claudius/.local/share/ubuntuone/shares' as shares root dir 2011-03-30 07:41:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'INIT' (queues IDLE connection 'Not User Not Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1 miss=266) ---- 2011-03-30 07:41:55,904 - ubuntuone.SyncDaemon.Main - NOTE - Local rescan starting... 2011-03-30 07:41:55,904 - ubuntuone.SyncDaemon.local_rescan - INFO - start scan all volumes 2011-03-30 07:41:55,906 - ubuntuone.SyncDaemon.local_rescan - INFO - processing trash 2011-03-30 07:41:56,044 - ubuntuone.SyncDaemon.local_rescan - INFO - processing move limbo 2011-03-30 07:41:56,491 - ubuntuone.SyncDaemon.Main - NOTE - Local rescan finished! 2011-03-30 07:41:56,492 - ubuntuone.SyncDaemon.Main - INFO - hash queue empty. We are ready! 2011-03-30 07:42:15,583 - ubuntuone.SyncDaemon.DBus - INFO - u'CredentialsFound': callbacking with credentials (token_name: None). 2011-03-30 07:42:15,584 - ubuntuone.SyncDaemon.DBus - INFO - connect: credential request was successful, pushing SYS_USER_CONNECT. 2011-03-30 07:42:15,617 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection started to host fs-1.one.ubuntu.com, port 443. 2011-03-30 07:42:15,977 - ubuntuone.SyncDaemon.ActionQueue - INFO - Connection made. 2011-03-30 07:42:15,978 - ubuntuone.SyncDaemon.StorageClient - INFO - Connection made. 2011-03-30 07:42:16,581 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'protocol_version' finished OK. 2011-03-30 07:42:16,774 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'caps_raising_if_not_accepted' finished OK. 2011-03-30 07:42:16,966 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'caps_raising_if_not_accepted' finished OK. 2011-03-30 07:42:17,722 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'oauth_authenticate' finished OK. 2011-03-30 07:42:17,723 - ubuntuone.SyncDaemon.ActionQueue - NOTE - Session ID: '563bc960-35fa-4f44-b9b6-125819656dc3' 2011-03-30 07:42:19,258 - ubuntuone.SyncDaemon.ActionQueue - INFO - The request 'list_volumes' finished OK. 
2011-03-30 07:43:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:45:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:47:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:49:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ---- 2011-03-30 07:51:55,903 - ubuntuone.SyncDaemon.Main - NOTE - ---- MARK (state: <State: 'QUEUE_MANAGER' (queues IDLE connection 'With User With Network')>; queues: metadata: 0; content: 0; hash: 0, fsm-cache: hit=1059 miss=266) ----

    Read the article

  • How to add new partition to RAID-1 array on Redhat FC10?

    - by Peter Scott
    I have a RH FC10 system with RAID 1 partitions, here is mdadm.conf:

    # mdadm.conf written out by anaconda
    DEVICE partitions
    MAILADDR root
    ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90 UUID=9588bfe1:ddfd5858:1067c814:ac499922
    ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90 UUID=3895ca46:c1526588:d48acd7e:c153aa83
    ARRAY /dev/md4 level=raid1 num-devices=2 metadata=0.90 UUID=ebd4920f:b46c1f18:2eced24a:a21ca861
    ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90 UUID=048e8198:5d6d9682:d3a1e5c3:d475ad80
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=d89ec2de:079d4be5:e00ee8f5:fcb19188

    I want to carve off 500MB from md4 to make a new partition (for an AFS cache). I haven't touched mdadm or any other disk partitioning tools in years. md4 is 50GB and less than 10% used. What's the easiest way of doing this?

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable: # module xfactory.py import x class XFactory: _registry = {} def get_x(self, arg1, arg2, use_cache = True): if use_cache: hash_id = hash((arg1, arg2)) if hash_id in _registry: return _registry[hash_id] obj = x.X(arg1, arg2) _registry[hash_id] = obj return obj # module x.py class X: # ... Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc.). In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.

    Read the article

  • mix audio with h264 mp4 video with ffmpeg

    - by user2362912
    I have 2 files:

    Input #0, wav, from '105426_1.wav':
      Duration: 00:00:09.98, bitrate: 1312 kb/s
      Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 41000 Hz, stereo, s16, 1312 kb/s

    and:

      Duration: 00:00:41.29, start: 0.000000, bitrate: 1313 kb/s
      Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 1211 kb/s, 24.42 fps, 25 tbr, 90k tbn, 48 tbc
        Metadata:
          handler_name : VideoHandler
      Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 99 kb/s
        Metadata:
          handler_name : SoundHandler

    I want to insert the first audio file into the video at a specific place (for example at 10 seconds into the video) and mix it with the audio stream of the video file. I tried:

    /usr/local/bin/ffmpeg -i 105426_1.wav -i 105426.mp4 -map 0:0 -map 1:1 -map 1:0 video_finale.mp4

    but the result is:

      Duration: 00:00:41.31, start: 0.046440, bitrate: 755 kb/s
      Stream #0:0(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
        Metadata:
          handler_name : SoundHandler
      Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s
        Metadata:
          handler_name : SoundHandler
      Stream #0:2(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 640x360 [SAR 1:1 DAR 16:9], 588 kb/s, 25 fps, 25 tbr, 12800 tbn, 50 tbc
        Metadata:
          handler_name : VideoHandler

    I need only one audio stream, and the first audio should play not from the beginning but from 10 seconds in.
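    A hedged sketch of one way to do this with the adelay and amix filters (the 10000 ms delay and the output filename are placeholders for the intended start time, and amix needs a reasonably recent ffmpeg): delay the wav by 10 seconds on both channels, mix it with the video's own audio into a single stream, and copy the h264 video untouched.

        ffmpeg -i 105426.mp4 -i 105426_1.wav \
          -filter_complex "[1:a]adelay=10000|10000[d];[0:a][d]amix=inputs=2:duration=first[aout]" \
          -map 0:v -map "[aout]" -c:v copy video_finale.mp4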

    Read the article

  • How to deal with metadata for drop-downs?

    - by Mangesh Jogade
    Please advise how to handle the following scenario in a web application. I have a drop-down which is populated using metadata from table A. When the form is submitted, the selected drop-down value is stored in table B. When displaying existing data, the drop-down is populated using the data stored in table B. When copying existing data, it is copied using the data stored in table B. I want to achieve the following goals: When displaying existing data, I must display it irrespective of the current metadata (to explain: even if some options have been removed from the metadata, I still display them). When I copy existing data, only current data should be copied (that is, if some options have been removed from the metadata, I should not copy them). I understand that I can do this by scanning the metadata every time I copy existing data; however, if thousands of such drop-downs exist, it is clearly undesirable to scan the complete metadata for every drop-down. How can I handle such a situation in a web application?
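    A minimal sketch of one common answer (table and column names here are invented for illustration): snapshot both the option's id and its display label into table B at submit time, so display never consults the metadata at all, while the copy path re-validates each row against table A with an indexed join rather than a full metadata scan.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE option_meta (id INTEGER PRIMARY KEY, label TEXT, active INTEGER);
        CREATE TABLE form_data (form_id INTEGER, option_id INTEGER, option_label TEXT);
        """)

        def save_form(form_id, option_id):
            # Snapshot the label at submit time: display keeps working even if
            # the option is later removed from option_meta.
            label, = db.execute("SELECT label FROM option_meta WHERE id=?",
                                (option_id,)).fetchone()
            db.execute("INSERT INTO form_data VALUES (?,?,?)",
                       (form_id, option_id, label))

        def copy_form(src_id, dst_id):
            # Copy only rows whose option still exists and is active: one indexed
            # lookup per stored row, not a scan of the whole metadata set.
            db.execute("""INSERT INTO form_data
                          SELECT ?, f.option_id, f.option_label
                          FROM form_data f
                          JOIN option_meta m ON m.id = f.option_id AND m.active = 1
                          WHERE f.form_id = ?""", (dst_id, src_id))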

    Read the article

  • javax.naming.InvalidNameException using Oracle BPM and weblogic when accessing directory

    - by alfredozn
    We are getting this exception when we start our cluster (2 managed servers, 1 admin), we have deployed only the ears corresponding to the OBPM 10.3.1 SP1 in a weblogic 10.3. When the server cluster starts, one of the managed servers (the first to start) get overloaded and ran out of connections to the directory DB because of this repeatedly error. It looks like the engine is trying to get the info from the LDAP server but I don't know why it is building a wrong query. fuego.directory.DirectoryRuntimeException: Exception [javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp']. at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85) at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:163) at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:110) at fuego.directory.hybrid.ldap.Repository.selectById(Repository.java:38) at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipantsInternal(MSADGroupValueProvider.java:124) at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipants(MSADGroupValueProvider.java:70) at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:149) at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:152) at fuego.directory.hybrid.ldap.LDAPResult.getValue(LDAPResult.java:76) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.setInfo(LDAPOrganizationGroupAccessor.java:352) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:121) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:114) at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.fetchGroup(LDAPOrganizationGroupAccessor.java:94) at fuego.directory.hybrid.HybridGroupAccessor.fetchGroup(HybridGroupAccessor.java:146) at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at fuego.directory.provider.DirectorySessionImpl$AccessorProxy.invoke(DirectorySessionImpl.java:756) at $Proxy66.fetchGroup(Unknown Source) at fuego.directory.DirOrganizationalGroup.fetch(DirOrganizationalGroup.java:275) at fuego.metadata.GroupManager.loadGroup(GroupManager.java:225) at fuego.metadata.GroupManager.find(GroupManager.java:57) at fuego.metadata.ParticipantManager.addNestedGroups(ParticipantManager.java:621) at fuego.metadata.ParticipantManager.buildCompleteRoleAssignments(ParticipantManager.java:527) at fuego.metadata.Participant$RoleTransitiveClousure.build(Participant.java:760) at fuego.metadata.Participant$RoleTransitiveClousure.access$100(Participant.java:692) at fuego.metadata.Participant.buildRoles(Participant.java:401) at fuego.metadata.Participant.updateMembers(Participant.java:372) at fuego.metadata.Participant.<init>(Participant.java:64) at fuego.metadata.Participant.createUncacheParticipant(Participant.java:84) at 
fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.loadItems(JdbcProcessInstancePersMgr.java:1706) at fuego.server.persistence.Persistence.loadInstanceItems(Persistence.java:838) at fuego.server.AbstractInstanceService.readInstance(AbstractInstanceService.java:791) at fuego.ejbengine.EJBInstanceService.getLockedROImpl(EJBInstanceService.java:218) at fuego.server.AbstractInstanceService.getLockedROImpl(AbstractInstanceService.java:892) at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:743) at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:730) at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:144) at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:162) at fuego.server.AbstractInstanceService.unselectAllItems(AbstractInstanceService.java:454) at fuego.server.execution.ToDoItemUnselect.execute(ToDoItemUnselect.java:105) at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304) at fuego.transaction.TransactionAction.startNestedTransaction(TransactionAction.java:527) at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:548) at fuego.transaction.TransactionAction.start(TransactionAction.java:212) at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123) at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:62) at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42) at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:261) at fuego.ejbengine.ItemExecutionBean$1.execute(ItemExecutionBean.java:223) at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304) at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470) at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551) at fuego.transaction.TransactionAction.start(TransactionAction.java:212) at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123) at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66) at fuego.ejbengine.ItemExecutionBean.processMessage(ItemExecutionBean.java:209) at fuego.ejbengine.ItemExecutionBean.onMessage(ItemExecutionBean.java:120) at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466) at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371) at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327) at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547) at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233) at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709) at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114) at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) Caused by: javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo 
DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp' at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2979) at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794) at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826) at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749) at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368) at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338) at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321) at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248) at fuego.jndi.FaultTolerantLdapContext.search(FaultTolerantLdapContext.java:612) at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:136) ... 67 more

    Read the article

  • How long do FireFox, Chrome, Safari, and Opera cache SSL/TLS session keys?

    - by MJ
    To choose a reasonable SSL/TLS session key timeout on the server side, I'd like to know how long popular browsers cache session keys on the client. Microsoft describes this information for Windows/IE here: http://technet.microsoft.com/en-us/library/cc776467(WS.10).aspx But I haven't been able to find similar information for other popular browsers. Does anyone know? Thanks!
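    For the server side of the question, one hedged way to observe resumption behaviour directly (example.com is a placeholder) is openssl's reconnect test, which reconnects several times trying to reuse the same session:

        openssl s_client -connect example.com:443 -reconnect

    Each reconnection reports whether the session was "Reused" or "New", which makes it easy to verify what a given timeout actually does; the browser-side lifetimes still have to be gathered per vendor, as the question asks.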

    Read the article

  • My iPhone Mobile Safari does not cache, but I have a manifest file...

    - by Ploetzeneder
    Hello, I have now put the site at: http://www.ploetzeneder.eu/Dateien/test/index4.html The manifest is at: http://www.ploetzeneder.eu/Dateien/test/app-cache-demo.manifest Why does it not work? The web server with the relevant problem has this URL: http://www.pharao.mobi/WebAppproblem/ Username is the username, Passwort is the password. The problem is on index4.html, where all images should be cached but are not.
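    One hedged thing to check first, since it is the most common cause of a manifest being silently ignored: the manifest file must be served with the text/cache-manifest MIME type, or Mobile Safari will not use it. On Apache that is a one-line fix (in the vhost or an .htaccess):

        AddType text/cache-manifest .manifest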

    Read the article

  • Deploying multiple Grails instances with shared cache and sessions?

    - by Kedare
    Hello, I am looking for a solution that allows me to deploy multiple load-balanced Grails instances that have a shared cache (EhCache Server?) and shared sessions; is this possible? I can't find any documentation on this (connecting to a common EhCache server or using distributed EhCache, and sharing sessions (stored in EhCache too?))... I'm looking for something that will work like multiple Rails instances with a common memcached and sessions/caches stored in the memcached... Can you help me? Thank you!

    Read the article
