Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Creating an OS X Mountain Lion 10.8 OpenDirectory database with a specific domain in mind

    - by whardier
    Mountain Lion Server has been one of the most infuriating things ever recently. Above and beyond tons of crashing, I've been completely unable to do the following in a way that makes sense to both me and Apple: name the server srv01.myoffice.domain.com, then create an OpenDirectory database as a master from scratch with the search base dc=domain,dc=com. I can rename my server temporarily (taking down vital services in the process due to the automagicliciousness) and create a new profile, but when I switch back to the original host name, the default search base for server-related authentication magic is now dc=srv01,dc=myoffice,dc=domain,dc=com. I've tried everything I can think of, including slapconfig backupdb/restoredb and slaving the server off another master and then promoting it. This seems rather silly, and Apple has shown no response to many requests to resolve it. Does anybody out there have the magic to make OpenDirectory work as it should, i.e. let you give it any search base you want while all vital services keep operating correctly?
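
    A minimal sketch of the CLI route people usually suggest for this, assuming slapconfig's -createldapmasterandadmin verb accepts an explicit search base as its final argument (that trailing argument is an assumption here; verify against man slapconfig on your 10.8 build before trying it):

        # Destroy the current OD database first (this wipes directory data, so back up!)
        sudo slapconfig -destroyldapserver

        # Re-create the master with an explicit search base instead of one derived
        # from the host name. The trailing dc=... argument is assumed, not verified.
        sudo slapconfig -createldapmasterandadmin diradmin "Directory Administrator" 1000 dc=domain,dc=com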

    Read the article

  • Windows 7 not booting after installing Ubuntu 12.10

    - by Imran Choudhry
    I have a Samsung SF511 with Windows 7 Home preinstalled. I installed Ubuntu 12.10 last night. First I burned the downloaded Ubuntu 12.10 ISO to a DVD from Windows 7, then I restarted, changed the boot order to DVD and let the Ubuntu DVD run. I selected "Install Ubuntu alongside Windows 7". In the next step I allocated the disk space. Ubuntu installed perfectly and ran OK. I spent a good while exploring Ubuntu. When I restarted to get back to work on Windows and selected Windows 7 in the boot menu, it showed a blue screen after booting up (after logging in). I tried restarting several times, but the same blue screen kept showing at the same point of boot-up. I also restored Windows to an earlier time. Left with no option, I booted into the Samsung-provided Windows recovery. The recovery process ran fine, but when it restarted to boot Windows for the first time, all I see is a black screen. Nothing happens. It switches between a dark command prompt screen and a pitch-black screen as if it's rebooting every 10 seconds or so. The laptop can be booted from the DVD drive or a USB drive, but is not booting from the hard disk. What should I do in order to use both Windows 7 and Ubuntu?
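
    One commonly suggested first step (a sketch, not a guaranteed fix for this particular machine) is to rebuild the Windows boot records from a Windows 7 install or repair DVD: boot it, choose Repair your computer, open a Command Prompt, and run:

        :: Rewrite the master boot record
        bootrec /fixmbr
        :: Rewrite the partition boot sector
        bootrec /fixboot
        :: Rescan all disks for Windows installs and rebuild the BCD store
        bootrec /rebuildbcd

    Note this restores only the Windows loader; GRUB (and with it the Ubuntu entry) would then have to be reinstalled from the Ubuntu live DVD, for example with the boot-repair tool.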

    Read the article

  • Apache out of memory

    - by Sherif Buzz
    Hi all, I have a VPS with 768 MB RAM and a 1.13 GHz processor. I run a PHP/MySQL dating site; the performance is excellent and server load is generally very low. Sometimes I place ads on Facebook, and at peak times I can get 100-150 clicks within a few seconds. This causes the server to run out of memory: Cannot allocate memory: couldn't create child process: /opt/suphp/sbin/suphp .... And all users receive an error 500 page. I am just wondering if this sounds reasonable or not; to me, 100-150 clicks does not seem like a number that should cause Apache to run out of memory. Any advice/recommendations on how to diagnose the issue are highly appreciated.
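
    With suPHP, every PHP request runs in its own forked process, so 100-150 near-simultaneous clicks can plausibly exhaust 768 MB. One hedged starting point is to cap Apache's concurrency so bursts queue instead of forking the box out of memory; the numbers below are illustrative and should be recomputed from your own measured per-process size:

        # httpd.conf (prefork MPM). Assuming roughly 25-30 MB per suPHP/PHP child
        # and ~500 MB usable for Apache on a 768 MB VPS, 15-20 children is a safe ceiling.
        <IfModule prefork.c>
            StartServers           5
            MinSpareServers        3
            MaxSpareServers       10
            MaxClients            18
            MaxRequestsPerChild 1000
        </IfModule>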

    Read the article

  • BIP Debugging to a file

    - by Tim Dexter
    If you use BI Publisher standalone, or with OBIEE, with OC4J as the web server: have you ever taken a look at the console window (DOS/xterm) you use to start it? Ever turned on debugging, watched masses of info fly by that window and wanted to capture it all? I have been debugging today and watched all that info fly by; on Windows it gets lost before you can read it! The BIP developers use the System.out.println() and System.err.println() methods in the BIP applications to generate debugging information. Normally the output from these method calls goes to the console where the OC4J process is started. However, you can specify command line options when starting OC4J to direct the stdout and stderr output to files. The -out and -err parameters tell OC4J which files to direct the output to. All you need do is modify the oc4j.cmd file used to start BIP. I didn't get fancy and just changed the following line in the start section, from:

        set CMDARGS=-config "%SERVER_XML%" -userThreads

    to:

        set CMDARGS=-config "%SERVER_XML%" -out D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.out -err D:\BI\OracleBI\oc4j_bi\j2ee\home\log\oc4j.err -userThreads

    Bounced the server, and I now have a ballooning pair of debug files that I can pore over to my heart's content. The .out file appears to contain BIP-only log info and the .err file, OBIEE messages. If you are using another web server to host BIP, just check the user docs to find out how to get the log files written. Note to self: remember to turn off the debug when I'm done!

    Read the article

  • Java Spotlight Episode 86: Tony Printezis on Garbage First (G1)

    - by Roger Brinkley
    Interview with Tony Printezis on the Garbage-First collector (G1). Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: JSR 358: A major revision of the Java Community Process - JCP 3.Next; JAX-RS 2.0 Early Draft - Third Edition.

    Events: June 11-14, Cloud Computing Expo, New York City; June 12, Boulder JUG; June 13, Denver JUG; June 13, Eclipse Juno DemoCamp, Redwood Shores; June 13, JUG Münster; June 14, Java Klassentreffen, Vienna, Austria; June 18-20, QCon, New York City; June 19, CJUG, Chicago; June 20, 1871, Chicago; June 26-28, Jazoon, Zurich, Switzerland; June 27, Houston JUG; July 5, Java Forum, Stuttgart, Germany; July 13-14, IndicThreads, Delhi; July 30-August 1, JVM Language Summit, Santa Clara.

    Feature Interview: Tony Printezis is a Principal Member of Technical Staff at Oracle, based in Burlington, MA. He has been contributing to the Java HotSpot Virtual Machine since 2006. He spends most of his time working on dynamic memory management for the Java platform, concentrating on performance, scalability, responsiveness, parallelism, and visualization of garbage collectors. He obtained a Ph.D. in 2000 and a BSc (Hons) in 1995, both from the University of Glasgow in Scotland. In addition, he is a JavaOne Rock Star, a title awarded for his highly rated JavaOne session on GC.

    Mail Bag / What’s Cool: JavaOne content selection is complete. Notifications done.

    Read the article

  • Bluehost: 1 Minute Delays?

    - by feklee
    On Bluehost shared hosting (Apache 2.2 + FastCGI + APC), I have the problem that some requests take almost exactly one minute to respond. Yet time spent in PHP is only two seconds. To demonstrate the issue, I created a temporary test page. Sample output: When asking Bluehost support about the issue, I got the following reply: “the fastcgi process don't stay running they will only stay running for a certan period which would explain the timeouts you are seeing it traffic would spawn new ones. [...]” I understand that spawning new FastCGI processes takes some time. But almost exactly one minute? That must be some timeout. But which timeout may that be? What I want in the end: No request should take longer than five seconds to respond, even if it fails. When I asked Bluehost support to set the Apache TimeOut directive accordingly, they told me: “we do not modify the Apache Config File even on a virtual host level.”
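
    One way to see where the minute goes is to time the phases of a request; a quick diagnostic sketch using standard curl options (substitute your test page's URL):

        # A time_starttransfer near 60s with a tiny time_connect points at a
        # server-side spawn/idle timeout rather than the network.
        curl -o /dev/null -s -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
            http://example.com/test.php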

    Read the article

  • Quick way to convert Excel sheet to XML

    - by nute
    How do you easily convert an Excel file into an XML file? When trying to save as an XML file, Excel complains that the file does not have an XML mapping. Clicking Help brings up pretty complicated material about XML mapping files, XSD and some other acronyms. Why is it so complicated? Lately I've realized that tab-delimited, CSV and other formats are prone to formatting issues (commas in a field, new lines, quotes, ...), so I think XML is a better way to process Excel data. Please advise. Maybe a freeware tool?
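
    If scripting is an option, a small Python sketch sidesteps Excel's XML-mapping machinery entirely (assumes the openpyxl package, a header row, and header names that are valid XML element names; the file names are illustrative):

        import xml.etree.ElementTree as ET
        from openpyxl import load_workbook

        wb = load_workbook("data.xlsx", read_only=True)
        ws = wb.active

        rows = ws.iter_rows(values_only=True)
        headers = [str(h) for h in next(rows)]  # first row holds the column names

        root = ET.Element("rows")
        for row in rows:
            item = ET.SubElement(root, "row")
            for name, value in zip(headers, row):
                # One child element per cell; empty cells become empty elements.
                ET.SubElement(item, name).text = "" if value is None else str(value)

        ET.ElementTree(root).write("data.xml", encoding="utf-8", xml_declaration=True)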

    Read the article

  • Backup strategy for Windows Server 2008 R2 and Hyper-V

    - by winserveradmin
    I am in the process of planning a Hyper-V deployment with SCVMM 2008 R2. I have several VMs on VMware Player (a temporary solution) for stuff like SharePoint 2007, 2010, and a couple of other server apps. I want to develop a resilient backup plan for this. On the software side, Data Protection Manager 2010 will support all of the server apps I run (SQL Server 2008 R2, SharePoint, Exchange, etc.). But on the hardware side (storage), what is the best way to go? Drobo seems to have issues with Hyper-V, judging by a few threads on here, and doesn't support DPM 2010 (which is in beta anyway) or even the 2007 version (see http://www.drobo.com/support/best_practices.php). What storage device would work well? Do I need a home server or just an external USB drive? Capacity-wise, 2 TB will probably be best so I can have a small archive and implement a round-robin system. Thanks

    Read the article

  • Can Windows-Security-SPP block execution of .exe?

    - by Kirk Marple
    We're seeing a strange situation, where some executables won't run from a Windows command prompt (running as admin). Just running the command (say, filename.exe) gives no response on the console. No errors, no output, nothing. If we copy the same Windows .exe over from a different folder, it "magically" starts working, and we see the default console output. (This happens on both Win7 x64 and Win2008R2 x64; the application runs as a 32-bit process.) At the time it accesses the .exe, I can see events in the application and system logs regarding Windows-Security-SPP, and it makes me believe that the .exe is being blocked from execution. Does this sound familiar?

    Read the article

  • Apache performance improvements and MaxClients

    - by updog
    I know this has been asked a few (thousand) times around the internet, but I was hoping someone who's in the know might be able to comment on my particular setup. I have a web server hosting one site (PHP/CodeIgniter) with a WordPress blog in a subdirectory. The server has 2 GB RAM and a 3 GHz CPU, and I have offloaded the static assets to CloudFlare, which has reduced bandwidth for the actual server by almost 75%. The problem I have is that when an email campaign is sent out that links to the site or blog, it slows down. Below are my settings in apache2.conf. Average Apache process size is 80 MB, and there is 1.5 GB available for Apache.

        <IfModule mpm_prefork_module>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    I have already set up and installed APC and built some caching into the site, and use W3 Total Cache on the blog. The number of concurrent users is around 2-300 when there is a campaign. Are there any further optimisations before
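
    As a sanity check on MaxClients, the usual back-of-envelope rule is memory available to Apache divided by average child size; a quick measurement sketch (assumes Debian/Ubuntu-style apache2 process names):

        # Average resident size of Apache children, in MB
        ps -C apache2 -o rss= | awk '{s+=$1; n++} END {printf "%.1f MB avg over %d processes\n", s/n/1024, n}'

    At 80 MB per child, 1.5 GB supports roughly 18 concurrent children, so MaxClients 20 is already at the ceiling; past that point the wins come from smaller children or more caching, not a higher MaxClients.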

    Read the article

  • Event Tracing for Windows GUI

    - by Ian Boyd
    I want to view tracing events from the Event Tracing for Windows system. As far as I can tell, the only client program that exists for connecting to providers is a command line tool that comes with the Microsoft Windows Device Driver Development Kit (DDK), e.g.:

        tracelog -start "NT Kernel Logger" -f krnl.etl -dpcisr -nodisk -nonet -b 1024 -min 4 -max 16 -ft 10 -UsePerfCounter
        ...
        tracelog -stop

    It then requires a separate command line tool to convert the generated log file into something usable, e.g.:

        tracerpt krnl.etl -report isrdpc.xml

    Has nobody come up with a Windows program (à la Performance Monitor, Process Monitor, Event Viewer) that lets me start tracing by pushing a "Go" button, lets me see events, and lets me stop it with a "Stop" button? Is there a GUI for Event Tracing for Windows?

    Read the article

  • Linux memory fragmentation

    - by Raghu
    Hi all, is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance. I noticed it more when using Linux huge page support -- are huge pages in Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways (not just CLI commands per se; any program or theoretical background would do) to look at it.
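
    For what it's worth, /proc/buddyinfo is easy to read programmatically; a small Python sketch (column layout per the proc documentation: column i counts free blocks of 2^i pages) that reports how much of each zone's free memory sits in larger contiguous blocks:

        # Parse /proc/buddyinfo and estimate per-zone fragmentation.
        with open("/proc/buddyinfo") as f:
            for line in f:
                parts = line.split()
                node, zone = parts[1].rstrip(","), parts[3]
                counts = [int(c) for c in parts[4:]]
                # 4 KB pages assumed (x86); column i counts free blocks of 2^i pages.
                total_kb = sum(c * (2 ** i) * 4 for i, c in enumerate(counts))
                big_kb = sum(c * (2 ** i) * 4 for i, c in enumerate(counts) if i >= 5)
                pct = 100.0 * big_kb / total_kb if total_kb else 0.0
                print("node %s zone %s: %d KB free, %.0f%% in blocks >= 128 KB"
                      % (node, zone, total_kb, pct))

    A falling share of free memory in the larger orders over time is exactly the fragmentation that hurts huge-page allocations, since they need high-order contiguous blocks.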

    Read the article

  • Yum error when updating / installing

    - by acctman
    Yum errors: are the RHN servers down, or is there a problem on my server?

        yum update
        Loaded plugins: rhnplugin, security
        There was an error communicating with RHN.
        RHN support will be disabled.
        Error communicating with server. The message was:
        Error Message:
            RHN Proxy could not successfully connect its RHN parent.
            Please contact your system administrator.
        Error Class Code: 1000
        Error Class Info: RHN Proxy error.
        Explanation:
            An error has occurred while processing your request. If this problem
            persists please enter a bug report at bugzilla.redhat.com. If you choose
            to submit the bug report, please be sure to include details of what you
            were trying to do when this error occurred and details on how to
            reproduce this problem.
        Excluding Packages in global exclude list
        Finished
        Skipping security plugin, no data
        Setting up Update Process
        No Packages marked for Update

    Read the article

  • Bugzilla - How to set up an MTA that will receive Gmail to create bugs

    - by JRock
    I have been looking for a while at setting up an MTA for Bugzilla to receive bugs via email, and am not really seeing any detailed guides. Currently I am using Gmail as the outbound SMTP for messages, but I do not have a solution for receiving emails as bugs. I am assuming I would set up an MTA, it would grab down the emails, and then Bugzilla would read them somehow. I am unsure of this process and of a solution for it; any detailed help or direction would be great. Distro: Ubuntu 11.10
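
    Since the mail already lives in Gmail, one hedged approach is to skip running a full inbound MTA and instead poll the mailbox, handing each message to Bugzilla's bundled email_in.pl inbound-mail script. A sketch of an /etc/fetchmailrc for this (the account and the email_in.pl path are placeholders; check where your Bugzilla package installs it):

        # Poll Gmail over IMAP+SSL and pipe each message into Bugzilla's
        # inbound interface instead of delivering to a local mailbox.
        poll imap.gmail.com protocol IMAP
            user "bugs@example.com" password "app-specific-password" ssl
            mda "/usr/share/bugzilla3/email_in.pl"

    Run fetchmail from cron (or as a daemon with -d), and see Bugzilla's email_in.pl documentation for the parameters it expects.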

    Read the article

  • Oracle’s FY14 Partner Kickoff Recap & New OPN Website

    - by Kristin Rose
    There is no doubt that we are off to a strong FY14! Now that Oracle’s Global Partner Kickoff has come and gone, it’s time to take what we have learned and focus on having the strongest year ever! To quote Oracle pilot Sean D. Tucker, “FY14, it’s all about growth baby!” Here are some of the ways you can grow with Oracle:
    - Sell into accounts where Oracle isn’t selling directly
    - Offer customers added-value solutions leveraging our technology
    - Offer deep market capabilities that leverage transformative technology
    - Be aggressive, sell the entire stack, engage with Oracle in the marketplace and get engineered for growth!
    With this being said, we also know that to have the strongest year ever, you also need the strongest tools ever! Ladies and gentlemen, in case you missed its debut during Oracle’s Global Partner Kickoff, let OPN introduce you to the newly redesigned Oracle PartnerNetwork website, providing easy access to key business processes, systems and resources! We took your advice and implemented the following enhancements:
    - A new OPN home page, highlighting paths to top tasks
    - Streamlined top navigation
    - New business-process-focused pages
    - Restructured Knowledge Zone areas (currently applied to select pages)
    Learn more about the new Oracle PartnerNetwork website and all that Oracle has to offer by watching the FY14 Global Partner Kickoff replay video below! Thank you for your hard work and partnership in FY13, and here’s to an even stronger FY14! Good Selling, The OPN Communications Team

    Read the article

  • Linked tables between MS Access 2003 and MS Access 2007 - write permissions denied

    - by STEVE KING
    We are in the process of switching over to Access 2007. We have numerous data tables in Access 2003 files. In one case, the user has 2007 on his PC and opened the front end in 2007. No problems. When the user is done, he clicks a button that executes a macro full of update queries. The macro reaches the first query and halts. We get a message saying we do not have permissions to write to this linked table (2003 format). There were no security files involved. We re-linked from 2007; same problem. LAN permissions were OK. I wound up having to import the tables into the front end in order for the user to be able to do his job.

    Read the article

  • Windows Azure Evolution – TFS Integration (WAWS Part 2)

    - by Shaun
    So this is the fourth blog post about the new features of Windows Azure, and the second part on Windows Azure Web Sites. But it is not focused only on WAWS, since the feature I'm going to introduce is available in both Windows Azure Web Sites and Windows Azure Cloud Services (a.k.a. hosted services). In the previous post I talked about Windows Azure Web Sites and how to use its gallery to build a WordPress personal blog without coding. Besides the gallery, we can create an empty web site and upload our website in various ways. And one of the highlighted features here is that we can integrate our web site with a source control service, such as TFS or Git, so that it will be deployed automatically once a new commit or build is available.

    Create New Empty Web Site

    In the developer portal, when creating a new web site, we can select the QUICK CREATE item. This will create an empty web site with only one shared instance and without any database associated. Let's specify the URL, region and subscription and click OK. After a few seconds our website will be ready. And now we can click the BROWSE button to open this empty website. As you can see, there is a welcome page available on my website even though I didn't upload or deploy anything. This means the website is charged even before anything is deployed, similar to a cloud service (hosted service). That is because once we created a website, the Windows Azure platform arranged a hosting process (w3wp.exe) in the group of virtual machines.

    Create Project in TFS Preview Service and Set Up the Link

    Currently Windows Azure Web Sites can integrate with TFS and Git as deployment sources, and for TFS it only supports the Microsoft TFS Preview Service for now. I will not go deep into how to use the TFS Preview Service in this post, but once we click into the website we had just created and then click "Set up TFS publishing", there will be a dialog helping us to connect to this service. If you don't have an account you can click the link shown below to request one. Assuming we already have an account for the TFS service, we need to create a new project first. Go to your TFS service website and create a new project, giving the project name, description and process template. Then, back in the developer portal, click the "Set up TFS publishing" link. In the pop-up window I will provide my TFS service URL and click the "Authorize now" link. Click the "Accept" button to allow my Windows Azure to connect to my TFS service. Then it returns to the developer portal and lists all projects in my account. Just select the one I had just created and click OK. Then our website is linked to the TFS project I specified, and finally it will show something similar to the screen below. This means the web site has been linked to TFS successfully.

    Work with TFS Preview Service in VS2010

    In the figure above there are some links to guide us through connecting to the TFS server with Visual Studio 2010 and 2012 RC. If you are using Visual Studio 2012 RC, you don't need any extension. But if you are using Visual Studio 2010 you must have SP1 and KB2581206 installed. To connect to my TFS service I just open Visual Studio and, in the Team Explorer, add a new TFS server, pasting the URL of my TFS service from the developer portal. Then I select the project I had just created and it is listed in my Team Explorer. Now let's start to build our website.
    Since the website we are going to build will be deployed to WAWS, it's NOT a cloud service, NOT a web role. So in this case we need to create a normal ASP.NET web application, for example an ASP.NET MVC 3 web application. Next, right-click the solution and select "Add Solution to Source Control", select the project I had just created, then check the code in. Once the check-in finishes, we can see that there is a build running on the TFS server. And if we go back to the developer portal, we will see a deployment running on our web site's deployment page. In fact, once we linked our web site to our TFS, a new build definition was created in our TFS project. It is triggered by each check-in and deploys to the web site we linked automatically, so when our code has been compiled it is published to our web site from our TFS server. Once the build and deployment finish, we can see it's now active in the developer portal. Now we can see the web site that was created from my Visual Studio and deployed by my TFS.

    Continuous Deployment through VS and TFS

    A big benefit of TFS publishing is continuous deployment. Now if I change some code in my Visual Studio, for example update some text on the home page, and check in my changes, a new build is triggered and deployed to my WAWS automatically. And even more: if we want to roll back to a previous version, we can just select an existing deployment listed in the portal and click REDEPLOY at the bottom.

    Q&A: Can a Web Site Use Storage to Work with a Worker Role?

    Stacy asked a question on my previous post: "can a web site use Windows Azure Storage and, furthermore, work with a worker role?" Since the web site is deployed on Windows Azure virtual machines in the data center, it must be able to use all Windows Azure features, such as storage, SQL databases, CDN, etc. But since with a web site we normally have a standard ASP.NET web application, PHP website or NodeJS, the Windows Azure SDK is not referenced by default. We can add the references ourselves. In our sample project, let's right-click my MVC project and click "Manage NuGet Packages". In the dialog I search for Windows Azure packages and select "Windows Azure Storage" to install. Then we have the assemblies to access Windows Azure storage: tables, queues and blobs. Since I have a storage account already, let's have a quick demo, just listing all blobs in a container. The code would be like this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        namespace WAASTFSDemo.Controllers
        {
            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    ViewBag.Message = "Welcome to Windows Azure!";

                    var credentials = new StorageCredentialsAccountAndKey("[STORAGE_ACCOUNT]", "[STORAGE_KEY]");
                    var account = new CloudStorageAccount(credentials, false);
                    var client = account.CreateCloudBlobClient();
                    var container = client.GetContainerReference("shared");
                    ViewBag.Blobs = container.ListBlobs().Select(b => b.Uri.AbsoluteUri);

                    return View();
                }

                public ActionResult About()
                {
                    return View();
                }
            }
        }

        @{
            ViewBag.Title = "Home Page";
        }

        <h2>@ViewBag.Message</h2>
        <p>
            To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>.
        </p>
        <div>
            <ul>
            @foreach (var blob in ViewBag.Blobs)
            {
                <li>@blob</li>
            }
            </ul>
        </div>

    And then just check in the code; it will be deployed to my web site. Finally we can see the blobs in my storage. This is just an example, but it proves that web sites can connect to storage: tables, blobs and queues as well. So the answer to Stacy should be "yes": the web site can use queue storage to work with a worker role.

    Summary

    In this post I demonstrated how to integrate Windows Azure Web Sites with TFS. You can see our website can be built, uploaded and deployed automatically by the TFS service; all we need to do is provide the TFS name and select the project. Not only Windows Azure Web Sites: in this upgrade, Windows Azure Cloud Services (hosted services) can be published through TFS as well, very similar to what is shown below. Currently only the Microsoft TFS Preview Service can be integrated with Windows Azure, but I think in the future we will be able to link the TFS in our enterprise, and third-party TFS hosts such as CodePlex, to Windows Azure.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know, a WebLogic domain consists of a set of servers running independently or in a cluster mode, sharing distributed resources. And in most environments a WebLogic cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability. These servers can run on the same machine or be located on different machines. It's a common task to increase a cluster's capacity by adding new machines to the cluster to host new server instances. You can do it by manually installing the WebLogic binaries on the new host and using the pack/unpack commands to add a managed server to this new host. But with Enterprise Manager Grid Control 11gR1 (EMGC) there is another way: the Fusion Middleware Domain Scale Up procedure. I'm going to show you how it works.

    Here is a picture of my medrec_oradb WebLogic domain, which is registered in EMGC. It contains an admin server and a cluster, MedRecCluster, with a single managed server, MS1. Both the admin and managed servers are on the same host, oel46-vmware; it's a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure. And here are the application deployments; note that a couple of applications are deployed to the cluster.

    First of all I have to prepare a new machine that will host the new managed server of my cluster. I created a new VM with OEL 5.4 using the corresponding Oracle VM template available on the Oracle E-Delivery site for Oracle Linux and Oracle VM, and named it wls1032. The next step is to install the Oracle EM Grid Control 11gR1 agent on this new host. You can download it from the OTN page and install it manually, or you can use the Agent Installation Deployment procedure available in EMGC (Deployments -> Agent Installation -> Install Agent). Either way, when your agent is up and running on the new machine, you will see it in the EMGC Console in the Targets -> Hosts subtab.

    Now we are ready to scale up our WebLogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select the Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process. Select the domain from the list, enter the working directory on the admin server host, and fill in the WebLogic credentials for the administration server console and the OS credentials for the admin server host. Click the Next button. The next step allows you to configure your domain; to add a new managed server to the cluster, you should select the cluster in the tree and click the Add Server button. Select the newly added server in the tree, choose the target host and enter the configuration details of your managed server. You can also add new machine and node manager details. Please note that you cannot change the values in the Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as for the admin server host. The working directory on the target host should have enough free space to store the FMW home binaries and domain configuration files; in my experience the working directories should have at least 3 GB of free space. The last thing you should fill in is the OS credentials for the target host. The next step allows you to schedule the execution of the procedure; it is started immediately in my example. The last step is just a review of the configuration for the domain scale up. Click Submit to launch the process.
    You can track the status of the procedure execution by selecting Deployments -> Deployment Procedures -> Procedure Completion Status in the EMGC Console. As you can see in the picture below, the procedure consists of many steps, and I'm going to share my experience with the issues I hit at some of them. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking the Retry button.

    The Check OUI Prerequisites step may fail if the target host does not pass the prerequisite checks for a WebLogic Server installation, such as the amount of RAM, Linux packages installed, etc. The Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host. The Transfer cloning archive to targets step may fail if the EMGC agents on the admin server host or on the target host are not secured. You should secure the agent by issuing the ./emctl secure agent command from the $AGENT_HOME/bin directory and entering the agent registration password. Both the Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host.

    The most complicated issue I had was at the Run Inventory Collection step. The step failed, and I noticed that the agent on the target server had also failed, with the following error in the $AGENT_HOME/sysman/log/emagent.trc log file:

        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.
        Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/upload
        FATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain) is not consistent with the timezone (America/Los_Angeles) reported by other agents.
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 3
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures
        2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up
        2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminating

    I checked the timezone of my domain target inside the EMGC repository:

        select timezone_region
          from mgmt_targets
         where target_type = 'weblogic_domain'
           and display_name = 'medrec_oradb';

        "TIMEZONE_REGION"
        "America/Los_Angeles"

    Then I checked the timezone of my agents, and indeed, they differed:

        select target_name, timezone_region
          from mgmt_targets
         where type_display_name = 'Agent';

        "TARGET_NAME"               "TIMEZONE_REGION"
        "oel46-vmware:3872"         "America/Los_Angeles"
        "wls1032.imc.fors.ru:3872"  "America/New_York"

    So I had to change the timezone on the wls1032 host and propagate the change to the agent and to the EMGC repository.
    Here were the steps:

    1. Issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles".
    2. Propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory.
    3. Connected to the EMGC repository as sysman and executed the following PL/SQL block:

        begin
          mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
          commit;
        end;

    After that I had to clear the pending uploads on wls1032.imc.fors.ru:

        rm -r $AGENT_HOME/sysman/emd/state/*
        rm -r $AGENT_HOME/sysman/emd/collection/*
        rm -r $AGENT_HOME/sysman/emd/upload/*
        rm $AGENT_HOME/sysman/emd/lastupld.xml
        rm $AGENT_HOME/sysman/emd/agntstmp.txt
        $AGENT_HOME/bin/emctl start agent
        $AGENT_HOME/bin/emctl clearstate agent

    The last part of this solution was to resync the agent in the EMGC Console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked on the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one:

        EMD upload error: Failed to upload file A0000004.xml: HTTP error.
        Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]

    So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB. To increase the max file size limit to 20 MB, I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:

        ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
        ./emctl stop oms
        ./emctl start oms

    After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time, and it completed successfully. The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my WebLogic domain scale-up in the EMGC Console. So now the WebLogic cluster contains two managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is a part of the WebLogic Server Management Pack Enterprise Edition.

    Read the article

  • Bit by bit comparison of using Java or Python for unit testing frameworks and Selenium

    - by Anirudh
    Currently we are in the process of finalizing which language, out of Java and Python, should be used for automation using Selenium WebDriver and a suitable unit testing framework. I have made use of JUnit, TestNG and WebDriver with Java and have designed frameworks without much fuss before. I am new to Python, though I came across Python's unit testing frameworks like unittest, pyunit, nose etc., but I have doubts whether they would be as successful as TestNG or Java. I would like to analyze point by point when used with Selenium WebDriver, as below:

    1) I have read that as Python is an interpreted language its execution is slower. So, if I have 1000 test cases that take about 6 hours to run in Java, would Python take considerably longer for the same test cases, like 8 hours?

    2) Can a Python unit testing framework be as flexible as a Java unit testing framework like TestNG in terms of grouping tests, parallel execution, skipping tests, etc.? (See the sketch after this list.)

    3) Also, one point I think of is that Python with Selenium WebDriver doesn't have as big or learned a community as Java with WebDriver; if I run into trouble with something, am I more likely to find an answer for Java than for Python?

    4) Somewhat related to point 3, is it safe to rely on tools, plugins or even WebDriver's Python bindings as continuously well maintained?

    5) One major drawback I see with Python's unit testing frameworks is the lack of boilerplate code or libraries for nicely illustrative HTML reports, preferably historical reports with pie charts, bar graphs and timelines, as we have in the case of Java, like Allure, TestNG's default reports, ReportNG or JUnit reports with the help of ANT, as shown below: Allure Reports, JUnit historical reports.

    Also I would like to ask whether there is a way for us to write the framework in Java, making libraries or utilities for our application in WebDriver, which can easily be called or integrated into Python code or modules? That would actually solve the problem for us, as the client would be able to use the code we write in Java and make use of it or call it from their Python modules.
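
    For what it's worth on point 2, here is a minimal sketch of Python's stock unittest module driving Selenium WebDriver, showing the skipping and suite-grouping features in question (assumes the selenium package and a local Firefox; the URL and names are illustrative):

        import unittest
        from selenium import webdriver

        class SmokeTests(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # One browser instance shared by the whole group of tests.
                cls.driver = webdriver.Firefox()

            @classmethod
            def tearDownClass(cls):
                cls.driver.quit()

            def test_homepage_title(self):
                self.driver.get("http://example.com/")
                self.assertIn("Example", self.driver.title)

            @unittest.skip("feature not deployed yet")  # rough analogue of TestNG's enabled=false
            def test_search(self):
                pass

        if __name__ == "__main__":
            # Explicit suites give TestNG-style grouping; parallel execution,
            # however, needs a third-party runner such as nose or pytest-xdist.
            suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTests)
            unittest.TextTestRunner(verbosity=2).run(suite)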

    Read the article

  • How can I expire non-active sessions on my Netscreen SSG140?

    - by David Mackintosh
    I have a Juniper Netscreen SSG-140. While experimenting with a VoIP service, I defined a custom policy that was to be used to permit the possible ports in use to be sent back to the VoIP server from systems connecting across the internet. Because I'd had problems in the past with VoIP systems getting broken when their UDP sessions were expired out faster than their keep-alives were generated, I set the timeout on this custom service to be 'never'. After much experimentation, I happened to notice that my session count on the firewall has grown from a couple thousand to over 36000. After discussion with the VoIP "expert", I set the timeout to be 30 minutes; however, all the sessions set up during the experimentation process are still there, more than 3 days later. Is there a way I can force these old sessions to get expired and removed from the session table, or am I looking at resetting my firewall? (Both firewalls, actually -- they are in a cluster.)
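
    I haven't verified this on an SSG140 specifically, but ScreenOS has a clear session CLI command that should drop the stale entries without a reboot; a sketch (treat the filter syntax as an assumption and check the CLI reference for your ScreenOS release):

        get session                      (inspect the session table first)
        clear session dst-port 5060      (filter form assumed; clears matching sessions)
        clear session                    (no filter clears the whole table; disruptive)

    In an NSRP cluster, whether cleared sessions stay gone depends on session synchronization with the peer, so it may need to be run on both members.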

    Read the article

  • Trouble downloading a file due to lack of permission

    - by user40495
    After downloading this file: http://dl.dropbox.com/u/315/hon/hon_update.exe, before it actually copies to my computer I get this error (the title of the box is "File Access Denied"): "You need permission to perform this action. You require permission from the computer's administrator to make changes to this file." Within this box are two buttons: Try Again and Cancel. I have searched other similar "permission" type questions, but they seem to all refer to existing files or folders. How do I give permission to a file which doesn't exist yet (since it is in the process of downloading and hasn't finished copying over yet)? I am running Windows 7 64-bit. I would be happy to supply any other information to help clarify.

    Read the article

  • Growing Into Enterprise Architecture

    - by pat.shepherd
    I am writing this post as I am in an Enterprise Architecture class, specifically on the Oracle Enterprise Architecture Framework (OEAF). I have long believed that SOA's key strength is that it is the first IT approach that blends or unifies business and technology. That is a common view and is certainly valid, but is not completely true (or at least accurate). As my personal view of EA grows, I realize more than ever that doing EA is FAR MORE than creating a reference architecture, creating a physical architecture or picking a technology to standardize on. Those are parts of the puzzle, but not the whole puzzle by any stretch. I am now a firm believer that the various EA frameworks out there provide the rigor and structure required to bridge business strategy/vision to IT strategy/vision. The flow goes something like this: Business Strategy -> Business / Application / Information / Technology Architecture -> SOA Reference Architecture -> SOA Functional Architecture. Governance is imbued throughout to help map, measure and verify the business-to-IT coherence. With those in place, then (and only then) can SOA fulfill its potential to be more than an integration strategy, more than a reuse strategy, but also a foundation for tying the results of IT to business vision. Fortunately, EA is an ongoing process, and it is never too late to get started with an understanding of frameworks such as TOGAF, FEA, or OEAF. Also, EA is never-ending in that it always needs to be applied; even once a full-blown Enterprise Architecture is established, it needs to be constantly evolved. For those who are getting deeper into EA as a discipline, there is plenty of runway to grow as your company/customer begins to look more seriously at EA. I will close with a pointer to a great book I have recently read on this subject: Enterprise Architecture as Strategy (http://www.amazon.com/Enterprise-Architecture-Strategy-Foundation-Execution/dp/1591398398/ref=sr_1_1?ie=UTF8&s=books&qid=1268842865&sr=1-1)

    Read the article

  • Dovecot throws obsolete warnings, even though dovecot.conf updated on Ubuntu 11

    - by John Bowlinger
    In trying to set up SASL for Dovecot on Ubuntu 11, I keep getting obsolete-setting warnings in my log:

        Sep 10 15:33:53 server1 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:24: passdb {} has been replaced by passdb { driver= }
        Sep 10 15:33:53 server1 dovecot: config: Warning: Obsolete setting in /etc/dovecot/dovecot.conf:27: userdb {} has been replaced by userdb { driver= }

    Even though my dovecot.conf file looks like this:

        protocols = none
        auth default {
          mechanisms = plain login
          passdb {
            driver=pam
          }
          userdb {
            driver=passwd
          }
          socket listen {
            client {
              path = /var/spool/postfix/private/auth
              mode = 0660
              user = postfix
              group = postfix
            }
          }
        }

    Even when I try:

        driver=etc/pam.d/dovecot
        driver=etc/passwd

    I still get the same error. Looking at the example config file (cat /usr/share/doc/dovecot-common/dovecot/example-config/dovecot.conf) was of no help. Dovecot is running:

        ps -A | grep 'dovecot'
         9663 ?        00:00:00 dovecot

    But I can't seem to get that elusive "dovecot-auth" process. Anyone know what's going on?

    Read the article

  • Add server 2008 to 2003 domain schema upgrade failed

    - by Ken
    I'm trying to add a Server 2008 R2 machine to an existing 2003 domain (upgraded to 2003 functional level). I've followed the steps from Microsoft, which are clarified by this post: 2003 DC AD upgrade to 2008 on second server migration plan. While running adprep /forestprep I lost my connection and wasn't able to resume or remote-control that session, so I couldn't see the end result of the command. Rerunning adprep /forestprep indicates that the process has already completed successfully. After finishing the rest of the steps (/domainprep, /gpprep, etc.), the 2008 server won't join. The error message is the same: "you need to run forestprep first". So the situation I'm in is that I can't rerun /forestprep, but my registry key still reads schemaVer=30. Should I have staged forest upgrades? Any ideas how to get my schema version to 44 at this point?
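
    One way to check what the directory itself (rather than the registry) reports as the schema version, using the stock dsquery tool (substitute your real forest DN):

        dsquery * "cn=schema,cn=configuration,dc=example,dc=com" -scope base -attr objectVersion

    For reference, objectVersion 30 corresponds to Server 2003, 44 to Server 2008, and 47 to Server 2008 R2, so a successful R2 forestprep should actually leave you at 47, not 44.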

    Read the article

  • Is there a standard Mac OS X Snow Leopard home folder naming convention?

    - by JGFMK
    I recently re-installed a fresh copy of Snow Leopard on my Mac. I was a bit surprised that when the process completed, it had changed the name of my home folder to use my full name rather than my initials. This messed up all the paths in my Eclipse IDE workspace. I vaguely remember being asked whether I use the machine for home or business, and I wondered if something I chose there has a bearing on how the default name is assigned. Does anyone have more information on this? Links to official Apple install docs, etc. Cheers.

    Read the article
