Search Results

Search found 12919 results on 517 pages for 'tool pack'.

Page 309/517

  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you’ll know that the instance is at that version by using the standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr  2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

        Index  Name                 Internal_Value  Character_Value
        1      ProductName          NULL            Microsoft SQL Server
        2      ProductVersion       655410          10.50.1600.1
        3      Language             1033            English (United States)
        4      Platform             NULL            NT INTEL X86
        5      Comments             NULL            SQL
        6      CompanyName          NULL            Microsoft Corporation
        7      FileDescription      NULL            SQL Server Windows NT
        8      FileVersion          NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540 )
        9      InternalName         NULL            SQLSERVR
        10     LegalCopyright       NULL            Microsoft Corp. All rights reserved.
        11     LegalTrademarks      NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
        12     OriginalFilename     NULL            SQLSERVR.EXE
        13     PrivateBuild         NULL            NULL
        14     SpecialBuild         104857601       NULL
        15     WindowsVersion       393347078       6.0 (6002)
        16     ProcessorCount       1               1
        17     ProcessorActiveMask  1               1
        18     ProcessorType        586             PROCESSOR_INTEL_PENTIUM
        19     PhysicalMemory       2047            2047 (2146934784)
        20     Product ID           NULL            NULL

    But database properties are separate from the instance. After an upgrade, you always want to make sure that the compatibility options (which have much to do with how NULLs and other objects are treated) are at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was “10.0” or “10”. You can do this with the ALTER DATABASE command, or you can just right-click the database and select “Properties” and then “Database Options” in SQL Server Management Studio. To check the database compatibility level, I use this query:

        SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning I saw that the databases (all of them) were at 10.0 – not 10.5 like the instance. That’s expected – we didn’t revise the database format up with the instance for this particular release. Didn’t want to catch you by surprise on that. While your databases should be at the “proper” level for your situation, you can’t rely on the compatibility level to indicate the instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
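    As a quick reference, the check-and-set workflow described above boils down to two statements; the database name below is a placeholder, not one from the post:

        -- Check compatibility levels; the post queries sys.sysdatabases,
        -- and sys.databases exposes the same data on SQL Server 2005 and later.
        SELECT name, compatibility_level FROM sys.databases;

        -- Raise one database (placeholder name) to the SQL Server 2008 level ("100").
        ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100;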

    Read the article

  • JavaOne Session Report: “50 Tips in 50 Minutes for GlassFish Fans”

    - by Janice J. Heiss
    At JavaOne 2012 on Monday, Oracle engineer Chris Kasso and technology evangelist Arun Gupta presented a head-spinning session (CON4701) in which they offered 50 tips for GlassFish fans. Kasso and Gupta alternated back and forth, each presenting 10 tips at a time. An audience of about (appropriately) 50 attentive and appreciative developers was on hand in what has to be one of the most information-packed sessions ever at JavaOne!

    Aside: I experienced one of the quiet joys of JavaOne when, just before the session began, I spotted Java Champion and JavaOne Rock Star Adam Bien sitting nearby – Adam is someone I have been fortunate to know for many years.

    GlassFish is a freely available, commercially supported Java EE reference implementation. The session prioritized quantity of tips over depth of information, offering tips intended for both seasoned and new users and meant to increase the range of functional options available to GlassFish users. The focus was on lesser-known dimensions of GlassFish, and attendees were encouraged to pursue the tips that were new to them. All 50 tips can be accessed here. Below are several examples of the more elaborate tips, plus a final practical tip on how to get in touch with these folks.

    Tip #1: Using the login Command
    * To execute a remote command with asadmin you must provide the admin's user name and password.
    * The login command allows you to store the login credentials to be reused in subsequent commands.
    * You can be logged into multiple servers (distinguished by host and port).
    Example:
        % asadmin --host ouch login
        Enter admin user name [default: admin]>
        Enter admin password>
        Login information relevant to admin user name [admin]
        for host [ouch] and admin port [4848] stored at
        [/Users/ckasso/.asadminpass] successfully.
        Make sure that this file remains protected.
        Information stored in this file will be used by
        asadmin commands to manage the associated domain.
        Command login executed successfully.
        % asadmin --host ouch list-clusters
        c1 not running
        Command list-clusters executed successfully.

    Tip #4: Using the AS_DEBUG Env Variable
    * Environment variable to control client-side debug output.
    * Exposes: command processing info, the URL used to access the command (e.g. http://localhost:4848/__asadmin/uptime), and the raw response from the server.
    Example:
        % export AS_DEBUG=true
        % asadmin uptime
        CLASSPATH= ./../glassfish/modules/admin-cli.jar
        Commands: [uptime]
        asadmin extension directory: /work/gf-3.1.2/glassfish3/glassfish/lib/asadm
        ------- RAW RESPONSE ---------
        Signature-Version: 1.0
        message: Up 7 mins 10 secs
        milliseconds_value: 430194
        keys: milliseconds
        milliseconds_name: milliseconds
        use-main-children-attribute: false
        exit-code: SUCCESS
        ------- RAW RESPONSE ---------

    Tip #11: Using Password Aliases
    * Some resources require a password to access (e.g. DB, JMS, etc.). The resource connector is defined in the domain.xml.
    Example: Suppose the DB resource you wish to access requires an entry like this in the domain.xml:
        <property name="password" value="secretp@ssword"/>
    But company policies do not allow you to store the password in the clear.
    * Use password aliases to avoid storing the password in the domain.xml.
    * Create a password alias:
        % asadmin create-password-alias DB_pw_alias
        Enter the alias password>
        Enter the alias password again>
        Command create-password-alias executed successfully.
    * The password is stored in the domain's encrypted keystore.
    * Now update the password value in the domain.xml:
        <property name="password" value="${ALIAS=DB_pw_alias}"/>

    Tip #21: How to Start GlassFish as a Service
    * Configuring a server to automatically start at boot can be tedious, and each platform does it differently.
    * The create-service command makes this easy. Windows: creates a Windows service. Linux: /etc/init.d script. Solaris: Service Management Facility (SMF) service.
    * Must execute create-service with admin privileges.
    * Can be used for the DAS or instances.
    * Try it first with the --dry-run option.
    * There is an (unsupported) _delete-server.
    Example:
        # asadmin create-service domain1
        The Service was created successfully. Here are the details:
        Name of the service: application/GlassFish/domain1
        Type of the service: Domain
        Configuration location of the service: /work/gf-3.1.2.2/glassfish3/glassfish/domains
        Manifest file location on the system: /var/svc/manifest/application/GlassFish/domain1_work_gf-3.1.2.2_glassfish3_glassfish_domains/Domain-service-smf.xml.
        You have created the service but you need to start it yourself. Here are the most typical Solaris commands of interest:
        * /usr/bin/svcs -a | grep domain1    // status
        * /usr/sbin/svcadm enable domain1    // start
        * /usr/sbin/svcadm disable domain1   // stop
        * /usr/sbin/svccfg delete domain1    // uninstall

    Tip #34: Posting a Command via REST
    * Use wget/curl to execute commands on the DAS.
    Example: Deploying an application
        % curl -s -S \
            -H 'Accept: application/json' -X POST \
            -H 'X-Requested-By: anyvalue' \
            -F id=@/path/to/application.war \
            -F force=true http://localhost:4848/management/domain/applications/application
    * Use @ before a file name to tell curl to send the file's contents.
    * The force option tells GlassFish to force the deployment in case the application is already deployed.

    Tip #46: Upgrading to a Newer Version
    * Upgrade applications and configuration from an earlier version.
    * Upgrade Tool: side-by-side upgrade
      – GUI: asupgrade
      – CLI: asupgrade --c
      – What happens? Copies the older source domain to the target domain directory, then: asadmin start-domain --upgrade
    * Update Tool and pkg: in-place upgrade
      – GUI: updatetool, install all Available Updates
      – CLI: pkg image-update
      – Upgrade the domain: asadmin start-domain --upgrade

    Tip #50: How to Reach Us
    * GlassFish Forum: http://www.java.net/forums/glassfish/glassfish
    * [email protected]
    * @glassfish
    * facebook.com/glassfish
    * youtube.com/GlassFishVideos
    * blogs.oracle.com/theaquarium

    Arun Gupta acknowledged that their method of presentation was experimental and actively solicited feedback about the session. The best way to reach them is on the GlassFish user forum. In addition, check out Gupta’s new book Java EE 6 Pocket Guide.

    Read the article

  • Marshalling C# Structs into DX11 cbuffers

    - by Craig
    I'm having some issues with the packing of my structs in C# and passing them through to cbuffers I have registered in HLSL. When I pack my struct in one manner, the information seems to pass to the shader correctly:

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            [FieldOffset(12)] public int type;
        }

    This works perfectly when used against this HLSL fragment:

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            int type;
        }

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    However, when I use the following structure definitions...

        // Note this is 16 because HLSL packs in 4-float 'chunks'.
        // It is also simplified, but still demonstrates the problem.
        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)] public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            // Missing 4 bytes here for correct packing.
            [FieldOffset(16)] public InternalTestStruct mInternal;
        }

    ...the following HLSL fragment no longer works:

        struct InternalType
        {
            int type;
        };

        cbuffer PerFrame : register(b0)
        {
            float3 eyePos;
            InternalType internalStruct;
        }

        float3 GetColour()
        {
            float3 returnColour = float3(0.0f, 0.0f, 0.0f);
            switch (internalStruct.type)
            {
                case 0: returnColour = float3(1.0f, 0.0f, 0.0f); break;
                case 1: returnColour = float3(0.0f, 1.0f, 0.0f); break;
                case 2: returnColour = float3(0.0f, 0.0f, 1.0f); break;
            }
            return returnColour;
        }

    Is there a problem with the way I am packing the struct, or is it another issue? To re-iterate: I can pass a struct in a cbuffer so long as it does not contain a nested struct.
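    Not a fix, but a quick way to sanity-check the CPU-side half of this is to dump the marshalled size and offsets; a minimal sketch (the 12-byte Vector3 stand-in below is an assumption in place of the real SlimDX/SharpDX vector type):

        using System;
        using System.Runtime.InteropServices;

        // Stand-in for the 12-byte vector type (assumption).
        [StructLayout(LayoutKind.Sequential)]
        internal struct Vector3 { public float X, Y, Z; }

        [StructLayout(LayoutKind.Explicit, Size = 16)]
        internal struct InternalTestStruct
        {
            [FieldOffset(0)] public int type;
        }

        [StructLayout(LayoutKind.Explicit, Size = 32)]
        internal struct TestStruct
        {
            [FieldOffset(0)] public Vector3 mEyePosition;
            [FieldOffset(16)] public InternalTestStruct mInternal;
        }

        internal static class LayoutCheck
        {
            private static void Main()
            {
                // Expect 32 and 16 if the explicit layout is honoured,
                // matching HLSL's 16-byte register packing.
                Console.WriteLine(Marshal.SizeOf(typeof(TestStruct)));
                Console.WriteLine(Marshal.OffsetOf(typeof(TestStruct), "mInternal"));
            }
        }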

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background: The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file changed between PS4 and PS5: in PS4 the configuration file was XML-based, while in PS5 it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File: The PS4 sample configuration file contains information about the following items:
    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints
    A copy of this file can be downloaded from here.

    PS5 - Configuration File: The corresponding PS5 sample configuration file contains similar information about the sample scenario but is not in XML format; it has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration: Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME, as in the working sample shell script sketched at the end of this entry. Also, as part of setting the environment, template jndi.properties and logging.properties files can be generated using the following ant command:
        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop
    The generated jndi.properties contains information about connectivity to the local WebLogic managed server instance, and the logging.properties file controls the amount of logging generated by the running simulator process; both can be modified as needed.

    Simulator Usage - Start and Stop: The command syntax to launch the simulator via ant is the same in PS4 and PS5; only the appropriate configuration file has to be supplied as the command-line argument, for example:
        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"
    This will start the simulator, which keeps running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:
        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop
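    The environment-setup extract referenced above was an image in the original post; a minimal reconstructed sketch, with placeholder paths rather than the original values, might look like this:

        #!/bin/sh
        # Reconstructed sketch - the paths below are placeholders,
        # not the values from the original script.
        JAVA_HOME=/u01/app/jdk160; export JAVA_HOME
        ANT_HOME=/u01/app/middleware/org.apache.ant_1.7.1; export ANT_HOME
        PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH; export PATH

        # Generate template jndi.properties and logging.properties:
        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop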

    Read the article

  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wed, April 11th. For those of you who couldn’t join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING’s drivers and rationale for the platform approach, the phased implementation strategy, results & metrics, roadmap and recommendations. We greatly appreciate the insight he shared on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that cater to expediting and streamlining compliance processes such as access certifications.

    While we tackled a few questions during the webcast, we have captured the responses to those we weren’t able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING’s implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old manager are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar accesses. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

    Q. Are you using an entitlements / role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA, OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll this up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers the Oracle Entitlement Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access or is everything truly role based?
    A. We are a hybrid environment with roles and individual positive and negative entitlements.

    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet and interview the business. It is an iterative process to get role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around 7 quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. The executive support was key to our success.

    Q. For your cloud instance, are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers. (Service Now is an example.)

    Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? E.g., a DBA role should not get the right to run things as root, as this would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good, where we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the identity warehouse as well if you haven't reached OIM yet?
    A. No, OIM deployment is not required to implement OIA’s Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform webcast series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register today. You can also register for a live event at a city near you where Aberdeen’s Derek Brink will discuss the survey results from the recently published report “Analyzing Platform vs. Point Solution Approach in Identity”. And you can do a quick (& free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here’s the slide deck from our ING webcast.

    Read the article

  • Embracing Imperfection

    - by Johnm
    The pursuit of perfection is a road on which we often find ourselves traveling. It is an unpaved road filled with pot-holes and ruts that often destroy our stride. The shoulders of this road are lined with the bones and rotting carcasses of well-planned projects, solutions and dreams of others who have dared the journey. Often the choice to engage in this travel is a compulsive one. We can't help but pack our bags and make the trip. We justify it by equating it to the delivery of a quality product or service. We use our past travels as validation of our worthiness and value. Our shared experience, as tortured pilgrims of perfection, reveals that each odyssey that bewitched us resulted in a stark reminder of the very weaknesses and fears that we were attempting to mollify. The voice of the critic that berated us for the lack of craftsmanship was our own. At the end of the journey, though, our own critical voice was joined by the gnashing of teeth of those who could not reap the fruit of our labor due to its lack of timely delivery.

    There is another road on which to travel: the pursuit of embracing imperfection. The cost of traveling this route is your contribution to its eternal construction. Each segment is designed uniquely. At times it has the appearance of a patchwork quilt; at other times it is well organized and highly measured. In all cases, its construction has continually advanced and been utilized as each segment was delivered by its architect. Those who choose this spindle of the crossroads crack open the shells of their fears to reveal the vapor within. They construct their houses upon these shells. Through their hunger for mastery they wring every drop of nectar from failure and discard its husks to the ditches of this road. Through their efforts the thoroughfare begins to develop a personality of its own, a beautifully human one, rich with the strengths and weaknesses of all of its contributors.

    Like many of us, the pursuit of perfection has not served me well. In fact, I would say that it has been more damaging than it has been helpful. While the perfectionist in me occasionally makes its presence known, I consider myself a "recovering perfectionist". It is evident to me that there is immense beauty found in imperfection. I choose to embrace it. It is grounding. It is constructive. It is honest.

    Read the article

  • Best Advice Ever: Learn By Helping Others

    - by Argenis
    I remember when, back in 2001, my friend and former SQL Server MVP Carlos Eduardo Rojas was busy earning his MVP street-cred in the NNTP forums, aka Newsgroups. I always thought he was playing the Sheriff trying to put some order in a Wild Wild West town by trying to understand what these people were asking. He spent a lot of time doing this stuff – and I thought it was just plain crazy. After all, he was doing it for free. What was he gaining from all of that work?

    It was not until the advent of Twitter and #SQLHelp that I realized the real gain behind helping others. Forget about the glory and the laurels of others thanking you (and thinking you’re the best thing ever – ha!), or whatever award with whatever three-letter acronym might be given to you. It’s about what you learn in the process of helping others. See, when you teach something, it’s usually at a fixed date and time, and on a specific topic. But helping others with their issues or general questions is something that goes on 24x7, on whatever topic under the sun. Just go look at sites like DBA.StackExchange.com, or the SQLServerCentral forums. It’s questions coming in literally non-stop from all corners of the world. And yet a lot of people are willing to help you, regardless of who you are, where you come from, or what time of day it is.

    And in my case, this process of helping others usually leads to me learning something new, especially in those cases where the question isn’t really something I’m good at. The delicate part comes when you’re ready to give an answer, but you’re not sure. Oftentimes I’ll try to validate with Internet searches and what have you. Oftentimes I’ll throw in a question mark at the end of the answer, so as not to look authoritative, but rather suggestive. But as time passes by, you get more and more comfortable with that topic. And that’s the real gain.

    I have done this for many years now on #SQLHelp, which is my preferred vehicle for providing assistance. I cannot tell you how much I’ve learned from it – by helping others, and by watching others help. It’s all knowledge and experience you gain…and you might not be getting all that in your day job today. Such a thing, my dear reader, is invaluable. It’s what will differentiate yours amongst a pack of resumes. It’s what will get you places. Take it from me – a guy who, like you, knew nothing about SQL Server.

    Read the article

  • Java Cloud Service for developers

    - by JuergenKress
    The advent of cloud computing has reinvented application development for many companies. “That’s the beauty of the cloud,” says Cameron Purdy, vice president of development, Oracle. “It dramatically improves developer productivity because they can do what they do best without having to manage complex development, testing, staging, and production environments.” The key is to find a platform that doesn’t impose proprietary restrictions or force developers to learn new tools. For example, Oracle Java Cloud Service is an enterprise-grade platform as a service for building and deploying Java EE, Oracle WebLogic Server, and Oracle Application Development Framework (Oracle ADF) applications. “It’s designed to be flexible and easy to use,” says Purdy. “And it is also a standards-based solution – it’s not proprietary and there is no cloud lock-in. Developers get instant access to an enterprise-grade environment for a simple, monthly subscription.”

    Oracle Java Cloud Service instances are created with just a few clicks, so businesses can create a rich application development environment within minutes. Running on Oracle WebLogic Server and Oracle Exalogic, the underlying infrastructure also leverages Oracle Fusion Middleware’s integration with common services. For example, instances come integrated and preconfigured with optimized Oracle Database and Oracle Identity Management configurations. Based on Oracle Enterprise Manager, the Oracle Java Cloud Service console lets customers easily manage and monitor their Oracle Java Cloud Service instances. The open nature of the Oracle Java Cloud Service lets developers integrate through Web services such as SOAP and REST APIs, as well as use their favorite developer tools, whether they are out-of-the-box tools such as Maven and Ant or the productivity features built into Oracle JDeveloper, Oracle Enterprise Pack for Eclipse, or NetBeans IDE. The service allows for the seamless movement of applications between on-premise Oracle WebLogic Server domains and instances of Oracle Java Cloud Service within Oracle Cloud. This approach allows the flexibility to mix and match the use of on-premise environments with cloud instances for development, test, and production environments. Visit the Oracle Java Cloud Service page to learn more and watch videos about the service.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Technorati Tags: java, cloud, oracle cloud, java cloud, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Different Not Automatically Implies Better

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/11/05/154556.aspx

    Recently I was digging deeper into why a WCF-hosted workflow application consumed quite a lot of memory although it basically only loaded a xaml workflow. The first tool of choice is Process Explorer, or even better Process Hacker (it has more options, and the best feature, copy & paste, does work). The three most important numbers of a process with regard to memory are Working Set, Private Working Set and Private Bytes:
    - Working Set is the currently consumed physical memory (parts can be shared between processes, e.g. loaded dlls which are read-only).
    - Private Working Set is the physical memory needed by this process which is not shareable.
    - Private Bytes is the number of non-shareable bytes which are only visible in the current process (e.g. all new, malloc and VirtualAlloc calls create private bytes).

    When you have a bigger workflow it can easily consume 500 MB under 64 bit for a 1-2 MB xaml file. This does not look very scalable. Under 64 bit the issue is excessive private bytes consumption and not the managed heap. The picture is quite different for 32 bit, which looks a bit strange, but it seems that the hosted VB compiler is a lot less memory hungry under 32 bit. I tried to repro the issue with a medium-sized xaml file (400 KB) which contains 1000 variables and 1000 ifs, which can be represented by C# code like this:

        string Var1;
        string Var2;
        ...
        string Var1000;
        if (!String.IsNullOrEmpty(Var1)) { Console.WriteLine("Var1"); }
        if (!String.IsNullOrEmpty(Var2)) { Console.WriteLine("Var2"); }
        ...

    Since WF is based on VB.NET expressions you are bound to the hosted VB.NET compiler, which results in (x64) 140 MB of private bytes - ca. 140 KB for each if clause, which is quite a lot if you think about the actually present functionality. But there is hope: .NET 4.5 now allows C# expressions for WF, which is a major step forward for all C# lovers. I created a simple patcher to “cross compile” my xaml to C# expressions. Let's look at the result (the original post compares screenshots of the C# and VB expression variants under x86 here). On my home machine I have only 32 bit, which gives you quite exactly half of the memory consumption under 64 bit. C# expressions are 10 times more memory hungry than VB.NET expressions! I wanted to do more with less memory, but instead it consumed a magnitude more memory. That is surprising, to say the least. The workflow initializes in about the same time under x64 and x86, where the VB code does it in 2s whereas the C# version needs 18s - also nearly ten times slower. That is too high a price to pay for any bigger-sized xaml workflow to convert from VB.NET to C# expressions. If I reduce the number of expressions to 500 then it needs 400 MB, which is about half of the memory. It seems that the cost per if rises linearly with the number of total expressions in a xaml workflow.

        Expression Language   Cost per IF   Startup Time
        C# 1000 Ifs x64       1.5 MB        18s
        C# 500 Ifs x64        750 KB        9s
        VB 1000 Ifs x64       140 KB        2s
        VB 500 Ifs x64        70 KB         1s

    Now we can directly compare two MS implementations. It is clear that the VB.NET compiler uses the same underlying structure, but the C# expression compiler has a much higher offset and is highly inefficient. I have filed a connect bug here with a harsher wording about recent advances in memory consumption. The funniest thing is that one MS employee gave an Azure AppFabric demo around early 2011 which was so slow that he needed to investigate with xperf. He was after startup time, and the call stacks with regard to VB.NET expression compilation were remarkably similar. In fact I only found this post by googling for parts of my call stacks. … “C# expressions will be coming soon to WF, and that will have different performance characteristics than VB” … What did he know in Jan 2011 that I did not know until today? ;-). He knew that C# expressions would come, but not that they would automatically have a better footprint. It is about time to fix that. In its current state C# expressions are not usable for bigger workflows. That also explains the headline for today. You can cheat startup time by prestarting workflows so that the demo looks nice and snappy, but it hurts scalability a lot since you need much more memory than necessary.

    I found the stacks by enabling virtual allocation tracking within xperf, which is still the best tool out there. But first you need to look at your process to check where the memory is hiding. For the C# expression compiler you do not need xperf; you can directly dump the managed heap and check with a profiler of your choice. But if the allocations are happening on Private Data (VirtualAlloc) you can find them with xperf. There is a nice video on Channel 9 explaining VirtualAlloc tracking in greater detail. If your data allocations are on the Heap it means that the C/C++ runtime created a heap for you, from which all malloc and new calls allocate. You can enable heap tracing with xperf, with full call stack support as well, which is also shown on Channel 9. Or you can use WPRUI directly. To make “Heap Usage” work you need to set the tracing flags for your executable (before you start it). For example, for devenv.exe:

        Key:   HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe
        Value: TracingFlags (DWORD) = 1

    Do not forget to disable it after you have completed profiling the process, or it will impact the startup time quite a lot. With xperf you can attach directly to a running process and collect heap allocation information from a gone-wild process - very handy if you need to find out what a process was doing once it has arrived in a funny state. “VirtualAlloc usage” works without explicitly enabling anything for a specific process and is always on machine-wide. I had issues on my Windows 7 machines with call stack collection and the latest Windows 8.1 Performance Toolkit. I was told that WPA from Windows 8.0 should work fine, but I do not want to downgrade.
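    For convenience, the same registry value can be set and removed from an elevated command prompt with the stock reg.exe tool (a sketch; the post itself only lists the key and value):

        rem Enable heap tracing for devenv.exe (run from an elevated prompt).
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe" /v TracingFlags /t REG_DWORD /d 1 /f

        rem Remove the flag again once profiling is done, to avoid the startup hit:
        reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\devenv.exe" /v TracingFlags /f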

    Read the article

  • ArchBeat Link-o-Rama for 10-24-2012

    - by Bob Rhubart
    Play Oracle Vanquisher
    Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind.

    Book: DevOps for Developers | The Java Source
    The subject of DevOps has come up in a couple of recent OTN ArchBeat Podcasts, so it's somewhat serendipitous that Tori Weildt's recent blog post offers an overview of Java Champion Michael Hutterman's new book, DevOps for Developers, now available from Apress.

    Bring Your Own Device (BYOD): Context is everything… | The ORACLE-BASE Blog
    BYOD is a factor in the evolution of IT, but in what context? "The real IT work in companies is still being done on PCs," says Oracle ACE Director Tim Hall. "Yes, you can use a cloud service on your phone, but look around the office and you will see those cloud services are actually being used by people on PCs."

    Oracle in the Cloud: Oracle EBusiness Suite sizing | Tom Laszewski
    Cloud expert Tom Laszewski shares several technical resources that will be helpful for sizing Oracle EBusiness Suite.

    Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
    Author and expert Yuli Vasiliev shows you how to take advantage of multiple Oracle WebLogic Server instances grouped into a cluster to maximize scalability and availability.

    Webcast: Reduce Costs with Oracle's Database Storage Management
    Watch this! Join Oracle experts Kevin Jernigan and Margaret Hamburger for an interactive webcast in which you'll learn how Oracle's Database Storage Management can reduce storage costs and management complexity while improving query performance to meet service-level agreements and compliance requirements. Event date: Tuesday, November 6, 2012. Event time: 10 a.m. PT / 1 p.m. ET.

    Thought for the Day
    "Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves." — Alan Kay
    Source: softwarequotes.com

    Read the article

  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about 1 week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting - graphs and event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer - works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's

    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.

    Thanks and Happy Monitoring!
    - The MySQL Enterprise Tools Development Team

    Read the article

  • MySQL Enterprise Monitor 3.0.3 Is Now Available

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.3 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud with the November update in about 1 week. This is a maintenance release that fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:
    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting - graphs and event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer - works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's

    More information on the contents of this release is available here:
    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:
    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.

    Thanks and Happy Monitoring!
    - The MySQL Enterprise Tools Development Team

    Read the article

  • Extra, Extra, Read All About It- Offer Ends Soon!

    - by Kristin Rose
    Start spreading the news... your partner news, of course, by submitting all interesting presentation ideas to the Oracle OpenWorld 2012 Call for Papers. Though you may not be able to serenade your customers with a voice like Sinatra’s, you can still get their attention by sharing your customer solutions, highlighting your achievements and attempting your best “Old Blue Eyes” impersonation. This call for papers will end April 9th, 2012, so don’t be a stranger in the night; instead, fly your company to the moon and back by getting those papers in. May luck be a lady, or simply on your side, as all accepted submission speakers will receive a complimentary pass to the event they have been accepted for. Yes, you’re lovely, so why wait any longer? Join the Oracle OpenWorld 2012 ‘Rat Pack’ today by watching the video below or submitting to the call for papers.

    The best is yet to come,
    The OPN Communications Team

    Read the article

  • Finding "Stuff" In OUM

    - by Dave Burke
    One of the first questions people ask when they start using the Oracle Unified Method (OUM) is “how do I find X?” Of course, no one is really looking for “X”! But typically an OUM user might know the Task ID, or part of the Task Name, or maybe they just want to find out if there is any content within OUM related to a couple of keywords they have in mind. Here are three quick tips I give people:

    1. Use your Browser’s search function. Open up one of the OUM Views, click “Expand All”, and then use your Browser’s search function to locate a keyword. For example, in Google Chrome or Internet Explorer: <CTRL> F, then type in a keyword, e.g. Architecture. This is a fast and easy option to use, but it only searches the current OUM page.

    2. Use the PDF view of OUM. Open up one of the OUM Views, and then click the PDF View button located at the top of the View. Depending on your Browser’s settings, the PDF file will either open up in a new Window or be saved to your local machine. In either case, once the PDF file is open, you can use the built-in PDF search commands to search for keywords across a large portion of the OUM Method Pack. This is a great option for searching the entire Full Method View of OUM, including linked HTML pages; however, the search will not include linked documents, i.e. Word, Excel.

    3. Use your operating system’s file index to search for keywords. This is my favorite option, and one I use virtually every day. I happen to use Windows Search, but you could also use Google Desktop Search, or Finder on a Mac. All you need to do (on a Windows machine) is to make sure your local OUM folder structure is included in the Windows Index: go to Control Panel, select Indexing Options, and ensure your OUM folder is included in the Index, e.g. C:/METHOD/OM40/OUM_5.6. Once your OUM folders are indexed, just open up Windows Search (or Google Desktop Search) and type in your keywords, e.g. Unit Testing. The reason I use this option the most is because the search takes place across the entire content of the indexed folders, including linked files.

    Happy searching!

    Read the article

  • openGL textures in bitmap mode

    - by evenex_code
    For reasons detailed here I need to texture a quad using a bitmap (as in, 1 bit per pixel, not an 8-bit pixmap). Right now I have a bitmap stored in an on-device buffer, and am mounting it like so:

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0, GL_COLOR_INDEX, GL_BITMAP, 0);

    The OpenGL spec has this to say about glTexImage2D: "If type is GL_BITMAP, the data is considered as a string of unsigned bytes (and format must be GL_COLOR_INDEX). Each data byte is treated as eight 1-bit elements..." Judging by the spec, each bit in my buffer should correspond to a single pixel. However, the following experiments show that, for whatever reason, it doesn't work as advertised:

    1) When I build my texture, I write to the buffer in 32-bit chunks. From the wording of the spec, it is reasonable to assume that writing 0x00000001 for each value would result in a texture with 1-px-wide vertical bars with 31-wide spaces between them. However, it appears blank.

    2) Next, I write with 0x000000FF. By my apparently flawed understanding of the bitmap mode, I would expect that this should produce 8-wide bars with 24-wide spaces between them. Instead, it produces a white 1-px-wide bar.

    3) 0x55555555 = 01010101010101010101010101010101 in binary, therefore writing this value ought to create 1-wide vertical stripes with 1-pixel spacing. However, it creates a solid gray color.

    4) Using my original 8-bit pixmap in GL_BITMAP mode produces the correct animation.

    I have reached the conclusion that, even in GL_BITMAP mode, the texturer is still interpreting 8 bits as 1 element, despite what the spec seems to suggest. The fact that I can generate a gray color (while I was expecting to be working in two-tone), as well as the fact that my original 8-bit pixmap generates the correct picture, support this conclusion.

    Questions:
    1) Am I missing some kind of prerequisite call (perhaps for setting a stride length or pack alignment or something) that will signal to the texturer to treat each byte as 8 elements, as the spec suggests?
    2) Or does it simply not work because modern hardware does not support it? (I have read that GL_BITMAP mode was deprecated in 3.3; I am however forcing a 3.0 context.)
    3) Am I better off unpacking the bitmap into a pixmap using a shader? This is a far more roundabout solution than I was hoping for, but I suppose there is no such thing as a free lunch.
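    For reference, here is how the pixel-store state that question 1 speculates about would be set before the upload; a sketch only, not a confirmed fix for this behavior:

        /* Sketch: pixel-store state governing GL_BITMAP unpacking.
           GL_UNPACK_ALIGNMENT sets the row alignment in bytes;
           GL_UNPACK_LSB_FIRST controls the bit order within each byte. */
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        /* rows start on byte boundaries  */
        glPixelStorei(GL_UNPACK_LSB_FIRST, GL_TRUE);  /* take bit 0 of each byte first  */
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, BFR.G[(T+1)%2]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0,
                     GL_COLOR_INDEX, GL_BITMAP, 0);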

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part I shared with you how a broad-based transformation makes cloud more than a technology initiative. In this section I will describe how it requires people (organizational) and process changes as well, and how these changes are as critical as the choice of the right tools and technology.

    People: Most IT organizations have a fairly complex organizational structure, with different groups managing different pieces of the puzzle that don't always work together. Provisioning a new application may therefore require a request to float endlessly through the worlds of system administrators, DBAs and middleware admins - resulting in long delays and constant finger pointing. Cloud users expect end-to-end automation, which requires these silos to be greatly simplified, if not completely eliminated. Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption, it also risks making IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.

    Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging to outdated manual approval processes. While requiring approvals for scarce resources makes sense, insisting that every single request be manually approved defeats the very purpose of cloud. Not only does this cause delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies and quotas that the administrators define upfront. For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources.

    Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it enables a much higher degree of automation, thereby providing users the required agility while minimizing operational costs. Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have. Many organizations are just now waking up to the criticality of automation, and it still often gets relegated to the back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical. For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications:

    Broad and Deep Solution: It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about. Its footprint must cover physical and virtual systems, as well as the infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - and on its own it can only provide limited business benefits. It is actually the higher-level services, such as (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise.

    Run and Manage Your Mission Critical Applications Efficiently: It should not only be able to run your mission critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - and not the other way around.

    Automated, Integrated Set of Cloud Management Capabilities: At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose that you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you.

    At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private and hybrid clouds across infrastructure, platform and applications. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. To a large part, this has been made possible thanks to our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of Enterprise Cloud. Stay Tuned.

    Read the article

  • "Error in the Site Data Web Service." when performing crawl

    - by Janis Veinbergs
Installed SharePoint Services v3 (SP2, October 2009 cumulative updates, Language Pack), attached to a content database I had previously (all works). Installed Search Server 2008 Express (with language pack) on top of WSS, and crawl does not work. However, it works for a newly created web application + database. I was playing around with accounts and permissions to try to get it working. Currently I have a WSS_Crawler account with these permissions:

Office Search Server runs with the WSS_Crawler account
Config database has read permissions for WSS_Crawler
Content database has read permissions for WSS_Crawler
WSS_Crawler is owner of the search database
Added WSS_Crawler to the SQL Server browser user group and administrator

Yes, I've given more permissions than needed, but it doesn't even work with that, and I don't know whether it's a permissions problem or something else. The crawl log says there is "Error in the Site Data Web Service.", nothing more. There were known issues with a similar error, "Error in the Site Data Web Service. (Value does not fall within the expected range.)", but this is not the case, as that's an old issue and I hope the fix has been included in SP2... Logs are ordered from oldest to newest. They don't appear to be very helpful.

Crawl log
http://serveris Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM
sts3://serveris Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM
sts3://serveris/contentdbid={55180cfa-9d2d-46e4... Crawled Local Office SharePoint Server sites 3/15/2010 9:39 AM
http://serveris/test Error in the Site Data Web Service. Local Office SharePoint Server sites 3/15/2010 9:39 AM
http://serveris Error in the Site Data Web Service. Local Office SharePoint Server sites 3/15/2010 9:39 AM

EventLog
No errors in the EventLog, just some Information events that Office Server Search provides:
The search service started.
Successfully stored the application configuration registry snapshot in the database. Context: Application 'SharedServices
Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: Portal_Content. A master merge was started due to an external request.
Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog Portal_Content.
Component: da1288b2-4109-4219-8c0c-3a22802eb842 Catalog: AnchorProject. A master merge was started due to an external request.
Component: da1288b2-4109-4219-8c0c-3a22802eb842 A master merge has completed for catalog AnchorProject.

ULS Log
Just some information, but no exceptions or unexpected errors:
03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Insert crawl 771 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Request Start Crawl 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:28.28 mssearch.exe (0x1B2C) 0x0E8C Search Server Common GatherStatus 0 Monitorable Advise status change 1, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:28.28 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 8wn6 Information A full crawl was started on 'Local Office SharePoint Server sites' by BALTICOVO\janis.veinbergs.
03/15/2010 09:03:28.43 mssdmn.exe (0x1750) 0x10F8 ULS Logging Unified Logging Service 8wsv High ULS Init Completed (mssdmn.exe, Microsoft.Office.Server.Native.dll) 03/15/2010 09:03:30.48 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0v Medium Create CCache 03/15/2010 09:03:30.56 mssdmn.exe (0x1750) 0x09C0 Search Server Common MS Search Indexing 8z0z Medium Create CUserCatalogCache 03/15/2010 09:03:32.06 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 90ge Medium SQL: dbo.proc_MSS_PropagationGetQueryServers 03/15/2010 09:03:32.09 w3wp.exe (0x1D98) 0x0958 Search Server Common MS Search Administration 7phq High GetProtocolConfigHelper failed in GetNotesInterface(). 03/15/2010 09:03:34.26 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:35.92 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.32 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:37.23 mssdmn.exe (0x1750) 0x1850 Search Server Common MS Search Indexing 8z14 Medium Test TRACE (NULL):(null), (NULL)(null), (CrLf): , end 03/15/2010 09:03:39.04 mssearch.exe (0x1B2C) 0x16A4 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x0B24 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. error 2147755542, strWebUrl http://serveris 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=test/siteid={390611b2-55f3-4a99-8600-778727177a28}/weburl=/webid={fb0e4bff-65d5-4ded-98d5-fd099456962b}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.87 mssdmn.exe (0x1750) 0x1260 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:40.98 mssdmn.exe (0x1750) 0x1260 Search Server Common MS Search Indexing 7how Monitorable GetWebDefaultPage fail. 
error 2147755542, strWebUrl http://serveris/test 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::GetSubWebListItemAccessURL GetAccessURL failed: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:505 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init: GetSubWebListItemAccessURL failed. Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3acc.cxx Line:348 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Accessor::Init fails, Url sts3://serveris/siteurl=/siteid={505443fa-ef12-4f1e-a04b-d5450c939b78}/weburl=/webid={c5a4f8aa-9561-4527-9e1a-b3c23200f11c}, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:243 03/15/2010 09:03:41.90 mssdmn.exe (0x1750) 0x0B24 Search Server Common PHSts 0 Monitorable CSTS3Handler::CreateAccessorExB: Return error to caller, hr=80042616 - File:d:\office\source\search\search\gather\protocols\sts3\sts3handler.cxx Line:261 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 24, project Portal_Content, crawl 771 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Remove crawl 771 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:43.26 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project Portal_Content, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Insert crawl 772 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:44.65 mssearch.exe (0x1B2C) 0x1804 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project Portal_Content - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:44.82 mssearch.exe (0x1B2C) 0x1DD0 Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:44.95 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.51 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:46.39 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 
0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.01 mssearch.exe (0x1B2C) 0x1C6C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.87 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.29 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.53 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.67 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.82 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.84 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:49.90 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 772 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:51.00 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:51.42 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Remove crawl 772 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Insert crawl 773 to inprogress queue hr 0x00000000 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6591 03/15/2010 09:03:52.96 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common 
GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Removed start crawl request from Queue 0, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2942 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Request Start Crawl 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2875 03/15/2010 09:03:55.29 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GatherStatus 0 Monitorable Advise status change 0, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x1CCC Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:55.37 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:56.71 mssearch.exe (0x1B2C) 0x1E4C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:56.78 mssearch.exe (0x1B2C) 0x0750 Search Server Common GatherStatus 0 Monitorable Advise status change 12, project AnchorProject, crawl -1 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.40 mssearch.exe (0x1B2C) 0x155C Search Server Common GathererSql 0 Monitorable CGatherer::LoadTransactionsFromCrawlInternal Flush anchor, count 0 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4943 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x155C Search Server Common GatherStatus 0 Monitorable Advise status change 4, project AnchorProject, crawl 773 - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:4853 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Remove crawl 773 from inprogress queue - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:6722 03/15/2010 09:03:58.89 mssearch.exe (0x1B2C) 0x1130 Search Server Common GatherStatus 0 Monitorable Unlock Queue, project AnchorProject - File:d:\office\source\search\search\gather\server\gatherobj.cxx Line:2922 What could be wrong here - any clues?
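For completeness, one more check that can separate an indexing problem from a web service problem: request the Site Data web service directly while running as the crawl account (DOMAIN below is a placeholder for the real domain; SiteData.asmx is the standard WSS endpoint the crawler calls):

    rem Opens the endpoint in a browser under the crawl account's credentials;
    rem an authentication prompt or error page here points away from search itself
    runas /user:DOMAIN\WSS_Crawler "cmd /c start http://serveris/test/_vti_bin/sitedata.asmx"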

    Read the article

  • SCOM, Server 2008 and SQL Server 2008

    - by Jacques
Hi there, I'm trying to set up SCOM (System Center Operations Manager 2007 – Platform Monitoring) on my Server 2008 machine, using SQL Server 2008 running on the same machine. When I check my prerequisites I get problems on the SQL and Active Directory components. (I'm running SQL Server 2008, and Server 2008 without Active Directory installed.) Errors:
1. Microsoft SQL Server 2005 Service Pack 1 required. Details: SQL Server 2005 SP1 is the next version of SQL Server. SQL Server 2005 Enterprise Edition is a complete data and analysis platform for large mission-critical business applications. The link provided in the resolution column is a trial version of the product and is not supported by the Microsoft SQL Server team.
2. In order to install, Active Directory needs to be present. Details: Setup failed to verify the presence of Active Directory for this server.
I've got a couple of questions I need answered, and I hope someone can help. Do I need to install Active Directory for SCOM to work? Can I run SCOM with a SQL Server 2008 database? How do I get past these problems?
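For anyone comparing prerequisite results on their own box: the installed SQL Server version, service pack level and edition that the checker is testing against can be confirmed with standard SERVERPROPERTY calls (plain T-SQL, nothing SCOM-specific):

    -- ProductVersion 10.0.x = SQL Server 2008; ProductLevel shows RTM / SPn
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')   AS ProductLevel,
           SERVERPROPERTY('Edition')        AS Edition;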

    Read the article

  • Git apache : unable to push via http

    - by GlinesMome
    I have to setup a server which can allow http vcs management (such as git and svn). svn support works well, but I have some trouble with git. Actual configuration: CentOS 5 Apache 2.2.8 Git 1.7.4.1 The /etc/httpd/conf/httpd.conf content: ServerTokens OS ServerRoot "/etc/httpd" PidFile run/httpd.pid Timeout 120 KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 10 <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule> <IfModule worker.c> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> Listen 80 LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_alias_module modules/mod_authn_alias.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule ldap_module modules/mod_ldap.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule include_module modules/mod_include.so LoadModule log_config_module modules/mod_log_config.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule expires_module modules/mod_expires.so LoadModule deflate_module modules/mod_deflate.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule mime_module modules/mod_mime.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule cache_module modules/mod_cache.so LoadModule suexec_module modules/mod_suexec.so LoadModule disk_cache_module modules/mod_disk_cache.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule mem_cache_module modules/mod_mem_cache.so LoadModule cgi_module modules/mod_cgi.so LoadModule mysql_auth_module modules/mod_auth_mysql.so LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.2/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.2 PassengerRuby /usr/bin/ruby Include 
conf.d/*.conf User apache Group apache ServerAdmin aedi.admin@domain ServerName s1.domain UseCanonicalName Off DocumentRoot "/data/www/" <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory "/data/www/"> Options -Indexes FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> <IfModule mod_userdir.c> UserDir disable </IfModule> DirectoryIndex index.html index.html.var AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all </Files> TypesConfig /etc/mime.types DefaultType text/plain <IfModule mod_mime_magic.c> MIMEMagicFile conf/magic </IfModule> HostnameLookups Off ErrorLog logs/error_log LogLevel warn LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent CustomLog logs/access_log combined ServerSignature On Alias /icons/ "/var/www/icons/" <Directory "/var/www/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </IfModule> </Directory> <IfModule mod_dav_fs.c> DAVLockDB /var/lib/dav/lockdb ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" <Directory "/var/www/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable AddIconByEncoding (CMP,/icons/compressed.gif) x-compress x-gzip AddIconByType (TXT,/icons/text.gif) text/* AddIconByType (IMG,/icons/image2.gif) image/* AddIconByType (SND,/icons/sound2.gif) audio/* AddIconByType (VID,/icons/movie.gif) video/* AddIcon /icons/binary.gif .bin .exe AddIcon /icons/binhex.gif .hqx AddIcon /icons/tar.gif .tar AddIcon /icons/world2.gif .wrl .wrl.gz .vrml .vrm .iv AddIcon /icons/compressed.gif .Z .z .tgz .gz .zip AddIcon /icons/a.gif .ps .ai .eps AddIcon /icons/layout.gif .html .shtml .htm .pdf AddIcon /icons/text.gif .txt AddIcon /icons/c.gif .c AddIcon /icons/p.gif .pl .py AddIcon /icons/f.gif .for AddIcon /icons/dvi.gif .dvi AddIcon /icons/uuencoded.gif .uu AddIcon /icons/script.gif .conf .sh .shar .csh .ksh .tcl AddIcon /icons/tex.gif .tex AddIcon /icons/bomb.gif core AddIcon /icons/back.gif .. 
AddIcon /icons/hand.right.gif README AddIcon /icons/folder.gif ^^DIRECTORY^^ AddIcon /icons/blank.gif ^^BLANKICON^^ DefaultIcon /icons/unknown.gif ReadmeName README.html HeaderName HEADER.html AddLanguage ca .ca AddLanguage cs .cz .cs AddLanguage da .dk AddLanguage de .de AddLanguage el .el AddLanguage en .en AddLanguage eo .eo AddLanguage es .es AddLanguage et .et AddLanguage fr .fr AddLanguage he .he AddLanguage hr .hr AddLanguage it .it AddLanguage ja .ja AddLanguage ko .ko AddLanguage ltz .ltz AddLanguage nl .nl AddLanguage nn .nn AddLanguage no .no AddLanguage pl .po AddLanguage pt .pt AddLanguage pt-BR .pt-br AddLanguage ru .ru AddLanguage sv .sv AddLanguage zh-CN .zh-cn AddLanguage zh-TW .zh-tw LanguagePriority en ca cs da de el eo es et fr he hr it ja ko ltz nl nn no pl pt pt-BR ru sv zh-CN zh-TW ForceLanguagePriority Prefer Fallback AddDefaultCharset UTF-8 AddType application/x-compress .Z AddType application/x-gzip .gz .tgz AddHandler type-map var AddType text/html .shtml AddOutputFilter INCLUDES .shtml Alias /error/ "/var/www/error/" <IfModule mod_negotiation.c> <IfModule mod_include.c> <Directory "/var/www/error"> AllowOverride None Options IncludesNoExec AddOutputFilter Includes html AddHandler type-map var Order allow,deny Allow from all LanguagePriority en es de fr ForceLanguagePriority Prefer Fallback </Directory> </IfModule> </IfModule> BrowserMatch "Mozilla/2" nokeepalive BrowserMatch "MSIE 4\.0b2;" nokeepalive downgrade-1.0 force-response-1.0 BrowserMatch "RealPlayer 4\.0" force-response-1.0 BrowserMatch "Java/1\.0" force-response-1.0 BrowserMatch "JDK/1\.0" force-response-1.0 BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully BrowserMatch "MS FrontPage" redirect-carefully BrowserMatch "^WebDrive" redirect-carefully BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully BrowserMatch "^gnome-vfs/1.0" redirect-carefully BrowserMatch "^XML Spy" redirect-carefully BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully NameVirtualHost *:80 NameVirtualHost *:443 <VirtualHost *:80> DocumentRoot /data/www/s1/html ServerName s1.asso.domain ErrorLog logs/s1.error.log </VirtualHost> <VirtualHost *:80> DocumentRoot /data/www/s2/old ServerName s2.domain ErrorLog logs/s2.error.log RailsBaseURI /blog <Directory /data/www/s2/html/blog> Options -MultiViews </Directory> </VirtualHost> <VirtualHost *:443> DocumentRoot /data/www/s2/html ServerName s2.domain ErrorLog logs/s2.error.log RailsBaseURI /blog <Directory /data/www/s2/html/blog> Options -MultiViews </Directory> </VirtualHost> The /etc/httpd/conf.d/git.conf content: Alias /git /data/www/s2/git <Directory /data/www/s2/git> Options +Indexes DAV on SSLRequireSSL </Directory> Fine, every repository are created by the same way: git --bare init "$1.git" && cd "$1.git" && git update-server-info && chmod -R 770 . && cd .. && git clone `pwd`/"$1.git" && cd "$1" && echo 42 > answer && git add . && git commit -m "Initial commit" && git push origin master && git rm answer && git commit -a -m "Clean repository" && git push && cd .. && rm -Rf "$1" Then, on the client side, I try: ~ $ git clone https://s2.domain/git/repo.git Cloning into 'repo'... warning: You appear to have cloned an empty repository. ~ $ cd repo repo $ echo 42 > answer && git add . && git commit -m "init" && git push origin master [master (root-commit) a2aadb1] init 1 file changed, 1 insertion(+) create mode 100644 answer Fetching remote heads... 
refs/ refs/heads/ refs/tags/ updating 'refs/heads/master' from 0000000000000000000000000000000000000000 to a2aadb1772e12104ce358f7ff9a11db5d93ead7d sending 3 objects MOVE d81cc0710eb6cf9efd5b920a8453e1e07157b6cd failed, aborting (22/502) MOVE 2c186ad49fa24695512df5e41cb5e6f2d33c119b failed, aborting (22/502) MOVE a2aadb1772e12104ce358f7ff9a11db5d93ead7d failed, aborting (22/502) Updating remote server info fatal: git-http-push failed The apache associated logs: my.ip - - [21/Sep/2012:16:19:19 +0200] "GET /git/repo.git/info/refs?service=git-upload-pack HTTP/1.1" 200 - "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:19 +0200] "GET /git/repo.git/HEAD HTTP/1.1" 200 23 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:48 +0200] "GET /git/repo.git/info/refs?service=git-receive-pack HTTP/1.1" 200 - "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "GET /git/repo.git/HEAD HTTP/1.1" 200 23 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/ HTTP/1.1" 207 569 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "HEAD /git/repo.git/info/refs HTTP/1.1" 200 - "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "HEAD /git/repo.git/objects/info/packs HTTP/1.1" 200 - "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MKCOL /git/repo.git/info/ HTTP/1.1" 405 336 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "LOCK /git/repo.git/info/refs HTTP/1.1" 200 475 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "GET /git/repo.git/objects/info/packs HTTP/1.1" 200 1 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/refs/ HTTP/1.1" 207 2608 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/refs/heads/ HTTP/1.1" 207 941 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/refs/tags/ HTTP/1.1" 207 940 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MKCOL /git/repo.git/refs/ HTTP/1.1" 405 336 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MKCOL /git/repo.git/refs/heads/ HTTP/1.1" 405 342 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "LOCK /git/repo.git/refs/heads/master HTTP/1.1" 200 475 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/objects/a2/ HTTP/1.1" 404 317 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/objects/2c/ HTTP/1.1" 207 4565 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/objects/d8/ HTTP/1.1" 207 4565 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PUT /git/repo.git/objects/d8/1cc0710eb6cf9efd5b920a8453e1e07157b6cd_20ca3a58daa09e54112968cbd4e86580b6301074 HTTP/1.1" 201 373 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MKCOL /git/repo.git/objects/a2/ HTTP/1.1" 201 296 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PUT /git/repo.git/objects/2c/186ad49fa24695512df5e41cb5e6f2d33c119b_20ca3a58daa09e54112968cbd4e86580b6301074 HTTP/1.1" 201 373 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MOVE /git/repo.git/objects/d8/1cc0710eb6cf9efd5b920a8453e1e07157b6cd_20ca3a58daa09e54112968cbd4e86580b6301074 HTTP/1.1" 502 341 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MOVE /git/repo.git/objects/2c/186ad49fa24695512df5e41cb5e6f2d33c119b_20ca3a58daa09e54112968cbd4e86580b6301074 HTTP/1.1" 502 341 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PUT 
/git/repo.git/objects/a2/aadb1772e12104ce358f7ff9a11db5d93ead7d_20ca3a58daa09e54112968cbd4e86580b6301074 HTTP/1.1" 201 373 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "MOVE /git/repo.git/objects/a2/aadb1772e12104ce358f7ff9a11db5d93ead7d_20ca3a58daa09e54112968cbd4e86580b6301074 HTTP/1.1" 502 341 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "UNLOCK /git/repo.git/refs/heads/master HTTP/1.1" 204 - "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/refs/ HTTP/1.1" 207 2608 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/refs/heads/ HTTP/1.1" 207 941 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "PROPFIND /git/repo.git/refs/tags/ HTTP/1.1" 207 940 "-" "git/1.7.11.4" my.ip - - [21/Sep/2012:16:19:49 +0200] "UNLOCK /git/repo.git/info/refs HTTP/1.1" 204 - "-" "git/1.7.11.4" I have tried many configurations (including the smart HTTP setup from Pro Git), but most of them assume a dedicated domain, whereas I'm serving git from a sub-directory, so I can't apply those examples directly. Do you have an idea of what the problem is? Do you have a solution, or a configuration example for a non-root directory? Thanks in advance for your help.
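For reference, here is roughly what the Pro Git smart HTTP setup looks like when adapted to a sub-directory instead of a dedicated domain. This is a sketch, not a tested configuration for the server above: the git-http-backend path is the usual CentOS location and is a guess, and it assumes mod_cgi and mod_env are loaded (both appear in the httpd.conf above):

    # Replaces the WebDAV block in git.conf; serves /git/ via git-http-backend
    SetEnv GIT_PROJECT_ROOT /data/www/s2/git
    SetEnv GIT_HTTP_EXPORT_ALL
    # CGI path varies by distro; this is the common CentOS location
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/
    <Location /git>
        SSLRequireSSL
        Order allow,deny
        Allow from all
    </Location>

With this in place the WebDAV LOCK/MOVE requests go away entirely, since the backend speaks the native git pack protocol over plain GET and POST.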

    Read the article

  • System State Backups using NTbackup fail with error 0x800423f4 (relating to volume shadow copy)

    - by Paul Zimmerman
We have a Windows Server 2003 R2 machine running Service Pack 2. It is a domain controller (Global Catalog) and our main internal DNS server. We run a System State backup of the machine to back up Active Directory information and save the backup to a different server. This server has a single drive (C:), and we do have Shadow Copies enabled for the volume (which are completing successfully). The System State backup is now failing with the following listed in the backup logs:

Volume shadow copy creation: Attempt 1. "Event Log Writer" has reported an error 0x800423f4. This is part of System State. The backup cannot continue.
Error returned while creating the volume shadow copy: 800423f4
Aborting Backup.
The operation did not successfully complete.

When doing a vssadmin list writers, we sometimes get the following reported for the Event Log Writer (other times it says that it is in the state "[1] Stable" with "No error"):

Writer name: 'Event Log Writer'
Writer Id: {eee8c692-67ed-4250-8d86-390603070d00}
Writer Instance Id: {c7194e96-868a-49e5-ba99-89b61977753c}
State: [8] Failed
Last error: Retryable error

We have tried disabling the Event Log service via the registry, rebooting, deleting the event log files from the drive, then re-enabling the service via the registry and rebooting, but this didn't solve the issue. We also get an error message in Event Viewer when trying to open the log for the "File Replication Service": "Unable to complete the operation on 'File Replication Service'. The security descriptor structure is invalid." I have searched the error via Google and tried a number of different things, but nothing has seemed to help. Any suggestions on what we might try to get the Event Log Writer to behave would be greatly appreciated!
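For anyone reproducing this, the minimal check loop between attempts is just the stock commands below. Note that restarting the Volume Shadow Copy service only resets the service itself, not the writers, which live inside their host services (the Event Log Writer is hosted by the Event Log service):

    rem Check the writer state after each change
    vssadmin list writers

    rem Bounce the VSS service and re-check
    net stop vss
    net start vss
    vssadmin list writers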

    Read the article

  • Error cloning gitosis-admin on new setup

    - by michaelmior
I have the following in my gitosis.conf (created via gitosis-init < id_rsa.pub with the key from my laptop):

[gitosis]
loglevel = DEBUG

[group gitosis-admin]
writable = gitosis-admin
members = michael@laptop

When I try git clone git@SERVER:gitsos-admin.git, I get the following errors:

Initialized empty Git repository in /home/michael/gitsos-admin/.git/
DEBUG:gitosis.serve.main:Got command "git-upload-pack 'gitsos-admin.git'"
DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'writable' on 'gitsos-admin.git'...
DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin'
DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin'
DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'writeable' on 'gitsos-admin.git'...
DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin'
DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin'
DEBUG:gitosis.access.haveAccess:Access check for 'michael@laptop' as 'readonly' on 'gitsos-admin.git'...
DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'gitsos-admin.git', new value 'gitsos-admin'
DEBUG:gitosis.group.getMembership:found 'michael@laptop' in 'gitosis-admin'
ERROR:gitosis.serve.main:Repository read access denied
fatal: The remote end hung up unexpectedly

I know my key is being accepted because I have tried logging in via SSH, and although a terminal won't be allocated, the authorization works.
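For reference, a quick server-side sanity check, assuming the default gitosis layout with repositories under the git user's home directory:

    # On the server: list the repositories gitosis actually manages
    ls ~git/repositories
    # The name in the clone URL must match one of these directories exactly,
    # e.g. gitosis-admin.git is cloned as: git clone git@SERVER:gitosis-admin.git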

    Read the article

  • How to import certificate for Apache + LDAPS?

    - by user101956
I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use ldap (plain text) my configuration works great:

LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
LDAPVerifyServerCert Off
<Location />
AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
AuthLDAPBindPassword ..removed..
AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
AuthType Basic
AuthName "USE YOUR WINDOWS ACCOUNT"
AuthBasicProvider ldap
AuthUserFile /dev/null
require valid-user
</Location>

I also tried the other encryption choices besides CA_DER just to be safe, with no luck. Finally, I also needed this to work with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

After running the above line, LDAPS worked great via Tomcat. This tells me that my certificate is fine.

Update: Both ldap modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone, this is the error returned:

C:\Temp>git clone http://eqb9718@localhost/git/Liferay.git
Cloning into Liferay...
Password:
error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
fatal: HTTP request failed

access.log has this:

127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs service=git-upload-pack HTTP/1.1" 500 535
127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

apache_error.log has nothing. Is there any more verbose logging I can turn on, or better tests to run?
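One additional test that may help isolate Apache from the certificate itself, assuming OpenSSL is available on the box (host, port and CA file are taken from the configuration above):

    rem The trusted root is DER-encoded (.cer), so convert it to PEM first
    openssl x509 -inform der -in C:/wamp/certs/Trusted_Root_Certificate.cer -out ca.pem

    rem Then check the chain the LDAPS endpoint actually presents
    openssl s_client -connect gc-hca.corpad.net:3269 -showcerts -CAfile ca.pem

If s_client reports "Verify return code: 0 (ok)", the certificate chain is good and the problem is on the Apache/mod_ldap side.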

    Read the article

  • How do I identify and fix the cause of transaction log growth on SIMPLE recovery model databases?

    - by Stuart B
I recently upgraded our SQL Server 2008 installations to Service Pack 2. One of our databases is on the simple recovery model, but its transaction log is growing extremely fast. The path I'm currently investigating is that we have a transaction somewhere out there stuck in an active state. Here is why:

select name, recovery_model_desc, log_reuse_wait_desc
from sys.databases
where name in ('SimpleDB')

name      recovery_model_desc  log_reuse_wait_desc
SimpleDB  SIMPLE               ACTIVE_TRANSACTION

When I check my active transactions, I get the following. Note that I installed SP2 and restarted our server on 12/25 at around noonish.

select transaction_id, name, transaction_begin_time, transaction_type
from sys.dm_tran_active_transactions

transaction_id  name                          transaction_begin_time   transaction_type
233             worktable                     2010-12-25 12:44:29.283  2
236             worktable                     2010-12-25 12:44:29.283  2
238             worktable                     2010-12-25 12:44:29.283  2
240             worktable                     2010-12-25 12:44:29.283  2
243             worktable                     2010-12-25 12:44:29.283  2
245             worktable                     2010-12-25 12:44:29.283  2
62210           tran_sp_MScreate_peer_tables  2010-12-25 12:45:00.880  1
55422856        user_transaction              2010-12-28 16:41:56.703  1
55422889        SELECT                        2010-12-28 16:41:57.303  2
470             LobStorageProviderSession     2010-12-25 12:44:30.510  2

Note that according to the documentation, a transaction_type of 1 means read/write and 2 means read-only. So, my line of thinking is that the tran_sp_MScreate_peer_tables transaction is stuck for some reason and holding up transaction log truncation. Is this a plausible scenario? Correct me if my line of thinking is off, as I'm not a SQL Server expert. If this is correct, how do I erase that transaction so that my transaction log is truncated as usual?
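For what it's worth, a sketch of how one might tie that suspect transaction back to a session before deciding what to do with it (the transaction id is copied from the output above; an internal or replication transaction may not map to a user session at all):

    -- Oldest active transaction in the database, with its start time and SPID
    DBCC OPENTRAN ('SimpleDB');

    -- Map the transaction to a session, if one owns it
    SELECT s.session_id, s.login_name, s.host_name, s.program_name
    FROM sys.dm_tran_session_transactions AS st
    JOIN sys.dm_exec_sessions AS s ON s.session_id = st.session_id
    WHERE st.transaction_id = 62210;  -- tran_sp_MScreate_peer_tables

    -- Only once it is confirmed safe to end that session:
    -- KILL <session_id>;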

    Read the article

  • No video playback when using home sharing on iTunes for Windows

    - by Diago
I recently configured iTunes Home Sharing on my home network. My WHS server is the central library, sharing both my music and video files. All my videos are encoded in H.264/MP4 format. All my machines are authorized and have access to Home Sharing, and all are running the latest version of iTunes 9. On my Snow Leopard machines, iTunes happily plays the videos and music with no issues. On the Windows 7 machines, however, the play button fades for about 2 seconds when I select a shared video, and then nothing happens. The Windows machines are running the latest Community Codec Pack. Music sharing works perfectly on the Windows machines. When accessing the videos through the native WHS Media Connect sharing, as well as through the file share, I can play them perfectly on the Windows 7 machines. When I add the file to the iTunes library on the Windows 7 machine, it also plays perfectly. Any advice, ideas or suggestions as to how to play the videos I have shared through Home Sharing on Windows 7?

    Read the article

  • Sharepoint db issue after DB move to SQL 08

    - by JohnyV
Recently we moved our SharePoint 2007 database from a SQL Server 2000 server to x64 SQL Server 2008. All seems well; however, there is a problem where SQL Server stops running and the service has to be restarted. The errors mention insufficient internal memory, etc. I have tried to start the database engine using -g384, which was the default in SQL 2000 (256 is the default for 2008, I believe). This has not rectified the issue. I was advised that the issue might be rectified by upgrading to WSS 3.0 SP2; however, when I tried to install this I got another error after the SP2 update and had to revert to a VM snapshot. The error after the service pack was: Server error: http://go.microsoft.com/fwlink?LinkID=96177

So I guess I have a few questions: how can I fix the first issue and the second issue? I have checked out many forums and posts and have tried a few things, and still get no joy. Any assistance would be great.

UPDATE: I have fixed the "Server error: http://go.microsoft.com/fwlink?LinkID=96177" problem: I needed to run the WSS SP2 as well as the Office Servers SP2, then the configuration wizard, and then the MOSS configuration worked.

The errors I am getting in SQL Server are:

SQL Server was unable to run a new system task, either because there is insufficient memory or the number of configured sessions exceeds the maximum allowed in the server. Verify that the server has adequate memory. Use sp_configure with option 'user connections' to check the maximum number of user connections allowed. Use sys.dm_exec_sessions to check the current number of sessions, including user processes.

A read operation on a large object failed while sending data to the client. A common cause for this is if the application is running in READ UNCOMMITTED isolation level. The connection will be terminated.

There is insufficient system memory in resource pool 'internal' to run this query.

These errors are raised by a user account that was created as a service account for SharePoint.
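Following the error message's own advice, these are the standard checks to run (plain sp_configure and DMV queries, nothing SharePoint-specific; 'user connections' is an advanced option, hence the first two lines):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'user connections';  -- 0 means unlimited, managed dynamically

    -- Current sessions, including user processes, as the message suggests
    SELECT COUNT(*) AS total_sessions FROM sys.dm_exec_sessions;
    SELECT COUNT(*) AS user_sessions
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;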

    Read the article

< Previous Page | 305 306 307 308 309 310 311 312 313 314 315 316  | Next Page >