Search Results

Search found 71684 results on 2868 pages for 'mp gt'.


  • Take Control Of Web Control ClientID Values in ASP.NET 4.0

    Each server-side Web control in an ASP.NET Web Forms application has an <code>ID</code> property that identifies the Web control and is the name by which the Web control is accessed in the code-behind class. When rendered into HTML, the Web control turns its server-side <code>ID</code> value into a client-side <code>id</code> attribute. Ideally, there would be a one-to-one correspondence between the value of the server-side <code>ID</code> property and the generated client-side <code>id</code>, but in reality things aren't so simple. By default, the rendered client-side <code>id</code> is formed by taking the Web control's <code>ID</code> property and prefixing it with the <code>ID</code>
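
    ASP.NET 4.0 addresses this with the ClientIDMode property that every control exposes. As a minimal sketch (the page and control names below are invented for illustration and are not taken from the article), the rendering behaviour can be chosen from the code-behind:

        using System;
        using System.Web.UI;

        // Sketch only: ClientIDMode is real ASP.NET 4.0 API; "ProfilePage", "EmailTextBox"
        // and "OrdersRepeater" are hypothetical names declared in the matching .aspx markup.
        public partial class ProfilePage : Page
        {
            protected void Page_Init(object sender, EventArgs e)
            {
                // Render the client-side id exactly as the server-side ID, with no naming-container prefix.
                EmailTextBox.ClientIDMode = ClientIDMode.Static;

                // Or derive a predictable id from the naming containers, without auto-generated "ctl00"-style parts.
                OrdersRepeater.ClientIDMode = ClientIDMode.Predictable;
            }
        }

    The same property can also be set declaratively on a control or page, or site-wide in web.config.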

    Read the article

  • BizTalk 2009 - Naming Guidelines

    - by StuartBrierley
    The following is effectively a repost of the BizTalk 2004 naming guidelines that I have previously detailed. I have posted these again for completeness under BizTalk 2009 and to allow an element of separation in case I find some reason to amend these for BizTalk 2009. These guidelines should be universal across any version of BizTalk you may wish to apply them to.

    General Rules
    All names should be named with a Pascal convention.

    Project Namespaces
    For message schemas: [CompanyName].XML.Schemas.[FunctionalName]*
    Examples: ABC.XML.Schemas.Underwriting, DEF.XML.Schemas.MarshmellowTradingExchange
    * Denotes potential for multiple levels of functional name, such as Underwriting.Dictionary.Valuation
    For web services: [CompanyName].Web.Services.[FunctionalName]
    Example: ABC.Web.Services.OrderJellyBeans
    For the main BizTalk Projects: [CompanyName].BizTalk.[AssemblyType].[FunctionalName]*
    Examples: ABC.BizTalk.Mappings.Underwriting, ABC.BizTalk.Orchestrations.Underwriting
    * Denotes potential for multiple levels of functional name, such as Mappings.Underwriting.Valuations

    Assemblies
    BizTalk Assembly names should match the associated Project Namespace, such as ABC.BizTalk.Mappings.Underwriting. This pertains to the formal assembly name and the DLL name. The Solution name should take the name of the main project within the solution, and also therefore the namespace for that project. Although long names such as this can be unwieldy to work with, the benefits of having the full scope available when the assemblies are installed on the target server are generally judged to outweigh this inconvenience.

    Messaging Artifacts
    - Schema: <DescriptiveName>.xsd. The .NET Type name should match, without the file extension; the .NET Namespace will likely match the assembly name. Examples: PurchaseOrderAcknowledge_FF.xsd or FNMA100330_FF.xsd
    - Property Schema: <DescriptiveName>.xsd. Should be named to reflect possible common usage across multiple schemas. Examples: IspecMessagePropertySchema.xsd, UnderwritingOrchestrationKeys.xsd
    - Map: <SourceSchema>2<DestinationSchema>.btm. Exceptions to this may be made where the source and destination schemas share the majority of the name, such as in mainframe web service maps. Examples: InstructionResponse2CustomEmailRequest.btm (exception example), AccountCustomerAddressSummaryRequest2MainframeRequest.btm
    - Orchestration: <DescriptiveName>.odx. Examples: GetValuationReports.odx, SendMTEDecisionResponse.odx
    - Send/Receive Pipeline: <DescriptiveName>.btp. Examples: ValidatingXMLReceivePipeline.btp, FlatFileAssembler.btp
    - Receive Port: a plainly worded phrase that will clearly explain the function. Examples: FraudPreventionServices, LetterProcessing
    - Receive Location: a plainly worded phrase that will clearly explain the function (? do we want to include the transport type here ?). Example: Arrears Web Service
    - Send Port Group: a plainly worded phrase that will clearly explain the function. Example: Customer Updates
    - Send Port: a plainly worded phrase that will clearly explain the function. Examples: ABCProductUpdater, LogLendingPolicyOutput
    - Parties: a meaningful name for a Trading Partner. If dealing with multiple entities within a Trading Partner organization, the Organization name could be used as a prefix.
    - Roles: a meaningful name for the role that a Trading Partner plays.

    Orchestration Workflow Shapes
    - Scope: <DescriptionOfContainedWork> or <DescOfContainedWork><TxType>. Including info about the transaction type may be appropriate in some situations where it adds significant documentation value to the diagram. Example: HandleReportResponse
    - Receive: Receive<MessageName>. Typically, MessageName will be the same as the name of the message variable that is being received "into". Example: ReceiveReportResponse
    - Send: Send<MessageName>. Typically, MessageName will be the same as the name of the message variable that is being sent. Example: SendValuationDetailsRequest
    - Expression: <DescriptionOfEffect>. Expression shapes should be named to describe the net effect of the expression, similar to naming a method. The exception to this is the case where the expression is interacting with an external .NET component to perform a function that overlaps with existing BizTalk functionality – use the closest BizTalk shape for this case. Example: CreatePrintXML
    - Decide: <DescriptionOfDecision>. A description of what will be decided in the "if" branch. Examples: Report Type? Perform MF Save?
    - If-Branch: <DescriptionOfDecision>. A (potentially abbreviated) description of what is being decided. Example: Mortgage Valuation Yes
    - Else-Branch: Else. Else-branch shapes should always be named "Else".
    - Construct Message (Assign): Create<Message> for the Construct shape, <ExpressionDescription> for the expression. If a Construct shape contains a message assignment, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The actual message assignment shape contained should be named to describe the expression that is contained. Example: CreateReportDataMV, which contains the expression ExtractReportData
    - Construct Message (Transform): Create<Message> for the Construct shape, <SourceSchema>2<DestSchema> for the transform. If a Construct shape contains a message transform, it should be prefixed with "Create" followed by an abbreviated name of the message being assigned. The actual message transform shape contained should generally be named the same as the called map. Example: CreateReportDataMV, which contains the transform ReportDataMV2ReportDataMV
    - Construct Message (containing multiple shapes): if a Construct Message shape uses multiple assignments or transforms, the overall shape should be named to communicate the net effect, using no prefix.
    - Call/Start Orchestration: Call<OrchestrationName> or Start<OrchestrationName>.
    - Throw: Throw<ExceptionType>. The corresponding variable name for the exception type should (often) be the same name as the exception type, only camel-cased. Example: ThrowRuleException, which references the "ruleException" variable.
    - Parallel: <DescriptionOfParallelWork>. Parallel shapes should be named by a description of what work will be done in parallel.
    - Delay: <DescriptionOfWhatWaitingFor>. Delay shapes should be named by a description of what is being waited for. Example: POAcknowledgeTimeout
    - Listen: <DescriptionOfOutcomes>. Listen shapes should be named by a description that captures (to the degree possible) all the branches of the Listen shape. Examples: POAckOrTimeout, FirstShippingBid
    - Loop: <DescriptionOfLoop>. A (potentially abbreviated) description of what the loop is. Examples: ForEachValuationReport, WhileErrorFlagTrue
    - Role Link: see "Roles" in the messaging naming conventions above.
    - Suspend: <ReasonDescription>. Describe what action an administrator must take to resume the orchestration. More detail can be passed to the error property – and should include what should be done by the administrator before resuming the orchestration. Example: ReEstablishCreditLink
    - Terminate: <ReasonDescription>. Describe why the orchestration terminated. More detail can be passed to the error property. Example: TimeoutsExpired
    - Call Rules: Call<PolicyName>. The policy name may need to be abbreviated. Example: CallLendingPolicy
    - Compensate: Compensate or Compensate<TxName>. If the shape compensates nested transactions, names should be suffixed with the name of the nested transaction – otherwise it should simply be Compensate. Example: CompensateTransferFunds

    Orchestration Types
    - Multi-Part Message Types: <LogicalDocumentType>. Multi-part types encapsulate multiple parts. The WSDL spec indicates "parts are a flexible mechanism for describing the logical abstract content of a message." The name of the multi-part type should correspond to the "logical" document type, i.e. what the sum of the parts describes. Example: InvoiceReceipt (which might encapsulate an invoice acknowledgement and a payment voucher)
    - Multi-Part Message Part: <SchemaNameOfPart>. Should be named (most often) simply for the schema (or simple type) associated with the part. Example: InvoiceHeader
    - Messages: <SchemaName> or <MultiPartMessageTypeName>. Should be named based on the corresponding schema type or multi-part message type. If there is more than one variable of a type, name for its use within the orchestration. Examples: ReportDataMV, UpdatedReportDataMV
    - Variables: <DescriptiveName>. Examples: TargetFilePath, StringProcessor
    - Port Types: <FunctionDescription>PortType. Should be named to suggest the nature of an endpoint, with Pascal casing and suffixed with "PortType". If there will be more than one Port for a Port Type, the Port Type should be named according to the abstract service supplied. The WSDL spec indicates port types are "a named set of abstract operations and the abstract messages involved" that also encapsulates the message pattern (i.e. one-way, request-response, solicit-response) that all operations on the port type adhere to. Examples: ReceiveReportResponsePortType or CallEAEPortType (this is a two-way port, so Receive or Send alone would not be appropriate; it could have been ProcessEAERequestPortType etc.)
    - Ports: <FunctionDescription>Port. Should be named to suggest a grouping of functionality, with Pascal casing and suffixed with "Port." Examples: ReceiveReportResponsePort, CallEAEPort
    - Correlation Types: <DescriptiveName>. Should be named based on the logical name of what is being used to correlate. Example: PurchaseOrderNumber
    - Correlation Sets: <DescriptiveName>. Should be named based on the corresponding correlation type. If there is more than one, it should be named to reflect its specific purpose within the orchestration. Example: PurchaseOrderNumber
    - Orchestration Parameters: <DescriptiveName>. Should be named to match the caller's names for the corresponding variables where appropriate.

    Read the article

  • Customize Team Build 2010 – Part 16: Specify the relative reference path

    In the series the following parts have been published:
    Part 1: Introduction
    Part 2: Add arguments and variables
    Part 3: Use more complex arguments
    Part 4: Create your own activity
    Part 5: Increase AssemblyVersion
    Part 6: Use custom type for an argument
    Part 7: How is the custom assembly found
    Part 8: Send information to the build log
    Part 9: Impersonate activities (run under other credentials)
    Part 10: Include Version Number in the Build Number
    Part 11: Speed up opening my build process template
    Part 12: How to debug my custom activities
    Part 13: Get control over the Build Output
    Part 14: Execute a PowerShell script
    Part 15: Fail a build based on the exit code of a console application
    Part 16: Specify the relative reference path

    As I have already blogged about, it is not intuitive how to specify the paths where the build server has to look for references that are stored in Source Control. It is a common practice to store 3rd party libraries in Source Control, so they are available to everyone, everyone uses the same version of the libraries, and updating a library can be done centrally. In Team Build 2010 these paths are specified as a parameter for MSBuild. What we will do in this post is build the values for this parameter based on the values in an argument.

    You are now pretty aware of how to customize the build template, so let's do the modifications in another way. Instead of opening the xaml file in the workflow designer, we open it in the XML editor. You can open it in the XML editor either by selecting the Open with menu (see the context menu), or by choosing the View code option. To add this functionality we need to:
    1. Specify a new argument
    2. Add the argument to the metadata
    3. Build the absolute paths for the references and add these paths to the MSBuild arguments

    1. Specify a new argument
    Locate the Members (which are the arguments) at the top of the XAML document and add the following line:
    <x:Property Name="ReferencePaths" Type="InArgument(s:String[])" />

    2. Add the argument to the metadata
    Then locate the line <mtbw:ProcessParameterMetadataCollection> and paste the following line:
    <mtbw:ProcessParameterMetadata Category="Misc" Description="The list of reference paths, relative to the root path in the Workspace mapping." DisplayName="Reference paths" ParameterName="ReferencePaths" />

    3. Build the absolute paths for the references and add these paths to the MSBuild arguments
    Now locate the place where the assignments are done to the variables used in the agent.
And add the following lines after the last Assign activity         <Sequence DisplayName="Initialize ReferencePath" sap:VirtualizedContainerService.HintSize="464,428">           <Sequence.Variables>             <Variable x:TypeArguments="x:String" Name="ReferencePathsArgument">               <Variable.Default>                 <Literal x:TypeArguments="x:String" Value="" />               </Variable.Default>             </Variable>           </Sequence.Variables>           <sap:WorkflowViewStateService.ViewState>             <scg:Dictionary x:TypeArguments="x:String, x:Object">               <x:Boolean x:Key="IsExpanded">True</x:Boolean>             </scg:Dictionary>           </sap:WorkflowViewStateService.ViewState>           <ForEach x:TypeArguments="x:String" DisplayName="Iterate through the paths" sap:VirtualizedContainerService.HintSize="287,206" mtbwt:BuildTrackingParticipant.Importance="Low" Values="[ReferencePaths]">             <ActivityAction x:TypeArguments="x:String">               <ActivityAction.Argument>                 <DelegateInArgument x:TypeArguments="x:String" Name="path" />               </ActivityAction.Argument>               <Assign x:TypeArguments="x:String" DisplayName="Build ReferencePath argument" sap:VirtualizedContainerService.HintSize="257,100" mtbwt:BuildTrackingParticipant.Importance="Low"  To="[ReferencePathsArgument]" Value="[If(String.IsNullOrEmpty(ReferencePathsArgument), &quot;&quot;, ReferencePathsArgument + &quot;;&quot;) + IO.Path.Combine(SourcesDirectory, path)]" />             </ActivityAction>           </ForEach>           <Assign DisplayName="Append the reference paths to the MSBuild Arguments" sap:VirtualizedContainerService.HintSize="287,58">             <Assign.To>               <OutArgument x:TypeArguments="x:String">[MSBuildArguments]</OutArgument>             </Assign.To>             <Assign.Value>               <InArgument x:TypeArguments="x:String">[String.Format("{0} /p:ReferencePath=""{1}""", MSBuildArguments, ReferencePathsArgument)]</InArgument>             </Assign.Value>           </Assign>         </Sequence> Now you can use the template to specify the paths relative to SourcesDirectory. You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.
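
    To make clearer what the workflow expressions above compute, here is the same logic as a plain C# sketch. The values are invented for illustration; on the build agent they come from the SourcesDirectory variable and the ReferencePaths process argument:

        using System;
        using System.IO;
        using System.Linq;

        class ReferencePathArgumentSketch
        {
            static void Main()
            {
                string sourcesDirectory = @"C:\Builds\1\MyProject\Sources";          // hypothetical agent folder
                string[] referencePaths = { @"Libs\External", @"Libs\ThirdParty" };  // workspace-relative paths
                string msBuildArguments = "/p:Configuration=Release";                // existing MSBuild arguments

                // Make each relative path absolute and join them with ';',
                // which is what the ForEach/Assign pair builds up in ReferencePathsArgument.
                string referencePathsArgument = string.Join(";",
                    referencePaths.Select(p => Path.Combine(sourcesDirectory, p)));

                // Append the list as the /p:ReferencePath property, as the last Assign does.
                msBuildArguments = string.Format("{0} /p:ReferencePath=\"{1}\"",
                    msBuildArguments, referencePathsArgument);

                Console.WriteLine(msBuildArguments);
            }
        }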

    Read the article

  • Dynamic Data Connections

    - by Tim Dexter
    I have had a long-running email thread between Dan and David over at Valspar and myself. They have built some impressive connectivity between their in-house apps and BIP using web services. The crux of their problem has been that they have multiple databases that need the same report executed against them. Not such an unusual request, as I have spoken to two customers in the last month with the same situation. Of course, you could create a report against each data connection and just run or call the appropriate report. Not too bad if you have two or three data connections, but more than that and it becomes a maintenance nightmare having to update queries or layouts. Ideally you want to have just a single report definition on the BIP server and to dynamically set the connection to be used at runtime based on the user or system that the user is in.

    A quick bit of digging and help from Shinji on the development team and I had an answer. Rather embarrassingly, the solution has been around since the Oct 2010 rollup patch last year. Still, I grabbed the latest Jan 2011 patch - check out Note 797057.1 for the latest available patches. Once installed, I used the best web service testing tool I have yet to come across - SoapUI. Just point it at the WSDL and you can check out the available services and their parameters and then test them too. The XML packet has a new dynamic data source entry. You can set your own custom JDBC connection or just specify an existing data source name that's defined on the server.

    <pub:runReport> <pub:reportRequest> <pub:attributeFormat>xml</pub:attributeFormat> <pub:attributeTemplate>0</pub:attributeTemplate> <pub:byPassCache>true</pub:byPassCache> <pub:dynamicDataSource> <pub:JDBCDataSource> <pub:JDBCDriverClass></pub:JDBCDriverClass> <pub:JDBCDriverType></pub:JDBCDriverType> <pub:JDBCPassword></pub:JDBCPassword> <pub:JDBCURL></pub:JDBCURL> <pub:JDBCUserName></pub:JDBCUserName> <pub:dataSourceName>Conn1</pub:dataSourceName> </pub:JDBCDataSource> </pub:dynamicDataSource> <pub:reportAbsolutePath>/Test/Employee Report/Employee Report.xdo</pub:reportAbsolutePath> </pub:reportRequest> <pub:userID>Administrator</pub:userID> <pub:password>Administrator</pub:password> </pub:runReport>

    So I have Conn1 and Conn2 defined that are connections to different databases. I can just flip the name, make the WS call and get the appropriate dataset in my report. Just as an example, here's my web service call Java code. Just a case of bringing in the BIP Java libs to my Java project.

    PublicReportServiceService publicReportServiceService = new PublicReportServiceService(); PublicReportService publicReportService = publicReportServiceService.getPublicReportService_v11(); String userID = "Administrator"; String password = "Administrator"; ReportRequest rr = new ReportRequest(); rr.setAttributeFormat("xml"); rr.setAttributeTemplate("1"); rr.setByPassCache(true); rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo"); rr.setReportOutputPath("c:\\temp\\output.xml"); BIPDataSource bipds = new BIPDataSource(); JDBCDataSource jds = new JDBCDataSource(); jds.setDataSourceName("Conn1"); bipds.setJDBCDataSource(jds); rr.setDynamicDataSource(bipds); try { publicReportService.runReport(rr, userID, password); } catch (InvalidParametersException e) { e.printStackTrace(); } catch (AccessDeniedException e) { e.printStackTrace(); } catch (OperationFailedException e) { e.printStackTrace(); } }

    Note, I'm no Java whiz kid or whizzy old bloke, at least not unless I've had a coffee. JDeveloper has a nice feature where you point it at the WSDL and it creates everything to support your calling code for you. Couple of things to remember:
    1. When you call the service, remember to set the bypass-the-cache option. Forget it and much scratching of your head and taking my name in vain will ensue.
    2. My demo actually hit the same database but used two users; one accessed the base tables, the other accessed views with the same name. For far too long I thought the connection swapping was not working. I was getting the same results for both users until I realized I was specifying the schema name for the table/view in my query, e.g. select * from EMP.EMPLOYEES. So remember to have a generic query that will depend entirely on the connection.
    It's a neat feature if you want to be able to switch connections and only define a single report and call it remotely. Now if you want the connection to be set dynamically based on the user and the report run via the user interface, that's going to be more tricky ... need to think about that one!

    Read the article

  • Adding UCM as a search source in Windows Explorer

    - by kyle.hatlestad
    A customer recently pointed out to me that Windows 7 supports federated search within Windows Explorer. This means you can perform searches against external sources such as Google, Flickr, YouTube, etc. right from within Explorer. While we do have the Desktop Integration Suite which offers searching within Explorer, I thought it would be interesting to look into this method which would not require any client software to implement. Basically, federated searching hooks up in Windows Explorer through the OpenSearch protocol. A Search Connector Descriptor file is run and it installs the search provider. The file is a .osdx file which is an OpenSearch Description document. It describes the search provider you are hooking up to along with the URL for the query. If those results can come back as an RSS or ATOM feed, then you're all set.

    So the first step is to install the RSS Feeds component from the UCM Samples page on OTN. If you're on 11g, I've found the RSS Feeds component works just fine on that version too. Next, you want to perform a Quick Search with a particular search term and then copy the RSS link address for that search result. Here is what an example URL might look like:

    http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&feedName=search_results&QueryText=%28+%3cqsch%3eoracle%3c%2fqsch%3e+%29&SortField=dInDate&SortOrder=Desc&ResultCount=20&SearchQueryFormat=Universal&SearchProviders=server&

    Now you want to create a new text file and start out with this information:

    <?xml version="1.0" encoding="UTF-8"?><OpenSearchDescription xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName></ShortName> <Description></Description> <Url type="application/rss+xml" template=""/> <Url type="text/html" template=""/> </OpenSearchDescription>

    Enter a ShortName and Description. The ShortName will be the value used when displaying the search provider in Explorer. In the template attribute for the first Url element, enter the URL copied previously. You will then need to convert the ampersand symbols to '&amp;' to make them XML compliant. Finally, you'll want to switch out the search term with '{searchTerms}'. For the second Url element, you can do the same thing except you want to copy the UCM search results URL from the page of results. That URL will look something like:

    http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&SortField=dInDate&SortOrder=Desc&ResultCount=20&QueryText=%3Cqsch%3Eoracle%3C%2Fqsch%3E&listTemplateId=&ftx=1&SearchQueryFormat=Universal&TargetedQuickSearchSelection=&MiniSearchText=oracle

    Again, convert the ampersand symbols and replace the search term with '{searchTerms}'. When complete, save the file with the .osdx extension.
The completed file should look like: <?xml version="1.0" encoding="UTF-8"?> <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:ms-ose="http://schemas.microsoft.com/opensearchext/2009/"> <ShortName>Universal Content Management</ShortName> <Description>OpenSearch for UCM via Windows 7 Search Federation.</Description> <Url type="application/rss+xml" template="http://server:16200/cs/idcplg?IdcService=GET_SCS_FEED&amp;feedName=search_results&amp;QueryText=%28+%3Cqsch%3E{searchTerms}%3C%2fqsch%3E+%29&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=200&amp;SearchQueryFormat=Universal"/> <Url type="text/html" template="http://server:16200/cs/idcplg?IdcService=GET_SEARCH_RESULTS&amp;SortField=dInDate&amp;SortOrder=Desc&amp;ResultCount=20&amp;QueryText=%3Cqsch%3E{searchTerms}%3C%2Fqsch%3E&amp;listTemplateId=&amp;ftx=1&amp;SearchQueryFormat=Universal&amp;TargetedQuickSearchSelection=&amp;MiniSearchText={searchTerms}"/> </OpenSearchDescription> After you save the file, simply double-click it to create the provider. It will ask if you want to add the search connector to Windows. Click Add and it will add it to the Searches folder in your user folder as well as your Favorites. Now just click on the search icon and in the upper right search box, enter your term. As you are typing, it begins executing searches and the results will come back in Explorer. Now when you double-click on an item, it will try and download the web viewable for viewing. You also have the ability to save the search, just as you would in UCM. And there is a link to Search On Website which will launch your browser and go directly to the search results page there. And with some tweaks to the RSS component, you can make the results a bit more interesting. It supports the Media RSS standard, so you can pass along the thumbnail of the documents in the results. To enable this, edit the rss_resources.htm file in the RSS Feeds component. In the std_rss_feed_begin resource include, add the namespace 'xmlns:media="http://search.yahoo.com/mrss/' to the rss definition: <rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:media="http://search.yahoo.com/mrss/"> Next, in the rss_channel_item_with_thumb include, below the closing image element, add this element: </images> <media:thumbnail url="<$if strIndexOf(thumbnailUrl, "@t") > 0 or strIndexOf(thumbnailUrl, "@g") > 0 or strIndexOf(thumbnailUrl, "@p") > 0$><$rssHttpHost$><$thumbnailUrl$><$elseif dGif$><$HttpWebRoot$>images/docgifs/<$dGif$><$endif$>" /> <description> This and lots of other tweaks can be done to the RSS component to help extend it for optimum use in Explorer. Hopefully this can get you started. *Note: This post also applies to Universal Records Management (URM).

    Read the article

  • Nant COM Reference [migrated]

    - by user29286
    I'm trying to find the Nant syntax for including a COM reference. The current Nant script with the normal dll reference looks like .. <references> <include name="${external-lib}/System.Interop.AppName.dll" /> The COM reference in the .csproj file looks like this ... <COMReference Include="System.Interop.AppName"> <Guid>{00020813-0000-0000-C000-000000000046}</Guid> <VersionMajor>1</VersionMajor> <VersionMinor>7</VersionMinor> <Lcid>0</Lcid> <WrapperTool>primary</WrapperTool> <Isolated>False</Isolated> </COMReference> What syntax should I use in Nant to swap to the COM reference to do the build ?

    Read the article

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC: #!/usr/bin/python -u import cgi, errno, fcntl, os, os.path, sys, time print("""Content-Type: text/html; charset=utf-8 <!doctype html> <html lang="en"> <head> <meta charset="utf-8" /> <title>IPC test</title> </head> <body> """) ftempname = '/tmp/ipc-messages' master = not os.path.exists(ftempname) if master: fmode = 'w' else: fmode = 'r' print('<p>Opening file</p>') sys.stdout.flush() ftemp = open(ftempname, fmode) print('<p>File opened</p>') if master: print('<p>Operating as master</p>') sys.stdout.flush() for i in range(10): print('<p>' + str(i) + '</p>') sys.stdout.flush() time.sleep(1) ftemp.close() os.remove(ftempname) else: print('<p>Operating as a slave</p>') ftemp.close() print(""" </body> </html>""") The 'server-push' portion works; that is, for the first request, I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started, only to be started after the first request has finished. Any ideas on why, and how to fix it? Edit: I see the same non-concurrent behaviour with vanilla PHP, running this: <!doctype html> <html lang="en"> <!-- $Id: $--> <head> <meta charset="utf-8" /> <title>IPC test</title> </head> <body> <p> <?php function echofl($str) { echo $str . "</b>\n"; ob_flush(); flush(); } define('tempfn', '/tmp/emailsync'); if (file_exists(tempfn)) $perms = 'r+'; else $perms = 'w'; assert($fsync = fopen(tempfn, $perms)); assert(chmod(tempfn, 0600)); if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) { assert($wouldblock); $master = false; } else $master = true; if ($master) { echofl('Running as master.'); assert(fwrite($fsync, 'content') != false); assert(sleep(5) == 0); assert(flock($fsync, LOCK_UN)); } else { echofl('Running as slave.'); echofl(fgets($fsync)); } assert(fclose($fsync)); echofl('Done.'); ?> </p> </body> </html>

    Read the article

  • Client side code snipets

    - by raghu.yadav
    Two snippets. The first queues a custom server-side event from a client listener on an input field. The client function sits in the document's metaContainer facet:

    <af:document>
      <f:facet name="metaContainer">
        <af:group>
          <script>
            function clientMethodCall(event) {
                component = event.getSource();
                AdfCustomEvent.queue(component, "customEvent", {payload:component.getSubmittedValue()}, true);
                event.cancel();
            }
          </script>
        </af:group>
      </f:facet>
      <af:form>
        <af:panelFormLayout>
          <f:facet name="footer">
            <af:inputText label="Let me spy on you: Please enter your mail password">
              <af:clientListener method="clientMethodCall" type="keyUp"/>
              <af:serverListener type="customEvent" method="#{customBean.handleRequest}"/>
            </af:inputText>
          </f:facet>
        </af:panelFormLayout>
      </af:form>
    </af:document>

    Bean code:

    public void handleRequest(ClientEvent event){
        System.out.println("---"+event.getParameters().get("payload"));
    }

    The second snippet expands or collapses a tree node from a selection client listener:

    <af:tree id="tree1" value="#{bindings.DepartmentsView11.treeModel}" var="node"
             selectionListener="#{bindings.DepartmentsView11.treeModel.makeCurrent}" rowSelection="single">
      <f:facet name="nodeStamp">
        <af:outputText value="#{node}"/>
        <af:clientListener method="expandNode" type="selection"/>
      </f:facet>
    </af:tree>

    with the corresponding client function, again in the metaContainer facet:

    function expandNode(event){
        var _tree = event.getSource();
        rwKeySet = event.getAddedSet();
        var firstRowKey;
        for(rowKey in rwKeySet){
            firstRowKey = rowKey;
            // we are interested in the first hit, so break out here
            break;
        }
        if (_tree.isPathExpanded(firstRowKey)){
            _tree.setDisclosedRowKey(firstRowKey,false);
        }
        else{
            _tree.setDisclosedRowKey(firstRowKey,true);
        }
    }

    Read the article

  • OS X 10.6 Apply ipfw rules at startup

    - by Michael Irey
    I have a couple of firewall rules I would like to apply at startup. I have followed the instructions from http://images.apple.com/support/security/guides/docs/SnowLeopard_Security_Config_v10.6.pdf, on page 192. However, the rules do not get applied at startup. I am running 10.6.8, NON Server Edition.

    I can, however, run the following (which applies the rules correctly):

    sudo ipfw /etc/ipfw.conf

    Which results in:

    00100 fwd 127.0.0.1,8080 tcp from any to any dst-port 80 in
    00200 fwd 127.0.0.1,8443 tcp from any to any dst-port 443 in
    65535 allow ip from any to any

    Here is my /etc/ipfw.conf:

    # To get real 80 and 443 while loading vagrant vbox
    add fwd localhost,8080 tcp from any to any 80 in
    add fwd localhost,8443 tcp from any to any 443 in

    Here is my /Library/LaunchDaemons/ipfw.plist:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>ipfw</string>
        <key>Program</key>
        <string>/sbin/ipfw</string>
        <key>ProgramArguments</key>
        <array>
            <string>/sbin/ipfw</string>
            <string>/etc/ipfw.conf</string>
        </array>
        <key>RunAtLoad</key>
        <true />
    </dict>
    </plist>

    The permissions of all the files seem to be appropriate:

    -rw-rw-r-- 1 root wheel 151 Oct 11 14:11 /etc/ipfw.conf
    -rw-rw-r-- 1 root wheel 438 Oct 11 14:09 /Library/LaunchDaemons/ipfw.plist

    Any thoughts or ideas on what could be wrong would be very helpful!

    Read the article

  • Recommended method for XML level loading in XNA

    - by David Saltares Márquez
    I want to use Blender as my level designer tool for an XNA game. Using an existing plugin, I can export my levels to the DotScene format, which is basically an XML file like this one:

    <scene formatVersion="1.0.0">
      <nodes>
        <node name="scene-staircase.001">
          <position x="10.500000" y="1.400000" z="-9.600000"/>
          <quaternion x="0.000000" y="0.000000" z="-0.000000" w="1.000000"/>
          <scale x="1.000000" y="1.000000" z="1.000000"/>
          <entity name="scene-staircase.001" meshFile="staircase.mesh"/>
        </node>
        <node name="Lamp.003">
          <position x="11.024290" y="5.903862" z="9.658987"/>
          <quaternion x="-0.284166" y="0.726942" z="0.342034" w="0.523275"/>
          <scale x="1.000000" y="1.000000" z="1.000000"/>
          <light name="Spot.003" type="point">
            <colourDiffuse r="0.400000" g="0.154618" b="0.145180"/>
            <colourSpecular r="0.400000" g="0.154618" b="0.145180"/>
            <lightAttenuation range="5000.0" constant="1.000000" linear="0.033333" quadratic="0.000000"/>
          </light>
        </node>
        ...
      </nodes>
    </scene>

    Using naming conventions, I could easily parse the file and load the corresponding in-game content. I am new to XNA and I have seen that there are several methods to load XML files into a game, like serializing and deserializing. Which one would you recommend?
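
    Whatever loading method is chosen, a minimal sketch (my own illustration, not from the question) of pulling the node names, positions and mesh file names out of such a file with LINQ to XML could look like this:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class DotSceneReader
        {
            static void Main()
            {
                // "level.scene" is a placeholder path for a DotScene export like the one above.
                XDocument doc = XDocument.Load("level.scene");

                foreach (XElement node in doc.Root.Element("nodes").Elements("node"))
                {
                    string name = (string)node.Attribute("name");

                    XElement pos = node.Element("position");
                    float x = (float)pos.Attribute("x");
                    float y = (float)pos.Attribute("y");
                    float z = (float)pos.Attribute("z");

                    // Lights have no <entity> child, so this may be null.
                    XElement entity = node.Element("entity");
                    string meshFile = entity != null ? (string)entity.Attribute("meshFile") : null;

                    Console.WriteLine("{0}: ({1}, {2}, {3}) mesh={4}", name, x, y, z, meshFile ?? "none");
                }
            }
        }

    In a real game, the naming-convention lookup (e.g. mapping "staircase.mesh" to loaded content) would replace the console output.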

    Read the article

  • Goodbye XML… Hello YAML (part 2)

    - by Brian Genisio's House Of Bilz
    Part 1

    After I explained my motivation for using YAML instead of XML for my data, I got a lot of people asking me what type of tooling is available in the .Net space for consuming YAML. In this post, I will discuss a nice tooling option as well as describe some small modifications to leverage the extremely powerful dynamic capabilities of C# 4.0. I will be referring to the following YAML file throughout this post:

    Recipe:
      Title: Macaroni and Cheese
      Description: My favorite comfort food.
      Author: Brian Genisio
      TimeToPrepare: 30 Minutes
      Ingredients:
        - Name: Cheese
          Quantity: 3
          Units: cups
        - Name: Macaroni
          Quantity: 16
          Units: oz
      Steps:
        - Number: 1
          Description: Cook the macaroni
        - Number: 2
          Description: Melt the cheese
        - Number: 3
          Description: Mix the cooked macaroni with the melted cheese

    Tooling

    It turns out that there are several implementations of YAML tools out there. The neatest one, in my opinion, is YAML for .NET, Visual Studio and Powershell. It includes a great editor plug-in for Visual Studio as well as YamlCore, which is a parsing engine for .Net. It is in active development still, but it is certainly enough to get you going with YAML in .Net. Start by referencing YamlCore.dll, load your document, and you are on your way. Here is an example of using the parser to get the title of the Recipe:

    var yaml = YamlLanguage.FileTo("Data.yaml") as Hashtable;
    var recipe = yaml["Recipe"] as Hashtable;
    var title = recipe["Title"] as string;

    In a similar way, you can access data in the Ingredients set:

    var yaml = YamlLanguage.FileTo("Data.yaml") as Hashtable;
    var recipe = yaml["Recipe"] as Hashtable;
    var ingredients = recipe["Ingredients"] as ArrayList;
    foreach (Hashtable ingredient in ingredients)
    {
        var name = ingredient["Name"] as string;
    }

    You may have noticed that YamlCore uses non-generic Hashtables and ArrayLists. This is because YamlCore was designed to work in all .Net versions, including 1.0. Everything in the parsed tree is one of three things: a Hashtable, an ArrayList or a Value type (usually String). This translates well to the YAML structure where everything is either a Map, a Set or a Value.

    Taking it further

    Personally, I really dislike writing code like this. Years ago, I promised myself to never write the words Hashtable or ArrayList in my .Net code again. They are ugly, mostly deprecated collections that existed before we got generics in C# 2.0. Now, especially that we have dynamic capabilities in C# 4.0, we can do a lot better than this. With a relatively small amount of code, you can wrap the Hashtables and ArrayLists with a dynamic wrapper (wrapper code at the bottom of this post). The same code can be re-written to look like this:

    dynamic doc = YamlDoc.Load("Data.yaml");
    var title = doc.Recipe.Title;

    And:

    dynamic doc = YamlDoc.Load("Data.yaml");
    foreach (dynamic ingredient in doc.Recipe.Ingredients)
    {
        var name = ingredient.Name;
    }

    I significantly prefer this code over the previous. That's not all… the magic really happens when we take this concept into WPF. With a single line of code, you can bind to the data dynamically in the view:

    DataContext = YamlDoc.Load("Data.yaml");

    Then, your XAML is extremely straight-forward (Nothing else. No static types, no adapter code.
Nothing): <StackPanel> <TextBlock Text="{Binding Recipe.Title}" /> <TextBlock Text="{Binding Recipe.Description}" /> <TextBlock Text="{Binding Recipe.Author}" /> <TextBlock Text="{Binding Recipe.TimeToPrepare}" /> <TextBlock Text="Ingredients:" FontWeight="Bold" /> <ItemsControl ItemsSource="{Binding Recipe.Ingredients}" Margin="10,0,0,0"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Quantity}" /> <TextBlock Text=" " /> <TextBlock Text="{Binding Units}" /> <TextBlock Text=" of " /> <TextBlock Text="{Binding Name}" /> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> <TextBlock Text="Steps:" FontWeight="Bold" /> <ItemsControl ItemsSource="{Binding Recipe.Steps}" Margin="10,0,0,0"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Number}" /> <TextBlock Text=": " /> <TextBlock Text="{Binding Description}" /> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> This nifty XAML binding trick only works in WPF, unfortunately.  Silverlight handles binding differently, so they don’t support binding to dynamic objects as of late (March 2010).  This, in my opinion, is a major lacking feature in Silverlight and I really hope we will see this feature available to us in Silverlight 4 Release.  (I am not very optimistic for Silverlight 4, but I can hope for the feature in Silverlight 5, can’t I?) Conclusion I still have a few things I want to say about using YAML in the .Net space including de-serialization and using IronRuby for your YAML parser, but this post is hopefully enough to see how easy it is to incorporate YAML documents in your code. Codeplex Site for YAML tools Dynamic wrapper for YamlCore
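
    The post links out to the dynamic wrapper rather than inlining it. Purely as an illustration of the shape such a wrapper can take (this is my own minimal sketch over YamlCore's Hashtable/ArrayList output, not Brian's actual implementation), a DynamicObject that defers member access to the underlying collections might look like this:

        using System.Collections;
        using System.Dynamic;

        // Sketch only: wraps YamlCore's Hashtable/ArrayList tree so members can be
        // accessed with dynamic dot-notation (doc.Recipe.Title).
        public class DynamicYaml : DynamicObject
        {
            private readonly Hashtable _map;

            public DynamicYaml(Hashtable map) { _map = map; }

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                result = Wrap(_map[binder.Name]);
                return true;   // unknown keys simply come back as null
            }

            // Hashtables become DynamicYaml, ArrayLists become lists of wrapped items,
            // and plain values (strings, numbers) pass through untouched.
            private static object Wrap(object value)
            {
                var table = value as Hashtable;
                if (table != null) return new DynamicYaml(table);

                var list = value as ArrayList;
                if (list != null)
                {
                    var wrapped = new ArrayList();
                    foreach (object item in list) wrapped.Add(Wrap(item));
                    return wrapped;
                }

                return value;
            }
        }

    A YamlDoc.Load helper along the lines of the post could then simply return new DynamicYaml((Hashtable)YamlLanguage.FileTo("Data.yaml")).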

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments -UCM-

    As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a new cool feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for the human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM attachments. But even with this other feature, one question might arise when using UCM attachments: how can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that: if you invoke that function against a UCM attachment, you'll get a null content response (bug#13907552), even if the attachment was correctly uploaded. While this bug gets fixed, I will next show a workaround that lets me retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with the WCC API from within a BPM process. As a side note, I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure.

    Sample Process

    I will be using the following sample process: a dummy UserTask using the "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments payload. In this case, and here's the key point of the sample, we will retrieve such payload using the WebCenter Content WebService API (IDC), and once retrieved, we will write each of them back to a file in the server using a File Adapter service.

    In detail: We will use the same attachmentCollection XSD structure and the same BusinessObject definition as in the previous blog entry. However we create a separate variable, named attachmentUCM, based on such BusinessObject. We will still need to keep a copy of the HumanTask output's execData structure. Therefore we need to create a new variable of type TaskExecutionData (a different one than the other used for non-UCM attachments). As in the non-UCM attachments flow, in the output tab of the UserTask mapping, we'll keep a copy of the execData structure.

    Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentUCM variable with the following information:
    - The name of each attachment (from the execData/attachment/name element)
    - The WebCenter Content ID of the uploaded attachment. This info is stored in the execData/attachment/URI element with the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"). We do so with some XPath functions, as detailed below, with these two functions being invoked, respectively.
    We, again, set the target payload element with an empty string, to get the <payload></payload> tag created. The complete XSLT transformation is shown below. Remember that we're using the XSLT for-each node to create as many target structures as necessary.
Therefore we will use a new embedded subprocess of type MultiInstance, that will iterate over the attachmentsUCM/attachment[] element: In each iteration we will use a Service activity that invokes WCC API through a WebService. Follow these steps to create and configure the Partner Link needed: Login to WCC console with an administrator user (i.e. weblogic). Go to Administration menu and click on "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID. Thus we'll need such service WSDL definition that can be downloaded by clicking the GetFile link. Save the WSDL file in your JDev project folder. In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference, based on the just added GetFile.wsdl. Name it UCM_GetFile. WCC services are secured through basic HTTP authentication. Therefore we need to enable the just created reference for that: Right-click the reference and click on Configure WS Policies. Under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view. Search for the UCM_GetFile entry and add the following highlighted elements into it:   <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">     <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>     <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"                 location="GetFile.wsdl" soapVersion="1.1">       <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"                            orawsp:category="security" orawsp:status="enabled"/>       <property name="weblogic.wsee.wsat.transaction.flowOption"                 type="xs:string" many="false">WSDLDriven</property>       <property name="oracle.webservices.auth.username"                 type="xs:string">weblogic</property>       <property name="oracle.webservices.auth.password"                 type="xs:string">welcome1</property>     </binding.ws>   </reference> Now the new external reference is ready: Once the reference has just been created, we should be able now to use it from our BPM process. However we find here a problem. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional: <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">    <get:dID>?</get:dID>   <get:rendition>?</get:rendition>   <get:extraProps>      <get:property>         <get:name>?</get:name>         <get:value>?</get:value>      </get:property>   </get:extraProps></get:GetFileByID> and we need to fill up just the <get:dID> tag element. Due to some kind of restriction or bug on WCC, the rest of the tag elements must NOT be sent, not even empty (i.e.: <get:rendition></get:rendition> or <get:rendition/>). 
    A sample request that performs the query just by the dID must be in the following format:

    <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
      <get:dID>12345</get:dID>
    </get:GetFileByID>

    The issue here is that the simple mapping in BPM does create empty tags, a sample result being as follows:

    <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
      <get:dID>12345</get:dID>
      <get:rendition/>
      <get:extraProps/>
    </get:GetFileByID>

    Although the above structure is perfectly valid, it is not accepted by WCC. Therefore, we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM but gets rid of the empty tags. Follow these steps to configure the Mediator:
    - Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template and select the existing GetFile.wsdl.
    - Double-click the Mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.
    - Create the request and reply XSLT mappers. Make sure you map only the dID element in the request, and do an Auto-mapper for the whole response.

    Finally, we can now add and configure the Service activity in the BPM process. Drag & drop it to the embedded subprocess, select the NormalizedGetFile service and getFileByID operation, and then map both the input and the output.

    Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsUCM/attachment[] element. On each iteration we will use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself: the file adapter expects binary data in base64 format (string), so we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. It will come soon. Again, once I finish with all posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • Binding a select in a client template

    - by Bertrand Le Roy
    I recently got a question on one of my client template posts asking me how to bind a select tag’s value to data in client templates. I was surprised not to find anything on the web addressing the problem, so I thought I’d write a short post about it. It really is very simple once you know where to look. You just need to bind the value property of the select tag, like this: <select sys:value="{binding color}"> If you do it from markup like here, you just need to use the sys: prefix. It just works. Here’s the full source code for my sample page: <!DOCTYPE html> <html> <head> <title>Binding a select tag</title> <script src=http://ajax.microsoft.com/ajax/beta/0911/Start.js type="text/javascript"></script> <script type="text/javascript"> Sys.require(Sys.scripts.Templates, function() { var colors = [ "red", "green", "blue", "cyan", "purple", "yellow" ]; var things = [ { what: "object", color: "blue" }, { what: "entity", color: "purple" }, { what: "thing", color: "green" } ]; Sys.create.dataView("#thingList", { data: things, itemRendered: function(view, ctx) { Sys.create.dataView( Sys.get("#colorSelect", ctx), { data: colors }); } }); }); </script> <style type="text/css"> .sys-template {display: none;} </style> </head> <body xmlns:sys="javascript:Sys"> <div> <ul id="thingList" class="sys-template"> <li> <span sys:id="thingName" sys:style-color="{binding color}" >{{what}}</span> <select sys:id="colorSelect" sys:value="{binding color}" class="sys-template"> <option sys:value="{{$dataItem}}" sys:style-background-color="{{$dataItem}}" >{{$dataItem}}</option> </select> </li> </ul> </div> </body> </html> This produces the following page: Each of the items sees its color change as you select a different color in the drop-down. Other details worth noting in this page are the use of the script loader to get the framework from the CDN, and the sys:style-background-color syntax to bind the background color style property from markup. Of course, I’ve used a fair amount of custom ASP.NET Ajax markup in here, but everything could be done imperatively and with completely clean markup from the itemRendered event using Sys.bind.

    Read the article

  • Know more about shared pool subpool

    - by Liu Maclean(???)
    A question on T.askmaclean.com asked how the Shared Pool is divided into subpools. The test below (translated from the original Chinese) shows the relationship between the hidden parameter _kghdsidx_count, the number of shared pool subpools, and the durations inside each subpool:

    SQL> select * from v$version;

    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE    10.2.0.5.0      Production
    TNS for Linux: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production

    SQL> set linesize 200 pagesize 1400
    SQL> show parameter kgh

    NAME                                 TYPE                             VALUE
    ------------------------------------ -------------------------------- ------------------------------
    _kghdsidx_count                      integer                          7

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump heapdump 536870914;
    Statement processed.
    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_11783.trc

    [oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_11783.trc
    HEAP DUMP heap name="sga heap"  desc=0x60000058
    HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036110
    FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x60036110
    HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f938
    FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f938
    HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049160
    FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049160
    HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052988
    FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052988
    HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c1b0
    FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c1b0
    HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659d8
    FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659d8
    HEAP DUMP heap name="sga heap(7,0)"  desc=0x6006f200
    FIVE LARGEST SUB HEAPS for heap name="sga heap(7,0)"   desc=0x6006f200

    With _kghdsidx_count=7 there are seven "sga heap(n,0)" subpools. Lowering the parameter reduces the number of subpools:

    SQL> alter system set "_kghdsidx_count"=6 scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area  859832320 bytes
    Fixed Size                  2100104 bytes
    Variable Size             746587256 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                6287360 bytes
    Database mounted.
    Database opened.

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump heapdump 536870914;
    Statement processed.
    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_11908.trc

    [oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_11908.trc
    HEAP DUMP heap name="sga heap"  desc=0x60000058
    HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360f0
    FIVE LARGEST SUB HEAPS for heap name="sga heap(1,0)"   desc=0x600360f0
    HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f918
    FIVE LARGEST SUB HEAPS for heap name="sga heap(2,0)"   desc=0x6003f918
    HEAP DUMP heap name="sga heap(3,0)"  desc=0x60049140
    FIVE LARGEST SUB HEAPS for heap name="sga heap(3,0)"   desc=0x60049140
    HEAP DUMP heap name="sga heap(4,0)"  desc=0x60052968
    FIVE LARGEST SUB HEAPS for heap name="sga heap(4,0)"   desc=0x60052968
    HEAP DUMP heap name="sga heap(5,0)"  desc=0x6005c190
    FIVE LARGEST SUB HEAPS for heap name="sga heap(5,0)"   desc=0x6005c190
    HEAP DUMP heap name="sga heap(6,0)"  desc=0x600659b8
    FIVE LARGEST SUB HEAPS for heap name="sga heap(6,0)"   desc=0x600659b8

    SQL> alter system set "_kghdsidx_count"=2 scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area  851443712 bytes
    Fixed Size                  2100040 bytes
    Variable Size             738198712 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                6287360 bytes
    Database mounted.
    Database opened.

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump heapdump 2;
    Statement processed.
    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_12003.trc

    [oracle@vrh8 ~]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12003.trc
    HEAP DUMP heap name="sga heap"  desc=0x60000058
    HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0
    HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d

    Changing cpu_count on its own does not change the number of subpools:

    SQL> alter system set cpu_count=16 scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area  851443712 bytes
    Fixed Size                  2100040 bytes
    Variable Size             738198712 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                6287360 bytes
    Database mounted.
    Database opened.

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump heapdump 2;
    Statement processed.
    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_12065.trc

    [oracle@vrh8 ~]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12065.trc
    HEAP DUMP heap name="sga heap"  desc=0x60000058
    HEAP DUMP heap name="sga heap(1,0)"  desc=0x600360b0
    HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003f8d8

    With automatic SGA management (sga_target > 0) each subpool is further split into durations:

    SQL> show parameter sga_target

    NAME                                 TYPE                             VALUE
    ------------------------------------ -------------------------------- ------------------------------
    sga_target                           big integer                      0

    SQL> alter system set sga_target=1000M scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area 1048576000 bytes
    Fixed Size                  2101544 bytes
    Variable Size             738201304 bytes
    Database Buffers          301989888 bytes
    Redo Buffers                6283264 bytes
    Database mounted.
    Database opened.

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump heapdump 2;
    Statement processed.
    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_12148.trc

    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    [oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12148.trc
    HEAP DUMP heap name="sga heap"  desc=0x60000058
    HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690
    HEAP DUMP heap name="sga heap(1,1)"  desc=0x60037ee8
    HEAP DUMP heap name="sga heap(1,2)"  desc=0x60039740
    HEAP DUMP heap name="sga heap(1,3)"  desc=0x6003af98
    HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8
    HEAP DUMP heap name="sga heap(2,1)"  desc=0x60041710
    HEAP DUMP heap name="sga heap(2,2)"  desc=0x60042f68

    _enable_shared_pool_durations controls whether the shared pool duration feature introduced in 10g is used. It is implicitly false when sga_target is 0, and in 10.2.0.5 it is also false when cursor_space_for_time is set to true (cursor_space_for_time is deprecated as of 10.2.0.5). Setting it to false removes the durations again:

    SQL> alter system set "_enable_shared_pool_durations"=false scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area 1048576000 bytes
    Fixed Size                  2101544 bytes
    Variable Size             738201304 bytes
    Database Buffers          301989888 bytes
    Redo Buffers                6283264 bytes
    Database mounted.
    Database opened.

    SQL> oradebug setmypid;
    Statement processed.
    SQL> oradebug dump heapdump 2;
    Statement processed.
    SQL> oradebug tracefile_name
    /s01/admin/G10R25/udump/g10r25_ora_12233.trc

    Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    [oracle@vrh8 dbs]$ grep "sga heap" /s01/admin/G10R25/udump/g10r25_ora_12233.trc
    HEAP DUMP heap name="sga heap"  desc=0x60000058
    HEAP DUMP heap name="sga heap(1,0)"  desc=0x60036690
    HEAP DUMP heap name="sga heap(2,0)"  desc=0x6003feb8

    To sum up:
    1. _kghdsidx_count determines the number of shared pool subpools; its maximum value is 7, so there can be at most 7 shared pool subpools.
    2. With automatic SGA management in 10g the shared pool duration feature is active, and each subpool can be split into up to 4 sub-partitions (durations), e.g. sga heap(1,0), sga heap(1,1), sga heap(1,2) and sga heap(1,3). As the test shows, raising cpu_count on its own does not change the number of subpools once _kghdsidx_count has been set. The four durations are: session duration, instance duration (never freed), execution duration (freed fastest) and free memory; whether durations are used at all is controlled by _enable_shared_pool_durations.

    In 10gR1 shrinking the shared pool to grow the buffer cache was problematic: a buffer cache granule needs its granule header and metadata (buffer headers and, in RAC, Lock Elements) laid out inside the granule, while a shared pool granule could contain chunks belonging to different durations, so it was hard to free a whole granule and hand it over. In 10gR2 the buffer cache granule format no longer requires the granule header and buffer metadata to be contiguous, and chunks of the same shared pool duration are kept in the same granule, which makes moving granules between the buffer cache and the shared pool much smoother. From 10gR2 the streams pool behaves in a similar way (it has its own streams pool duration).

    reference : http://www.oracledatabase12g.com/archives/understanding-automatic-sga-memory-management.html

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML to make my weapons easier to create and to keep less code in the actual project file. So I started out making a basic XML document, something to just assign variables with, but no matter what I changed it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error, and when I added it, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values, and in the texture tag of my XML document I set it to that instance. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me to. My XML document:

    <XnaContent>
      <Asset Type="ConversationEngine.Weapon">
        <weaponStrength>0</weaponStrength>
        <damageModifiers>0</damageModifiers>
        <speed>0</speed>
        <magicDefense>0</magicDefense>
        <description>0</description>
        <identifier>0</identifier>
        <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture>
      </Asset>
    </XnaContent>

    My class to load the weapon XML:

    public static class LoadWeaponXML
    {
        static Weapon Weapons;

        public static Weapon WeaponLoad(ContentManager content, int id)
        {
            Weapons = content.Load<Weapon>(@"Weapons/" + id);
            return Weapons;
        }
    }

    public static class LoadWeaponTextures
    {
        public static Texture2D ironSword;

        public static void TextureLoad(ContentManager content)
        {
            ironSword = content.Load<Texture2D>("Sword");
        }
    }

    I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.
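
    One workaround that is often suggested for this kind of setup (only a sketch under my own assumptions, not a confirmed fix for the poster's project) is to keep the XML limited to plain serializable members, store the texture as a content asset name rather than a reference to a static field, and resolve the Texture2D after the Weapon has been loaded. The weaponTextureAsset member and the exact class layout below are hypothetical:

    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;

    namespace ConversationEngine
    {
        public class Weapon
        {
            // Plain members the XML content importer can populate directly.
            public int weaponStrength;
            public int damageModifiers;
            public int speed;
            public int magicDefense;
            public string description;
            public string identifier;

            // Asset name of the texture (e.g. "Sword"), stored as a string in the XML.
            public string weaponTextureAsset;

            // Resolved at runtime, never read from the XML.
            [ContentSerializerIgnore]
            public Texture2D weaponTexture;
        }

        public static class LoadWeaponXML
        {
            public static Weapon WeaponLoad(ContentManager content, int id)
            {
                // Load the plain data from the compiled XML asset...
                Weapon weapon = content.Load<Weapon>(@"Weapons/" + id);

                // ...then load the texture that the XML refers to by name.
                weapon.weaponTexture = content.Load<Texture2D>(weapon.weaponTextureAsset);
                return weapon;
            }
        }
    }

    The XML would then carry something like <weaponTextureAsset>Sword</weaponTextureAsset> instead of pointing at LoadWeaponTextures.ironSword.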

    Read the article

  • connecting to freenx server : configuration error

    - by Sandeep
    I am not able to pinpoint what is missing. I have configured freenx-server (useradd, passwd etc.). However, the server drops the connection after authentication. Please note my server is Ubuntu 10.04 and the client is a recent version. Below is the error log:

    NX> 203 NXSSH running with pid: 6016
    NX> 285 Enabling check on switch command
    NX> 285 Enabling skip of SSH config files
    NX> 285 Setting the preferred NX options
    NX> 200 Connected to address: 192.168.2.2 on port: 22
    NX> 202 Authenticating user: nx
    NX> 208 Using auth method: publickey
    HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0)
    NX> 105 hello NXCLIENT - Version 3.2.0
    NX> 134 Accepted protocol: 3.2.0
    NX> 105 SET SHELL_MODE SHELL
    NX> 105 SET AUTH_MODE PASSWORD
    NX> 105 login
    NX> 101 User: sandeep
    NX> 102 Password:
    NX> 103 Welcome to: ubuntu user: sandeep
    NX> 105 listsession --user="sandeep" --status="suspended,running" --geometry="1366x768x24+render" --type="unix-gnome"
    NX> 127 Sessions list of user 'sandeep' for reconnect:
    Display Type Session ID Options Depth Screen Status Session Name
    ------- ---------------- -------------------------------- -------- ----- -------------- ----------- ------------------------------
    NX> 148 Server capacity: not reached for user: sandeep
    NX> 105 startsession --link="lan" --backingstore="1" --encryption="1" --cache="16M" --images="64M" --shmem="1" --shpix="1" --strict="0" --composite="1" --media="0" --session="t" --type="unix-gnome" --geometry="1366x768" --client="linux" --keyboard="pc105/us" --screeninfo="1366x768x24+render"
    ssh_exchange_identification: Connection closed by remote host
    NX> 280 Exiting on signal: 15

    Read the article

  • What do I do when I get a Linux kernel bug?

    - by raldi
    I just bought a tiny computer called a fit-pc2 which came with a somewhat customized Ubuntu 9.10 installation. uname -a reports: Linux 2.6.31-34-fitpc2 #7 SMP Thu Apr 22 17:43:26 IDT 2010 i686 GNU/Linux It seems that after several hours of running with heavy network load, all networking ceases and I get the following in kern.log: BUG: unable to handle kernel paging request at ff09dfc0 IP: [<c0150300>] kthread_should_stop+0x10/0x20 *pde = 00000000 Oops: 0000 [#1] SMP last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/idVendor Modules linked in: binfmt_misc ppdev sbc_fitpc2_wdt snd_usb_audio snd_usb_lib i2c_isch sch_gpio snd_seq_dummy snd_hda_intel snd_pcm_oss snd_seq_oss snd_seq_midi snd_rawmidi snd_mixer_oss snd_seq_midi_event snd_seq snd_pcm snd_timer snd_page_alloc snd_seq_device iptable_filter ip_tables x_tables snd_hwdep lpc_sch snd psmouse rt2860sta(C) uvcvideo video pl2303 soundcore mfd_core output videodev v4l1_compat lirc_igorplugusb lirc_dev serio_raw lp parport usbhid r8169 mii iegd_mod drm agpgart Pid: 16, comm: kblockd/1 Tainted: G C (2.6.31-34-fitpc2 #7) SBC-FITPC2 EIP: 0060:[<c0150300>] EFLAGS: 00010246 CPU: 1 EIP is at kthread_should_stop+0x10/0x20 EAX: ff09dfc4 EBX: c180cbac ECX: 0109d000 EDX: f709df98 ESI: f709df98 EDI: c180cba0 EBP: f709dfb8 ESP: f709df90 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 Process kblockd/1 (pid: 16, ti=f709c000 task=f7084b60 task.ti=f709c000) Stack: c014c14d c180cba4 00000000 f7084b60 c0150770 f709dfa4 f709dfa4 f7023ef4 <0> c180cba0 c014c0d0 f709dfe0 c015047c 00000000 00000000 00000000 f709dfcc <0> f709dfcc c0150400 00000000 00000000 00000000 c0103ce7 f7023ef4 00000000 Call Trace: [<c014c14d>] ? worker_thread+0x7d/0xe0 [<c0150770>] ? autoremove_wake_function+0x0/0x40 [<c014c0d0>] ? worker_thread+0x0/0xe0 [<c015047c>] ? kthread+0x7c/0x90 [<c0150400>] ? kthread+0x0/0x90 [<c0103ce7>] ? 
kernel_thread_helper+0x7/0x10 Code: a6 8b 55 0c 8d 4d e0 89 f8 89 34 24 e8 7a fd ff ff 89 c3 eb 92 90 90 90 90 90 90 55 64 a1 00 80 76 c0 8b 80 70 02 00 00 89 e5 5d <8b> 40 fc c3 8d b6 00 00 00 00 8d bf 00 00 00 00 55 ba d7 86 62 EIP: [<c0150300>] kthread_should_stop+0x10/0x20 SS:ESP 0068:f709df90 CR2: 00000000ff09dfc0 ---[ end trace 06004df70b9cf435 ]--- BUG: unable to handle kernel paging request at ff09dfc8 IP: [<c0521bc8>] _spin_lock_irqsave+0x18/0x30 *pde = 00000000 Oops: 0002 [#2] SMP last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/idVendor Modules linked in: binfmt_misc ppdev sbc_fitpc2_wdt snd_usb_audio snd_usb_lib i2c_isch sch_gpio snd_seq_dummy snd_hda_intel snd_pcm_oss snd_seq_oss snd_seq_midi snd_rawmidi snd_mixer_oss snd_seq_midi_event snd_seq snd_pcm snd_timer snd_page_alloc snd_seq_device iptable_filter ip_tables x_tables snd_hwdep lpc_sch snd psmouse rt2860sta(C) uvcvideo video pl2303 soundcore mfd_core output videodev v4l1_compat lirc_igorplugusb lirc_dev serio_raw lp parport usbhid r8169 mii iegd_mod drm agpgart Pid: 16, comm: kblockd/1 Tainted: G D C (2.6.31-34-fitpc2 #7) SBC-FITPC2 EIP: 0060:[<c0521bc8>] EFLAGS: 00010086 CPU: 1 EIP is at _spin_lock_irqsave+0x18/0x30 EAX: 00000100 EBX: ff09dfc8 ECX: 00000286 EDX: ff09dfc8 ESI: f7084b60 EDI: ff09dfc4 EBP: f709dd88 ESP: f709dd88 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 Process kblockd/1 (pid: 16, ti=f709c000 task=f7084b60 task.ti=f709c000) Stack: f709dda4 c0127c0b 00000082 00000001 ff09dfc4 f7084b60 00000000 f709ddd0 <0> c0137fd2 00000086 f70954c4 00000000 f7098480 f709ddf0 f7094fc0 f7084b60 <0> 00000000 00000009 f709ddf0 c013c3f8 00000001 c1807c60 f709ddf0 f7084b60 Call Trace: [<c0127c0b>] ? complete+0x1b/0x60 [<c0137fd2>] ? mm_release+0x52/0xf0 [<c013c3f8>] ? exit_mm+0x18/0x110 [<c013c6db>] ? do_exit+0xfb/0x2e0 [<c013998a>] ? print_oops_end_marker+0x2a/0x30 [<c0522aab>] ? oops_end+0x8b/0xd0 [<c011eac4>] ? no_context+0xb4/0xd0 [<c011eb1d>] ? __bad_area_nosemaphore+0x3d/0x1a0 [<c0133a56>] ? load_balance_newidle+0x96/0x320 [<c011ec92>] ? bad_area_nosemaphore+0x12/0x20 [<c0524106>] ? do_page_fault+0x2f6/0x380 [<c012cc30>] ? finish_task_switch+0x50/0xe0 [<c0523e10>] ? do_page_fault+0x0/0x380 [<c0522006>] ? error_code+0x66/0x70 [<c0523e10>] ? do_page_fault+0x0/0x380 [<c0150300>] ? kthread_should_stop+0x10/0x20 [<c014c14d>] ? worker_thread+0x7d/0xe0 [<c0150770>] ? autoremove_wake_function+0x0/0x40 [<c014c0d0>] ? worker_thread+0x0/0xe0 [<c015047c>] ? kthread+0x7c/0x90 [<c0150400>] ? kthread+0x0/0x90 [<c0103ce7>] ? kernel_thread_helper+0x7/0x10 Code: 00 00 00 55 89 e5 f0 83 28 01 79 05 e8 02 ff ff ff 5d c3 55 89 c2 89 e5 9c 58 8d 74 26 00 89 c1 fa 90 8d 74 26 00 b8 00 01 00 00 <f0> 66 0f c1 02 38 e0 74 06 f3 90 8a 02 eb f6 89 c8 5d c3 90 8d EIP: [<c0521bc8>] _spin_lock_irqsave+0x18/0x30 SS:ESP 0068:f709dd88 CR2: 00000000ff09dfc8 ---[ end trace 06004df70b9cf436 ]--- Fixing recursive fault but reboot is needed! This seems to happen at least once a day. How do I even begin to debug this?

    Read the article

  • Approach on Software Development Architecture

    - by john ryan
    Hi, I am planning to standardize the way we create projects for our new work. Currently we use a 3-tier architecture with a single ClassLibrary project that contains both the Data Access Layer and the Business Layer, something like:

    Solution ClassLibrary
      > ClassLibrary Project:
        > DAL (folder) > DAL classes
        > BAL (folder) > BAL classes

    This class library DLL is then referenced by our presentation layer project, which is the application (web/desktop), something like:

    Solution WebUniversitySystem
      > Libraries (folder) > ClassLibrary.dll
      > WebUniversitySystem (Project):
        > reference to ClassLibrary.dll
        > pages etc.

    What I am planning to do now is something like:

    Solution WebUniversitySystem
      > DataAccess (Project)
      > BusinessLayer (Project)
        > references DAL
      > WebUniversitySystem (Project):
        > references BAL
        > pages etc.

    Is this OK, or is there a better approach we could follow? Thanks and regards
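
    As a rough illustration of that split (a sketch only; all of the names below, such as StudentRepository and StudentService, are hypothetical and not part of the question), the business layer project would reference the data access project, and the UI would reference only the business layer:

    // DataAccess project
    namespace WebUniversitySystem.DataAccess
    {
        public class StudentRepository
        {
            public string GetStudentName(int studentId)
            {
                // Placeholder for the real ADO.NET / ORM call.
                return "Student " + studentId;
            }
        }
    }

    // BusinessLayer project (references DataAccess)
    namespace WebUniversitySystem.BusinessLayer
    {
        using WebUniversitySystem.DataAccess;

        public class StudentService
        {
            private readonly StudentRepository _repository = new StudentRepository();

            public string GetDisplayName(int studentId)
            {
                // Business rules live here; the UI never talks to the DAL directly.
                return _repository.GetStudentName(studentId).ToUpper();
            }
        }
    }

    The presentation project (web or desktop) would then call StudentService and never reference the DataAccess assembly directly.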

    Read the article

  • Allow email from a particular sender through spam filter

    - by Greg
    We are running exchange 2010 and are using the built in anti-spam feature. We have set up Content Filtering, IP Block List Providers, Sender ID, Sender Reputation and it filters out most of the junk but it also quarantines all emails from one of our customers. It is being quarantined because of the Content Filter agent (Report Below). How can I add an exception for this email address to the Content Filter. I can see how to setup an exception for a delivery address ("Don't filter messages sent TO the following recipients") but I want to add [email protected] to our safe list. I don't want to add the whole domain as it is a very popular ISP in Australia and we often get junk from them. Filter Report: > Diagnostic information for administrators: > > Generating server: something.com > > [email protected] > #550 5.2.1 Content Filter agent quarantined this message ## > > Original message headers: > > Received: from icp-osb-irony-out4.external.iinet.net.au (203.59.1.220) > by server.local.something.com.au (192.5.0.105) with Microsoft SMTP > Server id > 14.1.218.12; Mon, 5 Nov 2012 02:40:40 +1100 X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: > AscOALeLllB8qwLw/2dsb2JhbABEKYUFhiigRQOWCwQEgQiBCIIZFAEBTiwCCAIBBwEIFDkBBBoqARoCAQIDAYd4uEuRXGEDiCWFT44UijeDAw > X-IronPort-AV: E=Sophos;i="4.80,710,1344182400"; > d="scan'208,217";a="55137861" Received: from unknown (HELO > asdf83c05c53a3) ([124.171.2.240]) by icp-osb-irony-out4.iinet.net.au > with ESMTP; 04 Nov 2012 23:40:26 +0800 Message-ID: > <E8C866D0299E4BCB8B156723893EB735@asdf83c05c53a3> From: Customer > <[email protected]> To: 'Person' <[email protected]> > Subject: A long sentance Date: Mon, 5 Nov 2011 06:07:57 +1100 > MIME-Version: 1.0 Content-Type: multipart/alternative; > boundary="----=_NextPart_000_0005_01C5F962.3CD09120" X-Priority: 3 > X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express > 6.00.2900.5931 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.6157 Return-Path: [email protected] Received-SPF: None > (server.local.something.com.au: [email protected] does not > designate permitted sender hosts)

    Read the article

  • How to hide subfolder when using Web.config for subdomains?

    - by mc-kay
    I have FTP access to my ASP.NET webspace (IIS 7) and I route subdomains with a Web.config in the web root folder. It looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="route www and emtpy requests" stopProcessing="true">
              <match url=".*" />
              <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                <add input="{HTTP_HOST}" pattern="^(www.)?example.com" />
                <add input="{PATH_INFO}" pattern="^/www/" negate="true" />
              </conditions>
              <action type="Rewrite" url="\www\{R:0}" />
            </rule>
            <rule name="route to blog" stopProcessing="true">
              <match url=".*" />
              <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                <add input="{HTTP_HOST}" pattern="^blog.example.com$" />
                <add input="{PATH_INFO}" pattern="^/blog/" negate="true" />
              </conditions>
              <action type="Rewrite" url="\blog\{R:0}" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

    As you can see, I have two folders in my root directory: "www" and "blog". When I enter "blog.example.com" everything works fine, but when I click a link I end up at "blog.example.com/blog". What can I do to prevent this behavior?

    Read the article

  • How can I read pcap files in a friendly format?

    - by Tony
    a simple cat on the pcap file looks terrible: $cat tcp_dump.pcap ?ò????YVJ? JJ ?@@.?E<??@@ ?CA??qe?U?????h? .Ceh?YVJ?? JJ ?@@.?E<??@@ CA??qe?U?????z? .ChV?YVJ$?JJ ?@@.?E<-/@@A?CA??9????F???A&? .Ck??YVJgeJJ@@.??#3E<@3{n??9CA??P???F???<K? ??`.Ck??YVJgeBB ?@@.?E4-0@@AFCA??9????F?P????? .Ck???`?YVJ?""@@.??#3E?L@3?I??9CA??P???F????? ???.Ck?220-rly-da03.mx etc. I tried to make it prettier with: sudo tcpdump -ttttnnr tcp_dump.pcap reading from file tcp_dump.pcap, link-type EN10MB (Ethernet) 2009-07-09 20:57:40.819734 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776168808 0,nop,wscale 5> 2009-07-09 20:57:43.819905 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776169558 0,nop,wscale 5> 2009-07-09 20:57:47.248100 IP 67.23.28.65.42385 > 205.188.159.57.25: S 2644526720:2644526720(0) win 5840 <mss 1460,sackOK,timestamp 776170415 0,nop,wscale 5> 2009-07-09 20:57:47.288103 IP 205.188.159.57.25 > 67.23.28.65.42385: S 1358829769:1358829769(0) ack 2644526721 win 5792 <mss 1460,sackOK,timestamp 4292123488 776170415,nop,wscale 2> 2009-07-09 20:57:47.288103 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 1 win 183 <nop,nop,timestamp 776170425 4292123488> 2009-07-09 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: P 1:481(480) ack 1 win 1448 <nop,nop,timestamp 4292123568 776170425> 2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 481 win 216 <nop,nop,timestamp 776170445 4292123568> 2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: P 1:18(17) ack 481 win 216 <nop,nop,timestamp 776170445 4292123568> 2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: . ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445> 2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: P 481:536(55) ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445> 2009-07-09 20:57:47.404109 IP 67.23.28.65.42385 > 205.188.159.57.25: P 18:44(26) ack 536 win 216 <nop,nop,timestamp 776170454 4292123606> 2009-07-09 20:57:47.444112 IP 205.188.159.57.25 > 67.23.28.65.42385: P 536:581(45) ack 44 win 1448 <nop,nop,timestamp 4292123644 776170454> 2009-07-09 20:57:47.484114 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 581 win 216 <nop,nop,timestamp 776170474 4292123644> 2009-07-09 20:57:47.616121 IP 67.23.28.65.42385 > 205.188.159.57.25: P 44:50(6) ack 581 win 216 <nop,nop,timestamp 776170507 4292123644> 2009-07-09 20:57:47.652123 IP 205.188.159.57.25 > 67.23.28.65.42385: P 581:589(8) ack 50 win 1448 <nop,nop,timestamp 4292123855 776170507> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: . 
ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: P 50:56(6) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: F 56:56(0) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855> 2009-07-09 20:57:47.668124 IP 67.23.28.65.49239 > 216.239.113.101.25: S 2642380481:2642380481(0) win 5840 <mss 1460,sackOK,timestamp 776170520 0,nop,wscale 5> 2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: P 589:618(29) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516> 2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0 2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: F 618:618(0) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516> 2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0 Well...that is much prettier but it doesn't show the actual messages. I can actually extract more information just viewing the RAW file. What is the best ( and preferably easiest) way to just view all the contents of the pcap file? UPDATE Thanks to the responses below, I made some progress. Here is what it looks like now: tcpdump -qns 0 -A -r blah.pcap 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: tcp 480 0x0000: 4500 0214 834c 4000 3306 f649 cdbc 9f39 [email protected] 0x0010: 4317 1c41 0019 a591 50fe 18ca 9da0 4681 C..A....P.....F. 0x0020: 8018 05a8 848f 0000 0101 080a ffd4 9bb0 ................ 0x0030: 2e43 6bb9 3232 302d 726c 792d 6461 3033 .Ck.220-rly-da03 0x0040: 2e6d 782e 616f 6c2e 636f 6d20 4553 4d54 .mx.aol.com.ESMT 0x0050: 5020 6d61 696c 5f72 656c 6179 5f69 6e2d P.mail_relay_in- 0x0060: 6461 3033 2e34 3b20 5468 752c 2030 3920 da03.4;.Thu,.09. 0x0070: 4a75 6c20 3230 3039 2031 363a 3537 3a34 Jul.2009.16:57:4 0x0080: 3720 2d30 3430 300d 0a32 3230 2d41 6d65 7.-0400..220-Ame 0x0090: 7269 6361 204f 6e6c 696e 6520 2841 4f4c rica.Online.(AOL 0x00a0: 2920 616e 6420 6974 7320 6166 6669 6c69 ).and.its.affili 0x00b0: 6174 6564 2063 6f6d 7061 6e69 6573 2064 ated.companies.d etc. This looks good, but it still makes the actual message on the right difficult to read. Is there a way to view those messages in a more friendly way? UPDATE This made it pretty: tcpick -C -yP -r tcp_dump.pcap Thanks!

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 2: The Service

    - by Your DisplayName here!
    OK – so let’s first start with a simple WCF service and connect that to ADFS 2 for authentication. The service itself simply echoes back the user’s claims – just so we can make sure it actually works and to see how the ADFS 2 issuance rules emit claims for the service: [ServiceContract(Namespace = "urn:leastprivilege:samples")] public interface IService {     [OperationContract]     List<ViewClaim> GetClaims(); } public class Service : IService {     public List<ViewClaim> GetClaims()     {         var id = Thread.CurrentPrincipal.Identity as IClaimsIdentity;         return (from c in id.Claims                 select new ViewClaim                 {                     ClaimType = c.ClaimType,                     Value = c.Value,                     Issuer = c.Issuer,                     OriginalIssuer = c.OriginalIssuer                 }).ToList();     } } The ViewClaim data contract is simply a DTO that holds the claim information. Next is the WCF configuration – let’s have a look step by step. First I mapped all my http based services to the federation binding. This is achieved by using .NET 4.0’s protocol mapping feature (this can be also done the 3.x way – but in that scenario all services will be federated): <protocolMapping>   <add scheme="http" binding="ws2007FederationHttpBinding" /> </protocolMapping> Next, I provide a standard configuration for the federation binding: <bindings>   <ws2007FederationHttpBinding>     <binding>       <security mode="TransportWithMessageCredential">         <message establishSecurityContext="false">           <issuerMetadata address="https://server/adfs/services/trust/mex" />         </message>       </security>     </binding>   </ws2007FederationHttpBinding> </bindings> This binding points to our ADFS 2 installation metadata endpoint. This is all that is needed for svcutil (aka “Add Service Reference”) to generate the required client configuration. I also chose mixed mode security (SSL + basic message credential) for best performance. This binding also disables session – you can control that via the establishSecurityContext setting on the binding. This has its pros and cons. Something for a separate blog post, I guess. Next, the behavior section adds support for metadata and WIF: <behaviors>   <serviceBehaviors>     <behavior>       <serviceMetadata httpsGetEnabled="true" />       <federatedServiceHostConfiguration />     </behavior>   </serviceBehaviors> </behaviors> The next step is to add the WIF specific configuration (in <microsoft.identityModel />). First we need to specify the key material that we will use to decrypt the incoming tokens. This is optional for web applications but for web services you need to protect the proof key – so this is mandatory (at least for symmetric proof keys, which is the default): <serviceCertificate>   <certificateReference storeLocation="LocalMachine"                         storeName="My"                         x509FindType="FindBySubjectDistinguishedName"                         findValue="CN=Service" /> </serviceCertificate> You also have to specify which incoming tokens you trust. This is accomplished by registering the thumbprint of the signing keys you want to accept. 
You get this information from the signing certificate configured in ADFS 2: <issuerNameRegistry type="...ConfigurationBasedIssuerNameRegistry">   <trustedIssuers>     <add thumbprint="d1 … db"           name="ADFS" />   </trustedIssuers> </issuerNameRegistry> The last step (promised) is to add the allowed audience URIs to the configuration – WCF clients use (by default – and we’ll come back to this) the endpoint address of the service: <audienceUris>   <add value="https://machine/soapadfs/service.svc" /> </audienceUris> OK – that’s it – now we have a basic WCF service that uses ADFS 2 for authentication. The next step will be to set-up ADFS to issue tokens for this service. Afterwards we can explore various options on how to use this service from a client. Stay tuned… (if you want to have a look at the full source code or peek at the upcoming parts – you can download the complete solution here)
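
    The ViewClaim type itself is not shown in the post; a minimal sketch that matches the projection used in GetClaims() could look like the following (the DataContract/DataMember attributes and the namespace are my assumptions, not taken from the original):

    using System.Runtime.Serialization;

    namespace LeastPrivilege.Samples
    {
        // Simple DTO carrying the claim fields projected in GetClaims().
        [DataContract]
        public class ViewClaim
        {
            [DataMember]
            public string ClaimType { get; set; }

            [DataMember]
            public string Value { get; set; }

            [DataMember]
            public string Issuer { get; set; }

            [DataMember]
            public string OriginalIssuer { get; set; }
        }
    }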

    Read the article

  • need advice for small css issue [migrated]

    - by JaPerk14
    I have a small design problem in my css, and I'd like to know if someone could check it out for me. The design problem is in the rollover effect of my horizontal navigation. There seems to be some sort of added margin or padding, but I'm having trouble finding the problem in the css. I will paste the code I'm using below, so you can see for yourself. You won't be able to see the problem until you rollover the navigation list items. HTML: <div class="Horiznav"> <ul> <li id="active"><a href="#">Link #1</a></li> <li><a href="#">Link #2</a></li> <li><a href="#">Link #3</a></li> <li><a href="#">Link #4</a></li> <li><a href="#">Link #5</a></li> </ul> </div> CSS: .Horiznav { background: #1F00CA; border-top: solid 1px #fff; border-bottom: solid 1px #fff; } .Horiznav ul { font-family: Arial, Helvetica, sans-serif; font-weight: bold; color: #fff; text-Align: center; margin: 0; padding-top: 5px; padding-bottom: 5px; } .Horiznav ul li { display: inline; } .Horiznav ul li a { padding-top: 5px; padding-bottom: 5px; color: #fff; text-decoration: none; border-right: 1px solid #fff; } .Horiznav ul li a:hover { background: #16008D; color: #fff; } #active a { border-left: 1px solid #fff; }

    Read the article

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it was either not working on http or on https. How can I do this?

    Update: If I enable SSL and then visit the server with http, I get a page like this:

    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>400 Bad Request</title>
    </head><body>
    <h1>Bad Request</h1>
    <p>Your browser sent a request that this server could not understand.<br />
    Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
    Instead use the HTTPS scheme to access this URL, please.<br />
    <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
    <hr>
    <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
    </body></html>

    Because of this, it seems very much possible to have both http and https on the same port. A first step would be to change this default page so it would present a 301 Moved header.

    Update2: According to this, it is possible. Now the question is just how to configure apache to do it.

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >