Search Results

Search found 6542 results on 262 pages for 'undocumented behavior'.


  • Query Execution Failed in Reporting Services reports

    - by Chris Herring
    I have some Reporting Services reports that talk to Analysis Services, and at times they fail with the following error:

        An error occurred during client rendering. An error has occurred during report processing. Query execution failed for dataset 'AccountManagerAccountManager'. The connection cannot be used while an XmlReader object is open.

    This occurs sometimes when I change selections in the filter. It also occurs when the machine has been under heavy load, and it will then consistently error until SSAS is restarted. The log file contains the following error:

        processing!ReportServer_0-18!738!04/06/2010-11:01:14:: e ERROR: Throwing Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'., ;
        Info: Microsoft.ReportingServices.ReportProcessing.ReportProcessingException: Query execution failed for dataset 'AccountManagerAccountManager'.
        ---> System.InvalidOperationException: The connection cannot be used while an XmlReader object is open.
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.CheckConnection()
        at Microsoft.AnalysisServices.AdomdClient.XmlaClient.ExecuteStatement(String statement, IDictionary connectionProperties, IDictionary commandProperties, IDataParameterCollection parameters, Boolean isMdx)
        at Microsoft.AnalysisServices.AdomdClient.AdomdConnection.XmlaClientProvider.Microsoft.AnalysisServices.AdomdClient.IExecuteProvider.ExecuteTabular(CommandBehavior behavior, ICommandContentProvider contentProvider, AdomdPropertyCollection commandProperties, IDataParameterCollection parameters)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.AnalysisServices.AdomdClient.AdomdCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.DataExtensions.AdoMdCommand.ExecuteReader(CommandBehavior behavior)
        at Microsoft.ReportingServices.OnDemandProcessing.RuntimeDataSet.RunDataSetQuery()

    Can anyone shed light on this issue?
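
    The exception generally points at a second command being executed on an AdomdConnection while an earlier reader is still open on it. A minimal sketch of the pattern that avoids this (assuming the ADOMD.NET client; the connection string and MDX are placeholders):

        // Sketch: dispose each reader before the connection is reused,
        // so no XmlReader remains open on the AdomdConnection.
        using Microsoft.AnalysisServices.AdomdClient;

        class ReaderDisposalSketch
        {
            static void Main()
            {
                using (var conn = new AdomdConnection("Data Source=localhost;Catalog=MyCube"))
                {
                    conn.Open();
                    using (var cmd = new AdomdCommand("SELECT {} ON COLUMNS FROM [MyCube]", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* consume rows */ }
                    } // reader disposed here; the connection can be used again
                }
            }
        }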

    Read the article

  • PHP-FPM processes holding onto MongoDB connection states

    - by Brendan
    For the relevant part of our server stack, we're running:

        NGINX 1.2.3
        PHP-FPM 5.3.10 with PECL mongo 1.2.12
        MongoDB 2.0.7
        CentOS 6.2

    We're getting some strange, but predictable, behavior when the MongoDB server goes away (crashes, gets killed, etc.), even with a try/catch block around the connection code, i.e.:

        try {
            $mdb = new Mongo('mongodb://localhost:27017');
        } catch (MongoConnectionException $e) {
            die( $e->getMessage() );
        }
        $db = $mdb->selectDB('collection_name');

    Depending on which PHP-FPM workers have connected to Mongo already, the connection state is cached, causing further exceptions to go unhandled, because the $mdb connection handler can't be used. The troubling thing is that the try does not consistently fail for a considerable amount of time, up to 15 minutes later, when -- I assume -- the PHP-FPM processes die/respawn. Essentially, the behavior is that when you hit a worker that hasn't connected to Mongo yet, you get the die message above, and when you connect to a worker that has, you get an unhandled exception from $mdb->selectDB('collection_name'); because the catch does not run. When PHP is a single process, i.e. via Apache with mod_php, this behavior does not occur. Just for posterity, going back to Apache/mod_php is not an option for us at this time. Is there a way to fix this behavior? I don't want the connection state to be inconsistent between different PHP-FPM processes.

    Read the article

  • Defining Your Online Segmentation and Targeting Strategy

    - by Christie Flanagan
    A lot of times, companies will put online segmentation and targeting on the back burner because they don't know where to start. Often, I've heard web managers say that their segments aren't well understood yet, so they can't really deliver personalized online experiences that are meaningful. This lack of complete understanding means that they don't really bother to try. But I don't think you necessarily need to have an elaborate segmentation and targeting strategy already in place to start delivering a more relevant online customer experience. Sometimes it helps to think of how segmentation and targeting might solve some of the challenges your site's visitors are currently experiencing on your web presence, rather than doing nothing and waiting until a fully baked segmentation strategy lands in your inbox. For example, perhaps you have a broad and varied service offering that makes it difficult for site visitors to easily find the solutions that are most relevant for them. How can segmentation and targeting help solve this problem? Or maybe it's like the airline I described in Monday's post, where the special deals featured on the home page are only relevant to site visitors from a couple of cities. Couldn't segmentation and targeting help them highlight offers on their home page that are relevant to a larger share of their site visitors?

    Your early segmentation and targeting efforts do not need to be complicated. There are simple ways to start delivering a more relevant online customer experience, even if you're dealing with anonymous site visitors. These include targeting content to site visitors based on:

      • Referral: Deliver targeted content to your site visitors based on where they came from or the search term they used to find your site
      • Behavior: Deliver content to your site visitors that is related or similar to content they've clicked on already
      • Location: Deliver content to your site visitors that is most relevant for their geographic location (this would solve that pesky airline home page problem described above)

    So as you can see, there really are some very simple ways in which you can start improving your online customer experience using very basic segmentation and targeting methods. One thing to keep in mind as you start to define your segmentation and targeting strategy is that there are many different types of attributes, or combinations of attributes, upon which you can base it. In addition to referral, behavior and location, other attributes that you should consider are:

      • Profile information: What profile information do you know about this customer already? Perhaps they provided some information on their interests and preferences when they first registered with your site.
      • Time: What time is it, and how does that impact what my site visitors are looking for or trying to do?
      • Demographics: What are my site visitors' ages, incomes or ethnicities?

    Which attributes you select to include in your segmentation strategy will depend on your unique business needs and objectives. Attributes such as behavior or referral may not be the most important targeting criteria, depending on your situation. For example, if you're a newspaper, you might know that certain visitors are sports fans based on their profile information. You can create a segment for sports fans and target sports-related content to that segment of your readership online. Or perhaps a reader is browsing stories that are related to politics; you can use that visitor's behavior to assign him or her to a segment for those interested in politics. From there you can recommend more stories to that visitor based on their interest in politics. For an airline, the visitor's location may be a more important attribute. By detecting the visitor's location, you can assign them to an appropriate segment and then target special flights and offers to them based on their likely departure airport. As you can see, there are many practical ways that you can start improving the experience your customers receive on your web presence using fairly basic segmentation and targeting techniques. If you want to learn more about segmentation and targeting using Oracle's web experience management solution, check out this helpful video that demonstrates these powerful capabilities in Oracle WebCenter Sites.

    ***** On Demand Webcast Featuring Brian Solis of Altimeter Group: Trends such as the mobile web, social media, gamification and real-time are changing customer behavior and expectations. In this new environment, many businesses will struggle. Some will fall by the wayside, while others learn to adapt and thrive. Watch this on demand webcast with Altimeter Group digital analyst and author, Brian Solis, and discover what your organization needs to know about how to compete in the new era of Digital Darwinism. View now.

    Read the article

  • BizTalk – Routing failure on Delivery Notifications (BizTalk 2006 R2 to 2013)

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/11/11/biztalk-routing-failure-on-delivery-notifications.aspx

    This is a detailed explanation of something I posted a few months ago on Stack Overflow, concerning a weird behavior (a bug, really…) of the delivery notifications in BizTalk.

    Reminder: what are delivery notifications

    Mechanism: BizTalk has the ability to automatically publish positive acknowledgments (ACKs) when it has succeeded transmitting a message, or negative acknowledgments (NACKs) in case of a transmission failure. Orchestrations can use delivery notifications to subscribe to those ACKs and NACKs in order to know whether a message sent on a one-way send port has been successfully transmitted. Delivery notifications can be "activated" in two ways:

      • The most common and easy way is to set the Delivery Notification property of a logical send port (in the orchestration designer) to Transmitted.
      • Another way is to set the BTS.AckRequired context property of the message to be sent to true.

    NOTE: fundamentally, those methods are strictly equivalent, since setting Delivery Notification to Transmitted on the send port only tells BizTalk to set the BTS.AckRequired context property to true on the outgoing message.

    Related context properties: ACKs and NACKs have a common set of promoted context properties, which are:

        AckType                       Equals ACK when successful or NACK otherwise
        AckID                         MessageID of the message concerned by the acknowledgment
        AckOwnerID                    InstanceID of the instance associated with the acknowledgment
        AckSendPortID                 ID of the send port
        AckSendPortName               Name of the send port
        AckOutboundTransportLocation  URI of the send port
        AckReceivePortID              ID of the port the message came from
        AckReceivePortName            Name of the port the message came from
        AckInboundTransportLocation   URI of the port the message came from

    Detailed behavior: the way delivery notifications are handled by BizTalk is peculiar compared to the standard behavior of the Message Box: if no active subscription exists for the acknowledgment, it is simply discarded. The direct consequence of this is that there can be no routing failure for an acknowledgment, and an acknowledgment cannot be suspended. Moreover, when a message is sent to a send port where Delivery Notification = Transmitted, a correlation set is initialized and a correlation token is attached to the message (context property: CorrelationToken). This correlation token will also be attached to the acknowledgment, so when the acknowledgment is issued, it is automatically routed to the source orchestration. Finally, when a NACK is received by the source orchestration, a DeliveryFailureException is thrown, which can be caught in a Catch section.

    Context of the problem: consider this scenario. In an orchestration, delivery notifications are activated on a one-way send port. In case of a transmission failure, the messaging instance is suspended and the orchestration catches an exception (DeliveryFailureException). When the exception is caught, the orchestration does some logging and then terminates (thanks to a Terminate shape). So that leaves only the suspended messaging instance, waiting to be resumed.

    Symptoms: once the problem that caused the transmission failure is solved, the messaging instance is resumed. Considering what was said in the reminder, we would expect the instance to complete, leaving no active or suspended instance. Nevertheless, the result is that the messaging instance is once more suspended, this time because of a routing failure. The routing failure report shows that the suspended message has the acknowledgment properties listed above attached to it.

    Explanation: those properties clearly indicate that the message being suspended is an acknowledgment (an ACK in this case), which was published in the message box and was suspended because no subscribers were found. This makes sense, since the source orchestration was terminated before we resumed the messaging instance. So its subscription to the acknowledgments was no longer active when the ACK was published, which explains the routing failure. But this behavior is in direct contradiction with what was said earlier: an acknowledgment must be discarded when no subscriber is found, and therefore should not be suspended.

    Cause: it is indeed an outright bug, which appeared with SP1 of BizTalk 2006 R2 and has never been corrected since then: not in the next 4 CUs, not in BizTalk 2009, not in 2010 and not even in 2013 – though I haven't tested CU1 and CU2 for this last edition, I bet there is nothing to be expected from those CUs (on this particular point).

    Side effects: this bug can have pretty nasty side effects, because the behavior can be propagated to other ports due to routing mechanisms. For instance: you have configured the ESB Toolkit and have activated "Enable routing failure for failed messages". The result will be that the ESB Exception SQL send port will also try to publish ACKs or NACKs concerning its own messaging instances. In itself this is already messy, but remember that those acknowledgments will also have the source correlation token attached to them… See how far it goes? Well, actually there is more: in SQL send ports, transactions will be rolled back because of the routing failure (I guess it also happens with other adapters – like Oracle – but I haven't tested them). Again, think of what happens when the send port is the ESB Exception send port: your BizTalk box is going mad, but you have no idea, since no exception can be written to the exception database! All of this can be tricky to diagnose, I can tell you that…

    Solution: there is no real solution, only a work-around, and it won't solve all of the problems and side effects. The idea is to create an orchestration which subscribes to all acknowledgments. That is to say:

      • The message type of the incoming message will be XmlDocument
      • The BTS.AckType property exists
      • The logical receive port will use direct binding

    By doing so, all acknowledgments will be consumed by an instance of this orchestration, thus avoiding the routing failure (a sketch of the subscription filter is shown below). In order not to pollute the HAT and the DTA database (after all, this orchestration is only meant to be a palliative for some faulty internal BizTalk mechanism, so there should be no trace of its execution), all tracking must be deactivated.
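
    As a sketch, the subscription described above corresponds to a filter expression on the orchestration's activating, direct-bound receive shape along these lines:

        BTS.AckType exists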

    Read the article

  • C# WCF - Failed to invoke the service.

    - by Keith Barrows
    I am getting the following error when trying to use the WCF Test Client to hit my new web service. What is weird is that every once in a while it will execute once, then start popping this error:

        Failed to invoke the service. Possible causes: The service is offline or inaccessible; the client-side configuration does not match the proxy; the existing proxy is invalid. Refer to the stack trace for more detail. You can try to recover by starting a new proxy, restoring to default configuration, or refreshing the service.

    My code (interface):

        [ServiceContract(Namespace = "http://rivworks.com/Services/2010/04/19")]
        public interface ISync
        {
            [OperationContract]
            bool Execute(long ClientID);
        }

    My code (class):

        public class Sync : ISync
        {
            #region ISync Members
            bool ISync.Execute(long ClientID)
            {
                return model.Product(ClientID);
            }
            #endregion
        }

    My config (EDIT - posted entire serviceModel section):

        <system.serviceModel>
          <diagnostics performanceCounters="Default">
            <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" />
          </diagnostics>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="false" />
          <behaviors>
            <endpointBehaviors>
              <behavior name="JsonpServiceBehavior">
                <webHttp />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="SimpleServiceBehavior">
                <serviceMetadata httpGetEnabled="True" policyVersion="Policy15"/>
              </behavior>
              <behavior name="RivWorks.Web.Service.ServiceBehavior">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true"/>
                <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="true"/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="RivWorks.Web.Service.NegotiateService" behaviorConfiguration="SimpleServiceBehavior">
              <endpoint address="" binding="customBinding" bindingConfiguration="jsonpBinding" behaviorConfiguration="JsonpServiceBehavior" contract="RivWorks.Web.Service.NegotiateService" />
              <!--<host>
                <baseAddresses>
                  <add baseAddress="http://kab.rivworks.com/services"/>
                </baseAddresses>
              </host>
              <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.NegotiateService" />-->
              <endpoint address="mex" binding="mexHttpBinding" contract="RivWorks.Web.Service.NegotiateService" />
            </service>
            <service name="RivWorks.Web.Service.Sync" behaviorConfiguration="RivWorks.Web.Service.ServiceBehavior">
              <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.ISync" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <extensions>
            <bindingElementExtensions>
              <add name="jsonpMessageEncoding" type="RivWorks.Web.Service.JSONPBindingExtension, RivWorks.Web.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
            </bindingElementExtensions>
          </extensions>
          <bindings>
            <customBinding>
              <binding name="jsonpBinding" >
                <jsonpMessageEncoding />
                <httpTransport manualAddressing="true"/>
              </binding>
            </customBinding>
          </bindings>
        </system.serviceModel>

    Two questions:

      • What am I missing that causes this error?
      • How can I increase the time out for the service?

    TIA!
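
    On the second question, timeouts usually belong on the binding rather than the behavior. A sketch (values are illustrative, not recommendations) that the Sync endpoint could reference:

        <bindings>
          <wsHttpBinding>
            <!-- sketch: raise the timeouts for the Sync endpoint -->
            <binding name="syncBinding"
                     openTimeout="00:10:00" closeTimeout="00:10:00"
                     sendTimeout="00:10:00" receiveTimeout="00:10:00" />
          </wsHttpBinding>
        </bindings>

    with the endpoint referencing it:

        <endpoint address="" binding="wsHttpBinding" bindingConfiguration="syncBinding"
                  contract="RivWorks.Web.Service.ISync" />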

    Read the article

  • Combined SOAP/JSON/XML in WCF, using UriTemplate

    - by gregmac
    I'm trying to build a generic web service interface using WCF, to allow 3rd party developers to hook into our software. After much struggling and reading (this question helped a lot), I finally got SOAP, JSON and XML (POX) working together. To simplify, here's my code (to make this example simple, I'm not using interfaces -- I did try this both ways):

        <ServiceContract()> _
        Public Class TestService
            Public Sub New()
            End Sub

            <OperationContract()> _
            <WebGet()> _
            Public Function GetDate() As DateTime
                Return Now
            End Function

            '<WebGet(UriTemplate:="getdateoffset/{numDays}")> _
            <OperationContract()> _
            Public Function GetDateOffset(ByVal numDays As Integer) As DateTime
                Return Now.AddDays(numDays)
            End Function
        End Class

    and the web.config code:

        <services>
          <service name="TestService" behaviorConfiguration="TestServiceBehavior">
            <endpoint address="soap" binding="basicHttpBinding" contract="TestService"/>
            <endpoint address="json" binding="webHttpBinding" behaviorConfiguration="jsonBehavior" contract="TestService"/>
            <endpoint address="xml" binding="webHttpBinding" behaviorConfiguration="poxBehavior" contract="TestService"/>
            <endpoint address="mex" contract="IMetadataExchange" binding="mexHttpBinding" />
          </service>
        </services>
        <behaviors>
          <endpointBehaviors>
            <behavior name="jsonBehavior">
              <enableWebScript/>
            </behavior>
            <behavior name="poxBehavior">
              <webHttp />
            </behavior>
          </endpointBehaviors>
          <serviceBehaviors>
            <behavior name="TestServiceBehavior">
              <serviceMetadata httpGetEnabled="true"/>
              <serviceDebug includeExceptionDetailInFaults="false"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>

    This actually works -- I'm able to go to TestService.svc/xml/GetDate for xml, TestService.svc/json/GetDate for json, and point a SOAP client at TestService.svc?wsdl and have the SOAP queries work. The part I'd like to fix is the queries. I have to use TestService.svc/xml/GetDateOffset?numDays=4 instead of TestService.svc/xml/GetDateOffset/4. If I specify the UriTemplate, I get the error:

        Endpoints using 'UriTemplate' cannot be used with 'System.ServiceModel.Description.WebScriptEnablingBehavior'.

    But of course without using <enableWebScript/>, JSON doesn't work. The only other thing I've seen that I think will work is making 3 different services (.svc files) that all implement an interface specifying the contract, but with different WebGet/WebInvoke attributes on each class. This seems like a lot of extra work that, frankly, I don't see why the framework doesn't handle for me. The implementation of the classes would all be the same except for the attributes, which means over time it would be easy for bugs/changes to get fixed/done in one implementation but not the others, leading to inconsistent behavior when using the JSON vs SOAP implementation, for example. Am I doing something wrong here? Am I taking a totally wrong approach and misusing WCF? Is there a better way to do this? With my experience doing web stuff, I think it should be possible for some kind of framework to handle this ... I even have an idea in my head of how to build it. It just seems like WCF is supposed to be doing this, and I don't really want to reinvent the wheel.
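
    One direction worth sketching (not a verified fix): <enableWebScript/> exists for ASP.NET AJAX clients and is what rejects UriTemplate. A plain <webHttp/> behavior on the json endpoint accepts UriTemplate, with JSON selected per operation through WebGet's ResponseFormat property (note the JSON shape will differ from the "d"-wrapped enableWebScript format, and path variables bind as strings):

        <behavior name="jsonBehavior">
          <webHttp /> <!-- replaces enableWebScript; UriTemplate is then allowed -->
        </behavior>

    and, on the operation (matching the commented-out attribute above):

        <OperationContract()> _
        <WebGet(UriTemplate:="getdateoffset/{numDays}", ResponseFormat:=WebMessageFormat.Json)> _
        Public Function GetDateOffset(ByVal numDays As String) As DateTime
            ' UriTemplate path variables arrive as strings, so convert explicitly
            Return Now.AddDays(CInt(numDays))
        End Function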

    Read the article

  • Replacing build.xml with Build.java - using Java and the Ant libraries as a build system

    - by Dean Schulze
    I've grown disillusioned with Groovy-based alternatives to Ant. AntBuilder doesn't work from within Eclipse, the Groovy plugin for Eclipse is disappointing, and Gradle just isn't ready yet. The Ant documentation has a section titled "Using Ant Tasks Outside of Ant", which gives a teaser for how to use the Ant libraries from Java code. There's another example here: http://www.mail-archive.com/[email protected]/msg16310.html. In theory it seems simple enough to replace build.xml with Build.java. The Ant documentation hints at some undocumented dependencies that I'll have to discover (undocumented from the point of view of using Ant from within Java). Given the level of disappointment with Ant scripting, I wonder why this hasn't been done before. Perhaps it has been, and it just isn't a good build system. Has anyone tried writing build files in Java using the Ant libraries?
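
    For the record, a minimal Build.java along the lines the Ant docs suggest might look like this (a sketch; the src/ and build/ directories are assumed, and ant.jar must be on the classpath):

        import java.io.File;
        import org.apache.tools.ant.DefaultLogger;
        import org.apache.tools.ant.Project;
        import org.apache.tools.ant.taskdefs.Javac;
        import org.apache.tools.ant.types.Path;

        public class Build {
            public static void main(String[] args) {
                Project project = new Project();
                project.init(); // registers Ant's built-in tasks and types

                // console output comparable to Ant's command-line logger
                DefaultLogger logger = new DefaultLogger();
                logger.setOutputPrintStream(System.out);
                logger.setErrorPrintStream(System.err);
                logger.setMessageOutputLevel(Project.MSG_INFO);
                project.addBuildListener(logger);

                // drive the <javac> task directly from Java
                new File("build").mkdirs(); // <javac> requires an existing destdir
                Javac javac = new Javac();
                javac.setProject(project);
                javac.setTaskName("javac");
                javac.setSrcdir(new Path(project, "src"));
                javac.setDestdir(new File("build"));
                javac.setFork(true);
                javac.execute();
            }
        }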

    Read the article

  • What happens when a consumer switch receives a VLAN-tagged Ethernet frame?

    - by netvope
    Suppose you connect a trunk port from a VLAN-capable network switch to a (VLAN-incapable) consumer-grade network switch via a direct cable. Now the former switch sends the latter switch an 802.1Q-tagged Ethernet frame. What should the latter switch do? Drop the frame? Forward the frame? Undefined behavior? If the behavior is undefined, what is most probable?

    Edit: Thank you for your answers. To summarize, the behavior of the consumer switch depends on:

      • How it handles frames with 0x8100 in the EtherType field¹
      • How it handles jumbo frames, or frames with a payload larger than 1500 bytes

    Wikipedia has a nice diagram comparing an untagged and a tagged Ethernet frame. There are reports that some consumer-grade switches pass VLAN-tagged frames just fine.

    ¹ or, more precisely, where an EtherType field is expected for non-tagged frames

    Read the article

  • How should clients handle HTTP 401 with unknown authentication schemes?

    - by user113215
    What is the proper behavior for an HTTP client receiving a 401 Unauthorized response that specifies only unrecognized authentication schemes? My server supports Kerberos authentication using WWW-Authenticate: Negotiate. On the first request, the server sends a 401 Unauthorized response with a body containing an HTML document. The behavior that I expect is for clients that support Kerberos to perform that authentication, and for other clients to simply display the HTML document (a login form). It seems that most of the "other clients" I've encountered do work this way, but a few do not. I haven't found anything that mandates any particular behavior in this situation. There's a brief mention in RFC 2617: HTTP Authentication: Basic and Digest Access Authentication, but is there anything more concrete?

        It is possible that a server may want to require Digest as its authentication method, even if the server does not know that the client supports it. A client is encouraged to fail gracefully if the server specifies only authentication schemes it cannot handle.
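
    For what it's worth, the behavior I expect can be sketched in client code like this (C#; the supported scheme list and URL are placeholders):

        using System;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AuthFallbackSketch
        {
            // schemes this hypothetical client knows how to perform
            static readonly string[] SupportedSchemes = { "Basic", "Digest" };

            static async Task Main()
            {
                using var client = new HttpClient();
                HttpResponseMessage response = await client.GetAsync("http://example.com/protected");
                if (response.StatusCode == HttpStatusCode.Unauthorized)
                {
                    bool anyKnown = response.Headers.WwwAuthenticate
                        .Any(h => SupportedSchemes.Contains(h.Scheme, StringComparer.OrdinalIgnoreCase));
                    if (!anyKnown)
                    {
                        // e.g. only "WWW-Authenticate: Negotiate" was offered -- fail gracefully
                        // by showing the response body (the HTML login form) instead of erroring.
                        Console.WriteLine(await response.Content.ReadAsStringAsync());
                    }
                }
            }
        }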

    Read the article

  • RHEL/CentOS vs. Ubuntu (possibly other Debian-based systems): handling duplicate IPs in the same subnet

    - by johnshen64
    This has bothered me for quite a while, but I never found out why, or how to change the behavior. IP duplicates can be caused by typos or DHCP errors, etc., but they do occur from time to time. In RPM-based systems such as CentOS, the old server with the duplicate IP wins, and the new server will get an error when bringing up the NIC (IP address already used). This is somewhat harmless, because we can just fix the system that is coming up. Ubuntu, on the other hand, happily grabs the used IP for itself and leaves the old server/device without a valid IP. This is the more dangerous behavior, because it causes outages. What I want is to change the Ubuntu behavior to that of CentOS/RHEL, so I would appreciate any help.
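
    For context, the RHEL-family ifup scripts refuse to assign an address after a failed duplicate-address probe; a rough equivalent that could be scripted on Ubuntu (interface name and address are placeholders) is:

        # probe for a duplicate before assigning the address (iputils arping);
        # -D is duplicate address detection mode, exit status 0 means the address is free
        arping -D -c 2 -I eth0 192.168.1.10 \
          && ip addr add 192.168.1.10/24 dev eth0 \
          || echo "address already in use -- not configuring"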

    Read the article

  • maxItemsInObjectGraph ignored...

    - by Palantir
    Hi! I have a problem with a WCF service which tries to serialize too much data. From the trace I get an error which says that the maximum number of elements that can be serialized or deserialized is '65536'; try to increase the MaxItemsInObjectGraph quota. So I went and modified this value, but it is just ignored (the error is the same, with the same number). All this is server-side. I am calling the service via wget for the moment. My web.config is like this:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AlgoMap.Web.MapService.MapServiceBehavior">
                <dataContractSerializer maxItemsInObjectGraph="131072" />
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <customBinding>
              <binding name="customBinding0" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:02:00">
                <binaryMessageEncoding>
                  <readerQuotas maxDepth="64" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="16384" maxNameTableCharCount="16384" />
                </binaryMessageEncoding>
                <httpTransport />
              </binding>
            </customBinding>
          </bindings>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service behaviorConfiguration="AlgoMap.Web.MapService.MapServiceBehavior" name="AlgoMap.Web.MapService.MapService">
              <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="AlgoMap.Web.MapService.MapService" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>

    Version 2, not working either:

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="AlgoMap.Web.MapService.MapServiceEndpointBehavior">
                <dataContractSerializer maxItemsInObjectGraph="131072" />
              </behavior>
            </endpointBehaviors>
            <serviceBehaviors>
              <behavior name="AlgoMap.Web.MapService.MapServiceBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <customBinding>
              <binding name="customBinding0" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:02:00">
                <binaryMessageEncoding>
                  <readerQuotas maxDepth="64" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="16384" maxNameTableCharCount="16384" />
                </binaryMessageEncoding>
                <httpTransport />
              </binding>
            </customBinding>
          </bindings>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service behaviorConfiguration="AlgoMap.Web.MapService.MapServiceBehavior" name="AlgoMap.Web.MapService.MapService">
              <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="AlgoMap.Web.MapService.MapService" behaviorConfiguration="AlgoMap.Web.MapService.MapServiceEndpointBehavior" />
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" behaviorConfiguration="AlgoMap.Web.MapService.MapServiceEndpointBehavior" />
            </service>
          </services>
        </system.serviceModel>

    Can anyone help?? Thanks!!
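
    For what it's worth, the quota only takes effect on the behavior the serializing endpoint actually uses, and a calling client enforces its own copy of the setting as well. On .NET 4 the quota can also be pinned in code, so mis-wired configuration can't silently drop it (a sketch):

        // Sketch (requires .NET 4 or later): pin the quota on the service class itself,
        // independent of behaviorConfiguration wiring in web.config.
        [ServiceBehavior(MaxItemsInObjectGraph = 131072)]
        public class MapService
        {
            // ... service implementation unchanged ...
        }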

    Read the article

  • WCF over http settings needed

    - by Crishna
    Hello, we currently have a WCF service that works over https, but we want to change it to work over just http. Could anyone tell me everything I need to change to make the WCF service work over http? Below are my config file values. Is there anything else I need to change other than the web.config? Any help greatly appreciated.

        <bindings>
          <basicHttpBinding>
            <binding name="basicHttpBinding_Windows" maxReceivedMessageSize="500000000" maxBufferPoolSize="500000000" messageEncoding="Mtom">
              <security mode="TransportWithMessageCredential">
                <transport clientCredentialType="Windows" />
              </security>
              <readerQuotas maxDepth="500000000" maxArrayLength="500000000" maxBytesPerRead="500000000" maxNameTableCharCount="500000000" maxStringContentLength="500000000"/>
            </binding>
          </basicHttpBinding>
        </bindings>
        <behaviors>
          <endpointBehaviors>
            <behavior name="myproject_Behavior">
              <dataContractSerializer />
              <synchronousReceive />
            </behavior>
          </endpointBehaviors>
          <serviceBehaviors>
            <behavior name="WebService.WSBehavior">
              <serviceMetadata httpsGetEnabled="true" />
              <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
            <behavior name="WebService.Forms_WSBehavior">
              <serviceMetadata httpGetEnabled="true" />
              <serviceDebug includeExceptionDetailInFaults="false" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <services>
          <service behaviorConfiguration="WebService.WSBehavior" name="IMMSWebService.mywebservice_WS">
            <endpoint address="myproject_WS" binding="basicHttpBinding" bindingConfiguration="basicHttpBinding_Windows" bindingName="basicHttpBinding" contract="WebService.ICommand">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
            <host>
              <timeouts closeTimeout="00:10:00" openTimeout="00:10:00" />
            </host>
          </service>
          <service behaviorConfiguration="WebService.Forms_WSBehavior" name="WebService.Forms_WS">
            <endpoint address="" binding="wsHttpBinding" contract="WebService.IForms_WS">
              <identity>
                <dns value="localhost" />
              </identity>
            </endpoint>
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
          </service>
        </services>
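
    The https-specific pieces in the config above are the security mode, the httpsGetEnabled metadata flag, and the mexHttpsBinding endpoint. A sketch of the http equivalents (assuming Windows credentials should still be required at the transport level; otherwise use mode="None"):

        <!-- 1. basicHttpBinding security: plain http carrying Windows credentials -->
        <security mode="TransportCredentialOnly">
          <transport clientCredentialType="Windows" />
        </security>

        <!-- 2. metadata over http instead of https -->
        <serviceMetadata httpGetEnabled="true" />

        <!-- 3. mex endpoint over http -->
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />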

    Read the article

  • How can I configure Adobe Help so it doesn't chatter so much with Adobe's domain?

    - by Michael Prescott
    Adobe Help that came with Creative Suite 5 and/or Flash Builder Pro is constantly creating network traffic with an Adobe site, www.wip4.adobe.com In the Adobe Help application Preferences, I find that I can change the settings so that I must manually download updates, but apparently the application still likes to call home and chatter non-stop with www.wip4.adobe.com. I could use something like Little Snitch to block all this spyware-like behavior, but I'd really prefer to just change the application's behavior. Is there a hidden setting or configuration file to adjust this behavior to something more appropriate and polite?

    Read the article

  • Word suddenly always on top, how to get rid of this?

    - by Abel
    For one reason or another, my Word suddenly decided to stay always on top of all other windows. This is terribly annoying. The odd thing is: of three documents I have open, two are on top of everything else, and one behaves normal. I found one other mention of this behavior. I wonder whether this is a known bug and whether there's a workaround. Sometimes closing all windows helps, but later the behavior creeps back. Other Office products don't seem to show this behavior. I'm using Microsoft Office Professional Plus 2010, 14.0.4760.1000 (64 bit).

    Read the article

  • Laptop GPU apparently blew up, motherboard doesn't even turn on its power LED. [But..]

    - by leladax
    If I take out the GPU, the motherboard LED turns on, but then, when it attempts to power up and boot, it turns off after 2 seconds (the fans turn on normally in that short period). With the GPUs in, there's not even an attempt to boot. It's an SLI motherboard for a Toshiba (model X200-219). If I take out just one of the GPUs (they are on top of each other), it surprisingly lets the motherboard turn on too (as when both are out), but it still turns off after 2-3 seconds -- same behavior. I wonder if it's the GPU that produces the 'turns off after being on' behavior, and not something else. Has anyone seen this behavior with blown-up GPUs, or could it be something else?

    Read the article

  • Automatically changing a power profile when a laptop (un)docks?

    - by Dan
    I'm looking for a way to automatically change what happens when I close my laptop's lid depending on whether it's in its docking station or not. In an ideal world, the on-close behavior would be nothing (when docked) and sleep (when undocked), but there only seem to be options for behavior when plugged in and when on battery (when it's plugged in but not docked, I'd like it to sleep when closed). My initial idea was to create a new power profile with this behavior, but I can't find a way to have it switch when docked (or for the power system to take its docked status into account at all). Any ideas?
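
    On Windows, the lid-close action can at least be scripted, so a dock/undock event (a Task Scheduler trigger, for example) could flip it. A sketch using powercfg's aliases (0 = do nothing, 1 = sleep):

        :: set the lid-close action for the current scheme while on AC power
        powercfg /setacvalueindex SCHEME_CURRENT SUB_BUTTONS LIDACTION 0
        powercfg /setactive SCHEME_CURRENT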

    Read the article

  • sudo like in Ubuntu (for Debian and other Linuxes)

    - by chris_l
    Hi, I personally like the default sudo behavior of Ubuntu:

      • Root login is impossible
      • The "admin" group is granted "ALL=(ALL) ALL"
      • Users in the "admin" group are asked for their own user password (not a root password) when using sudo

    (I like it because this way there's no root password to be shared among several people. There may be good reasons for other opinions, too -- but that shouldn't be the topic of this question.) Now I'm trying to re-create this behavior in Debian Etch. It basically works, but there's one important difference: Debian doesn't ask for a password. It should ask for the user's password. I edited the sudoers file to be exactly the same as in Ubuntu, and I added a user to the newly created "admin" group. What else do I have to do to get the Ubuntu behavior in Debian (and other Linuxes)? Thanks, Chris
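
    For reference, the Ubuntu pieces being re-created look roughly like this (a sketch; always edit sudoers with visudo):

        # /etc/sudoers -- grant the admin group full sudo rights; by default,
        # sudo then prompts for the invoking user's own password
        %admin  ALL=(ALL) ALL

        # if sudo never prompts, look for a NOPASSWD tag or a "Defaults !authenticate"
        # line elsewhere in the file -- either one suppresses the password prompt

    and direct root logins are prevented by locking the root password (passwd -l root).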

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support.

    When working with data parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches.

    In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let's revisit one of the simple data parallel examples from Part 2:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, we're looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing:

        var options = new ParallelOptions();
        options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

        Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Now, we're restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single-core system from supplying zero; without this check, we'd potentially cause an exception. I also did not hard-code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal.

    Parallel LINQ also supports configuration, and in fact has quite a few more options for configuring the system. The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let's revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
                        .AsParallel()
                        .WithDegreeOfParallelism(maxThreads)
                        .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system.

    PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query serially rather than in parallel. This occurs because many queries, if parallelized, actually cause an overall slowdown compared to a serial processing equivalent. By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN):

      • Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
      • Queries that contain a Take, TakeWhile, Skip, or SkipWhile operator and where indices in the source sequence are not in the original order.
      • Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
      • Queries that contain Concat, unless it is applied to indexable data sources.
      • Queries that contain Reverse, unless applied to an indexable data source.

    If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so:

        var reversed = collection
                          .AsParallel()
                          .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                          .Select(i => i.PerformComputation())
                          .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain.

    Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread.

    This streaming introduces overhead. IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues. There are two extremes of how this could be accomplished, but both extremes have disadvantages.

    The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually.

    On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned.

    The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios.

    However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to:

        var reversed = collection
                          .AsParallel()
                          .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                          .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                          .Select(i => i.PerformComputation())
                          .Reverse();

    On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
                          .AsParallel()
                          .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                          .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
                          .Select(i => i.PerformComputation())
                          .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.

    Read the article

  • Messing with the Team

    - by Robert May
    Good Product Owners will help the team be the best that they can be. Bad Product Owners will mess with the team and won't care about the team. If you're a Product Owner, seek to do good and avoid bad behavior at all costs. Remember, this is for YOUR benefit, and you have much power given to you. Use that power wisely.

    Scope Creep

    The Product Owner has several tools at his disposal to inject scope into an iteration. First, the Product Owner can use defects to inject scope. To do this, they'll tell the team what functionality they want to see in a feature. Then, after the feature is developed, the Product Owner will decide that they don't really like how the functionality behaves. To change it, rather than creating a new story, they'll add a defect. The functionality is correct, as designed, but the Product Owner doesn't like it. By creating the defect, the Product Owner destroys the trust that the team has in the Product Owner. They may not be able to count the story, because the Product Owner changed the story in the iteration, and the team then ends up looking like they have low velocity for something over which they have no control. This is bad. One way to deal with this is to add "Product Owner Time" to the iteration. This will slow the velocity, but then the ScrumMaster can tell stakeholders that this time is strictly in place to deal with bad behavior of the Product Owner.

    Another mechanism often used to inject scope is the concept of directed development. Outside of planning, stand-ups, or any other meeting, the Product Owner will take a developer aside and ask them to complete a task for them. This is bad! The team should be allocating all of their time to development. If the Product Owner asks for a favor, then time that would normally be used for development will be used for a pet project of the Product Owner, and the team will not get credit for this work. Selfish Product Owners do this, and I typically see people who were "managers" behave this way. Authoritarian command-and-control development environments also see this happen. The best thing that can happen is for the team member to report the issue to the ScrumMaster, and for the ScrumMaster to get very aggressive with management and the Product Owner to try to stop the behavior. This may result in the ScrumMaster being fired, but if the behavior continues, Scrum is doomed. This problem is especially bad in cases where the team member's direct supervisor is the Product Owner. I don't recommend that the Product Owner or ScrumMaster have a direct-report relationship with team members, since team members need the ability to say no. To work around this issue, team members need to say no. If that fails, team members need to add extra time to the iteration to deal with the scope creep injection and accept the lower velocity.

    As discussed above, another mechanism for injecting scope is changing acceptance tests after the work is complete. This is similar to adding defects to change scope, and it is bad. To get around it, add time for Product Owner uncertainty to the iteration and make sure that stakeholders are aware of the need to add this time because of the Product Owner.

    Refusing to Prioritize

    Refusing to prioritize causes chaos for the team. From the team's perspective, things that are not important will be worked on while things that the team knows are vital will be ignored. A poor Product Owner will often pick the stories for the iteration on a whim. This leads to the team working on many different aspects of the product and results in a lower velocity, since each iteration the team must switch context to the new area of development. The team will also experience confusion about priorities. In one iteration, Feature X was the highest priority and had to be done. Then, the following iteration, even though parts of Feature X still need to be completed, no stories to address them will be in the iteration. However, three iterations later, Feature X will again become high priority.

    This will cause the team not to trust the Product Owner, and eventually they'll stop caring about the features they implement. They won't know what is important, so to insulate themselves from the ever-changing chaos, they'll become apathetic to all features. Team members are some of the most creative people in a company. By losing their engagement, the company is going to have a substandard product, because the passion for the product won't be in the team. Other signs that the Product Owner refuses to prioritize are that no one outside of the Product Owner is consulted on priorities, and that the product, release, and iteration backlogs are weak or non-existent.

    Dealing with this issue is not easy. This really isn't something the team can fix, short of taking over the role of Product Owner themselves. An appeal to the stakeholders might work, but only if the Product Owner isn't a "manager" themselves. The ScrumMaster needs to protect the team and do what they can to either get the Product Owner to prioritize or have the Product Owner replaced.

    Managing the Team

    A Product Owner that is also the "boss" of team members is a Scrum team that is waiting to fail. If your boss tells you to do something, failing to do that something can cause you to be fired. The team needs the ability to tell the Product Owner NO. If the Product Owner introduces scope creep, the team has a responsibility to tell the Product Owner no. If the Product Owner tries to get the team to commit to more than they can accomplish in an iteration, the team needs the ability to tell the Product Owner no. If the Product Owner is your boss and determines your pay increases, you're probably not ever going to tell them no, and Scrum will likely fail. The team can't do much in this situation.

    Another aspect of "managing the team" that often happens is the Product Owner trying to tell the team how to develop the stories that are in the iteration. This is one reason why I recommend that Product Owners are NOT technical people. That way, the team can come up with the tasks that are needed to accomplish the stories, and the Product Owner won't know better. If the Product Owner is technical, the ScrumMaster will need to take great care to protect the team from the Product Owner changing how the team thinks they need to implement the stories.

    Product Owners can also try to manage the team through their body language. If the team says a task is going to take 6 hours to complete, and the Product Owner disagrees, they will use some kind of sour body language to indicate this disagreement. In weak teams, this may cause the team to revise their estimate down, which will result in them taking longer than estimated and may result in them missing the iteration. The ScrumMaster will need to make sure that the Product Owner doesn't send such messages, and that the team ignores them and estimates what they REALLY think it will take to complete the tasks. Forcing the team to deal with such items in the retrospective can be helpful.

    Absenteeism

    The team is completely dependent upon the Product Owner to develop features for the customer. The Product Owner IS the voice of the customer, and without them, the team will lack direction. Being the Product Owner is a full-time job! If the Product Owner cannot dedicate daily time with the team, a different Product Owner should be found. The Product Owner needs to attend every stand-up, planning meeting, showcase, and retrospective that the team has. The team also must be able to have instant communication with the Product Owner. They must not be required to schedule meetings to speak with their Product Owner. The team must be the highest priority task that the Product Owner has.

    The best way to work around an absent Product Owner is to appoint a new Product Owner in the team. This person will be responsible for making the decisions that the Product Owner should be making and for acting as the liaison to the absent Product Owner. If the delegate Product Owner doesn't have authority to make decisions for the team, Scrum will fail. If the Product Owner is absent, the ScrumMaster should seek to have that Product Owner replaced by someone who has the time and ability to be a real Product Owner.

    Making it Personal

    Too often, Product Owners will become convinced that their ideas are the ones that matter and that anyone who disagrees is making a personal attack on them. Remember that Product Owners will inherently be at odds with many people, simply because they have the need to prioritize. Others will frequently question prioritization, because they only see part of the picture that Product Owners face. Product Owners must have thick skins and keep their egos in check. If they don't, they tend to make things personal, which causes them to become emotional and to take actions that can destroy the trust that team members have in the Product Owner.

    If a Product Owner is making things personal, the best thing that team members can do is reassure them that it's not personal, but be firm about doing what is best for the company and for the users. The ScrumMaster should also spend significant time coaching the Product Owner on how not to react emotionally and how to accept criticism without becoming defensive.

    Conclusion

    I'm sure there are other ways that a Product Owner can mess with the team, but these are the most common that I've seen. I would encourage all Product Owners to seek to be good Product Owners. If you find yourself behaving in any of the bad Product Owner ways, change your behavior today! Your team will thank you. Remember, being a Product Owner is very difficult! Product Owner is one of the most difficult roles in Scrum. However, it can also be one of the most rewarding roles in Scrum, since Product Owners literally see their ideas brought to life on the computer screen. Product Owners need to be very patient, even in the face of criticism, and need to be willing to make tough decisions on priority, but then not become offended when others disagree with those decisions. Companies should spend the time needed to find the right Product Owners for their teams. Doing so will only help the company to write better software.

    Technorati Tags: Scrum, Product Owner

    Read the article

  • When I try to click and launch some of the links set to open in new window, it is being treated as a pop-up window [migrated]

    - by Test Developer
    For the past few days, we have been facing an issue with Chrome's behavior related to opening links set to open in a new tab/window. The details are as follows: I have a collection of links, and each link points to a different resource to be opened in a new tab/window. The code is as follows:

        <a class="cssClass" rid="1114931" href="http://www.domain.com/resources/abc.html" title="Link1" tabindex="4">Link 1</a>

    There are a few checks/filters over accessing the resources, which have been implemented as an onClick handler over the links. In case any of the validations fails, the onClick handler returns false and the default behavior of the link does not happen, i.e. the link does not get opened. One of those (the last) checks includes an AJAX call in sync mode. The code is as follows:

        var link_clickHandler = function(evt /* Event */) {
            var objTarget = jQuery(evt.target);
            if (check1) {
                return false;
            } else if (check2) {
                return false;
            } else if (check3) {
                var blnRetVal = false;
                jQuery.ajax({
                    "async"       : false,
                    "type"        : "GET",
                    "contentType" : "application/json; charset=utf-8",
                    "url"         : "index.php",
                    "data"        : 'resourceid=' + intResourceId,
                    "dataType"    : "json",
                    "forceData"   : true,
                    "success"     : function(data) {
                        if (check1) {
                            blnRetVal = true;
                        }
                    },
                    "error"       : function(error) {
                    }
                });
                return blnRetVal;
            }
        };
        jQuery("a.cssClass").live("click", link_clickHandler);

    ISSUE: The issue is that Chrome is behaving in a very weird manner. In case all of the checks pass and the onClick handler returns true, sometimes the resource gets opened in a new tab/window and sometimes it gets opened as a pop-up (which should never happen). I have tried to capture a pattern but could not succeed. Any solution, or even help in understanding the behavior, would be really appreciated.

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language (TCL) Commands: COMMIT (Commit Transaction)

    As SQL developers we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within that transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases the transaction's locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, the last uncommitted transaction is automatically rolled back.

    Until you commit a transaction:
    · You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see them. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
    · You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

    Note: Many people think that COMMIT means your changes have been written to the data files, but this is wrong. COMMIT means you are declaring your work complete; the Oracle engine then verifies that the transaction is consistent and, once the transaction's redo has been written from the log buffer to the redo log, returns the commit confirmation to the user. The modified blocks in the data buffer cache are written to the data files separately, whenever it is most efficient for the database to do so.

    Before a transaction that modifies data is committed, the following has occurred:
    · Oracle has generated undo information, which contains the old data values changed by the SQL statements of the transaction.
    · Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). Each redo record contains the change to the data block and the change to the rollback block. These changes may go to disk before the transaction is committed.
    · The changes have been made to the database buffers of the SGA. These changes may also go to disk before the transaction is committed.

    Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so; it can happen before the transaction commits or some time after.

    When a transaction is committed, the following occurs:
    1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the transaction's unique system change number (SCN) is assigned and recorded in the table.
    2. The log writer process (LGWR) writes the redo log entries in the SGA's redo log buffers to the redo log file, along with the transaction's SCN. This atomic event constitutes the commit of the transaction.
    3. Oracle releases the locks held on rows and tables.
    4. Oracle marks the transaction complete.

    Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to reach disk before the commit returns to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions need not wait for the redo to be on disk.

    The syntax of the COMMIT statement is:

        COMMIT [WORK] [COMMENT 'your comment'];

    · WORK is optional. The WORK keyword is supported for compliance with standard SQL; the statements COMMIT and COMMIT WORK are equivalent. Example, committing an insert:

        INSERT INTO table_name VALUES (val1, val2);
        COMMIT WORK;

    · COMMENT is also optional. This clause is supported for backward compatibility; Oracle recommends that you use named transactions instead of commit comments. It associates a comment with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. The following statement commits the current transaction and associates a comment with it:

        COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

    · WRITE clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. It can improve performance by reducing latency, eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response-time requirements where the following conditions apply: the volume of update transactions is large, requiring the redo log to be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH options in any order. To commit the same insert operation and instruct the database to buffer the change to the redo log without initiating disk I/O, use the following statement:

        COMMIT WRITE BATCH;

    Note: If you omit this clause, the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, commit records are written to disk before control is returned to the user.

    WAIT | NOWAIT: use these options to specify when control returns to the user. The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. (A crash occurring after a successful write to the log can still prevent the success message from reaching the client, in which case the client cannot tell whether the transaction committed.) The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed; this behavior can increase transaction throughput. With WAIT, if the commit message is received, you can be sure that no data has been lost. Caution: with NOWAIT, a crash occurring after the commit message is received, but before the redo log records are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, the transaction commits with WAIT behavior.

    IMMEDIATE | BATCH: use these options to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log at once; this forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log along with that of other concurrently executing transactions; when sufficient redo information has been collected, a disk write of the redo log is initiated. This behavior is called "group commit", because redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, the transaction commits with IMMEDIATE behavior.

    · FORCE clause. Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.
    · In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to assign the transaction a specific system change number (SCN); if you omit integer, the transaction is committed using the current SCN.
    · The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where 'string' is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
    · Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

    Example, forcing an in-doubt transaction: the following statement manually commits a hypothetical in-doubt distributed transaction. Query DBA_2PC_PENDING to find the IDs of such transactions; you must have the appropriate privileges to issue this statement.

        COMMIT FORCE '22.57.53';
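    To close, here is a minimal two-session sketch of the visibility rule described earlier (the table and column names are hypothetical, not from the article):

        -- Session 1
        INSERT INTO orders (order_id, status) VALUES (1, 'NEW');
        SELECT COUNT(*) FROM orders;   -- Session 1 sees its own uncommitted row

        -- Session 2, before Session 1 commits
        SELECT COUNT(*) FROM orders;   -- the uncommitted row is not visible here

        -- Session 1
        COMMIT;

        -- Session 2, after the commit
        SELECT COUNT(*) FROM orders;   -- the row is now visible to other sessions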

    Read the article

  • How to communicate within a company what is being Continually Deployed

    - by Francis Spor
    I work for a small development company: 20 people in the entire company, 3 in actual development. We've adopted CD for our commits to trunk, and from a code-management and uptime perspective it works great. However, we're getting flak from our support staff and marketing department, who feel they aren't getting enough lead time on new features or notification of bug fixes that could change behavior. Part of why we in development love the CD system is speed: we fix the bug or add the quick feature, close the Bugz ticket, and move on with our day to the next item. All members of our company are now on HipChat at all times, and when a deployment occurs, a message is sent to a room that all company members are in, letting them know what was just deployed (it simply shows the commit messages from tip back to the last recorded deployment; see the sketch below). We in development also try to make sure that when a change modifies the UI or public-facing behavior, we post a screenshot to the All Company room and explain the behavior change, inviting pushback or concerns. Often the response is silence; sometimes there are a few minor questions, but nothing that needs to stop the deployment. What I'm wondering is: how do other users of the CD method handle notification of new features and changes to the non-development parts of the company, and eventually to customers in the world? Thanks, Francis
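    For reference, a minimal sketch of the kind of deploy announcement described above, assuming deployments are marked with a movable git tag (the tag name, webhook variable, and message format are all assumptions, not from the original post):

        #!/bin/sh
        # A sketch only: real commit messages would need proper JSON escaping.
        # Collect the commit messages since the last recorded deployment.
        NOTES=$(git log --oneline last-deploy..HEAD | tr '\n' ' ')
        # Post them to the team chat room (endpoint supplied via environment).
        curl -s -X POST -H "Content-Type: application/json" \
             -d "{\"message\": \"Deployed: ${NOTES}\"}" \
             "$CHAT_ROOM_WEBHOOK_URL"
        # Move the marker so the next announcement starts from this deployment.
        git tag -f last-deploy HEAD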

    Read the article

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#):

        public interface ITranslationService
        {
            string GetTranslation(string key, CultureInfo targetLanguage);
            // some other methods...
        }

    A first simple implementation of this interface already exists and goes to the database for every method call. For a UI that is translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for that language and cache them, then serve all translation requests from the cache. I thought about implementing this new behavior as a decorator, because all the other methods of the interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use the decorated instance's GetTranslation at all to get all translations of a certain language; it would fire its own query against the database. This breaks the decorator pattern, because every piece of functionality provided by the decorated instance is simply skipped, and it becomes a real problem if other decorators are involved. My understanding is that a decorator should be additive; in this case, the decorator replaces the behavior of the decorated instance. I can't really think of a nice solution for this: how would you solve it? Everything is allowed, even a complete re-design of ITranslationService itself.
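    One direction an answer could take, sketched below: separate the bulk fetch from the per-key lookup, so the caching behavior composes a repository instead of half-replacing a decorated service. Everything here is illustrative; ITranslationRepository and CachingTranslationService are hypothetical names, and the fallback of returning the key when no translation exists is an arbitrary choice.

        using System.Collections.Generic;
        using System.Globalization;

        // Hypothetical lower-level abstraction: one query per language, no per-key lookups.
        public interface ITranslationRepository
        {
            IDictionary<string, string> GetAllTranslations(CultureInfo language);
        }

        // The caching behavior now lives in a regular implementation of the
        // service, composed with the repository, rather than in a decorator
        // that bypasses the instance it wraps.
        public class CachingTranslationService : ITranslationService
        {
            private readonly ITranslationRepository _repository;
            private readonly Dictionary<CultureInfo, IDictionary<string, string>> _cache =
                new Dictionary<CultureInfo, IDictionary<string, string>>();

            public CachingTranslationService(ITranslationRepository repository)
            {
                _repository = repository;
            }

            public string GetTranslation(string key, CultureInfo targetLanguage)
            {
                IDictionary<string, string> translations;
                if (!_cache.TryGetValue(targetLanguage, out translations))
                {
                    // First request for this language: one bulk query, then cache.
                    translations = _repository.GetAllTranslations(targetLanguage);
                    _cache[targetLanguage] = translations;
                }

                string translation;
                return translations.TryGetValue(key, out translation)
                    ? translation
                    : key; // arbitrary fallback: echo the key when no translation exists
            }

            // The other ITranslationService methods would be implemented here,
            // either against the cache or via further repository calls.
        }

    With this split, ordinary decorators of ITranslationService (logging, metrics, and so on) remain purely additive, because no implementation needs to bypass the instance it wraps.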

    Read the article

  • Keyboard-shortcut key-press-detection sensitivity settings

    - by Juve
    Last week I switched from Ubuntu 10.10 to 12.04, and after setting up my keyboard shortcuts, e.g., CTRL+ALT+E for my favorite editor and CTRL+ALT+X for the terminal, I noticed that the behavior when pressing the appropriate keys had changed somehow. I know this is very subjective, and I am not 100% sure whether I have simply become too lazy with my keyboard, but here is what I noticed. To run a shortcut, you usually press the modifiers first and then press the alphanumeric key while holding them. If I hold the modifiers down very deliberately and press the alphanumeric key afterwards, everything works fine. However, I noticed that I often release the modifiers a bit too early. In Ubuntu 10.10 (metacity/compiz) my keyboard shortcuts would still execute and my tools would pop up; this no longer works in 12.04. Nevertheless, I still find the old behavior more intuitive and would like to have it back. In a nutshell: is there a parameter to control the sensitivity of shortcut key-press detection? I have already searched the Ubuntu keyboard options and searched for "keyboard" in gconf-editor, but could not find any hints so far.

    Read the article

  • Cleaning Up Online Games with Positive Enforcement

    - by Jason Fitzpatrick
    Anyone who has played online multiplayer games, especially those focused on combat, can attest to how caustic other players can be. League of Legends creators are fighting that, rather successfully, with a positive-reinforcement honor system. The Mary Sue reports: Here’s the background: Six months ago, Riot established Team Player Behavior — affectionately called Team PB&J — a group of experts in psychology, neuroscience, and statistics (already, I am impressed). At the helm is Jeffrey Lin, better known as Dr. Lyte, Riot’s lead designer of social systems. As quoted in a recent article at Polygon: We want to show other companies and other games that it is possible to tackle player behavior, and with certain systems and game design tools, we can shape players to be more positive. Which brings us to the Honor system. Honor is a way for players to reward each other for good behavior. This is divvied up into four categories: Friendly, Helpful, Teamwork, and Honorable Opponent. At the end of a match, players can hand out points to those they deem worthy. These points are reflected on players’ profiles, but do not result in any in-game bonuses or rewards (though this may change in the future). All Honor does is show that you played nicely.

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >