Search Results

Search found 2677 results on 108 pages for 'eren trigger'.


  • WTF why does this SQL work in VS but not in CODE?

    - by acidzombie24
    The line cmd.ExecuteNonQuery(); fails when cmd.CommandText is set to: CREATE TRIGGER subscription_trig_0 ON subscription AFTER INSERT AS UPDATE user_data SET msg_count = msg_count + 1 FROM user_data JOIN INSERTED ON user_data.id = INSERTED.recipient; The exception: Incorrect syntax near the keyword 'TRIGGER'. Then, using VS 2010 connected to the very same file (an .mdf file), I run the query above and get a success message. WTF!

    Read the article

  • Triggering jquery event when an element appears on screen

    - by 0al0
    I want to show a fade effect as soon as the element appears on screen. There is a lot of content before this element, so if I trigger the effect on document.ready, at certain resolutions the visitors won't see it. Is it possible to trigger an event when, after scrolling down, the element becomes visible? I am almost sure I have seen this effect before, but have no idea how to achieve it. Thank you!

    Read the article

  • Why does this SQL work in VS but not in code?

    - by acidzombie24
    The line cmd.ExecuteNonQuery(); fails when cmd.CommandText is set to: CREATE TRIGGER subscription_trig_0 ON subscription AFTER INSERT AS UPDATE user_data SET msg_count = msg_count + 1 FROM user_data JOIN INSERTED ON user_data.id = INSERTED.recipient; The exception: Incorrect syntax near the keyword 'TRIGGER'. Then, using VS 2010 connected to the very same file (an .mdf file), I run the query above and get a success message.

    Read the article

  • HTTP triggers for Postgres

    - by HeineyBehinds
    I'm trying to write a Postgres trigger such that when a configuration table is updated, a backend component is notified and can handle the change. I know that Oracle has the concept of a web/HTTP trigger, where you can execute an HTTP GET from the Oracle instance itself to a URL that can then handle the request at the application layer. I'm wondering if Postgres (v. 9.0.5) has the same feature, or comes with anything similar (and, subsequently, how to set it up/configure it)?
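
    A hedged sketch of one commonly used alternative in Postgres (an assumption on my part; the question itself does not confirm any approach): PostgreSQL 9.0 has no built-in web/HTTP trigger, but LISTEN/NOTIFY lets the trigger publish a change event that the backend component picks up and turns into the HTTP call at the application layer. Table and column names below are hypothetical:

        -- Hypothetical names: a "configuration" table with a "config_key" column.
        -- PostgreSQL 9.0 supports notification payloads via pg_notify().
        CREATE OR REPLACE FUNCTION notify_config_change() RETURNS trigger AS $$
        BEGIN
            PERFORM pg_notify('config_changed', NEW.config_key);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER config_change_notify
            AFTER UPDATE ON configuration
            FOR EACH ROW EXECUTE PROCEDURE notify_config_change();

    The backend component then runs LISTEN config_changed on its own connection and reacts to each notification, for example by issuing the HTTP GET itself.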

    Read the article

  • How to realize this in Transact SQL ( SQL Server 2008 )

    - by David
    I just want an update trigger like this PostgreSQL version... It seems to me there is no NEW. and OLD. in MSSQL? CREATE OR REPLACE FUNCTION "public"."ts_create" () RETURNS trigger AS $$ DECLARE BEGIN NEW.ctime := now(); RETURN NEW; END; $$ LANGUAGE plpgsql; Googled already, but nothing to be found unfortunately... Ideas?
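
    For orientation, a minimal T-SQL sketch of the same idea (table and column names are hypothetical, not from the question): SQL Server triggers are statement-level and expose the INSERTED and DELETED pseudo-tables instead of per-row NEW/OLD variables, so the trigger joins back to the base table to stamp the timestamp.

        -- Hypothetical SQL Server 2008 counterpart of the ts_create trigger.
        -- Use AFTER UPDATE instead of AFTER INSERT for the update case.
        CREATE TRIGGER ts_create ON dbo.my_table AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE t
            SET ctime = GETDATE()
            FROM dbo.my_table AS t
            JOIN INSERTED AS i ON t.id = i.id;
        END;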

    Read the article

  • Trigger extra parameter

    - by fire
    According to the jQuery manual you can send extra parameters (as an array) when calling trigger(). I am using this at the moment: $('#page0').trigger('click', [true]); How would I pick up whether the parameter has come through or not when using this? $('ul.pages li a').click(function() { // Do stuff if true has been passed as an extra parameter });

    Read the article

  • Overview of SOA Diagnostics in 11.1.1.6

    - by ShawnBailey
    What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. RDA (Remote Diagnostic Agent) RDA is a standalone tool that is used to collect both static configuration and dynamic runtime information from the SOA environment. RDA is generally run manually from the command line against a domain or single server. When opening a new Service Request, including an RDA collection can dramatically decrease the back and forth required to collect logs and configuration information for Support. After installing RDA you configure it to use the SOA Suite module as decribed in the referenced resources. The SOA module includes the Oracle WebLogic Server (WLS) module by default in order to include all of the relevant information for the environment. In addition to this basic configuration there is also an advanced mode where you can set the number of thread dumps for the collections, log files, Incidents, etc. When would you use it? When creating a Service Request or otherwise working with Oracle resources on an issue, capturing environment snapshots to baseline your configuration or to diagnose an issue on your own. How is it related to the other tools? RDA is related to DFW in that it collects the last 10 Incidents from the server by default. In a similar manner, RDA is related to ODL through its collection of the diagnostic logs and these may contain information from Selective Tracing sessions. 
Examples of what it currently collects: (for details please see the links in the Resources section) Diagnostic Logs (ODL) Diagnostic Framework Incidents (DFW) SOA MDS Deployment Descriptors SOA Repository Summary Statistics Thread Dumps Complete Domain Configuration RDA Resources: Webcast Recording: Using RDA with Oracle SOA Suite 11g Blog Post: Diagnose SOA Suite 11g Issues Using RDA Download RDA How to Collect Analysis Information Using RDA for Oracle SOA Suite 11g Products [ID 1350313.1] How to Collect Analysis Information Using RDA for Oracle SOA Suite and BPEL Process Manager 11g [ID 1352181.1] Getting Started With Remote Diagnostic Agent: Case Study - Oracle WebLogic Server (Video) [ID 1262157.1] top DFW (Diagnostic Framework) DFW provides the ability to collect specific information for a particular problem when that problem occurs. DFW is included with your SOA Suite installation and deployed to the domain. Let's define the components of DFW. Diagnostic Dumps: Specific diagnostic collections that are defined at either the 'system' or product level. Examples would be diagnostic logs or thread dumps. Incident: A collection of Diagnostic Dumps associated with a particular problem Log Conditions: An Oracle Diagnostic Logging event that DFW is configured to listen for. If the event is identified then an Incident will be created. WLDF Watch: The WebLogic Diagnostic Framework or 'WLDF' is not a component of DFW, however, it can be a source of DFW Incident creation through the use of a 'Watch'. WLDF Notification: A Notification is a component of WLDF and is the link between the Watch and DFW. You can configure multiple Notification types in WLDF and associate them with your Watches. 'FMWDFW-notification' is available to you out of the box to allow for DFW notification of Watch execution. Rule: Defines a WLDF Watch or Log Condition for which we want to associate a set of Diagnostic Dumps. When triggered the specified dumps will be collected and added to the Incident Rule Action: Defines the specific Diagnostic Dumps to collect for a particular rule ADR: Automatic Diagnostics Repository; Defined for every server in a domain. This is where Incidents are stored Now let's walk through a simple flow: Oracle Web Services error message OWS-04086 (SOAP Fault) is generated on managed server 1 DFW Log Condition for OWS-04086 evaluates to TRUE DFW creates a new Incident in the ADR for managed server 1 DFW executes the specified Diagnostic Dumps and adds the output to the Incident In this case we'll grab the diagnostic log and thread dump. We might also want to collect the WSDL binding information and SOA audit trail When would you use it? When you want to automatically collect Diagnostic Dumps at a particular time using a trigger or when you want to manually collect the information. In either case it can be readily uploaded to Oracle Support through the Service Request. How is it related to the other tools? DFW generates Incidents which are collections of Diagnostic Dumps. One of the system level Diagonstic Dumps collects the current server diagnostic log which is generated by ODL and can contain information from Selective Tracing sessions. Incidents are included in RDA collections by default and ADRCI is a tool that is used to package an Incident for upload to Oracle Support. In addition, both ODL and DMS can be used to trigger Incident creation through DFW. The conditions and rules for generating Incidents can become quite complicated and the below resources go into more detail. 
A simpler approach to leveraging at least the Diagnostic Dumps is through WLST (WebLogic Scripting Tool) where there are commands to do the following: Create an Incident Execute a single Diagnostic Dump Describe a Diagnostic Dump List the available Diagnostic Dumps The WLST option offers greater control in what is generated and when. It can be a great help when collecting information for Support. There are overlaps with RDA, however, DFW is geared towards collecting specific runtime information when an issue occurs while existing Incidents are collected by RDA. There are 3 WLDF Watches configured by default in a SOA Suite 11g domain: Stuck Threads, Unchecked Exception and Deadlock. These Watches are enabled by default and will generate Incidents in ADR. They are configured to reset automatically after 30 seconds so they have the potential to create multiple Incidents if these conditions are consistent. The Incidents generated by these Watches will only contain System level Diagnostic Dumps. These same System level Diagnostic Dumps will be included in any application scoped Incident as well. Starting in 11.1.1.6, SOA Suite is including its own set of application scoped Diagnostic Dumps that can be executed from WLST or through a WLDF Watch or Log Condition. These Diagnostic Dumps can be added to an Incident such as in the earlier example using the error code OWS-04086. soa.config: MDS configuration files and deployed-composites.xml soa.composite: All artifacts related to the deployed composite soa.wsdl: Summary of endpoints configured for the composite soa.edn: EDN configuration summary if applicable soa.db: Summary DB information for the SOA repository soa.env: Coherence cluster configuration summary soa.composite.trail: Partial audit trail information for the running composite The current release of RDA has the option to collect the soa.wsdl and soa.composite Diagnostic Dumps. More Diagnostic Dumps for SOA Suite products are planned for future releases along with enhancements to DFW itself. DFW Resources: Webcast Recording: SOA Diagnostics Sessions: Diagnostic Framework Diagnostic Framework Documentation DFW WLST Command Reference Documentation for SOA Diagnostic Dumps in 11.1.1.6 top Selective Tracing Selective Tracing is a facility available starting in version 11.1.1.4 that allows you to increase the logging level for specific loggers and for a specific context. What this means is that you have greater capability to collect needed diagnostic log information in a production environment with reduced overhead. For example, a Selective Tracing session can be executed that only increases the log level for one composite, only one logger, limited to one server in the cluster and for a preset period of time. In an environment where dozens of composites are deployed this can dramatically reduce the volume and overhead of the logging without sacrificing relevance. Selective Tracing can be administered either from Enterprise Manager or through WLST. WLST provides a bit more flexibility in terms of exactly where the tracing is run. When would you use it? When there is an issue in production or another environment that lends itself to filtering by an available context criteria and increasing the log level globally results in too much overhead or irrelevant information. The information is written to the server diagnostic log and is exportable from Enterprise Manager How is it related to the other tools? Selective Tracing output is written to the server diagnostic log. 
This log can be collected by a system level Diagnostic Dump using DFW or through a default RDA collection. Selective Tracing also heavily leverages ODL fields to determine what to trace and to tag information that is part of a particular tracing session. Available Context Criteria: Application Name Client Address Client Host Composite Name User Name Web Service Name Web Service Port Selective Tracing Resources: Webcast Recording: SOA Diagnostics Session: Using Selective Tracing to Diagnose SOA Suite Issues How to Use Selective Tracing for SOA [ID 1367174.1] Selective Tracing WLST Reference top DMS (Dynamic Monitoring Service) DMS exposes runtime information for monitoring. This information can be monitored in two ways: Through the DMS servlet As exposed MBeans The servlet is deployed by default and can be accessed through http://<host>:<port>/dms/Spy (use administrative credentials to access). The landing page of the servlet shows identical columns of what are known as Noun Types. If you select a Noun Type you will see a table in the right frame that shows the attributes (Sensors) for the Noun Type and the available instances. SOA Suite has several exposed Noun Types that are available for viewing through the Spy servlet. Screenshots of the Spy servlet are available in the Knowledge Base article How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS). Every Noun instance in the runtime is exposed as an MBean instance. As such they are generally available through an MBean browser and available for monitoring through WLDF. You can configure a WLDF Watch to monitor a particular attribute and fire a notification when the threshold is exceeded. A WLDF Watch can use the out of the box DFW notification type to notify DFW to create an Incident. When would you use it? When you want to monitor a metric or set of metrics either manually or through an automated system. When you want to trigger a WLDF Watch based on a metric exposed through DMS. How is it related to the other tools? DMS metrics can be monitored with WLDF Watches which can in turn notify DFW to create an Incident. DMS Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How to Reset a SOA 11g DMS Metric DMS Documentation top ODL (Oracle Diagnostic Logging) ODL is the primary facility for most Fusion Middleware applications to log what they are doing. Whenever you change a logging level through Enterprise Manager it is ultimately exposed through ODL and written to the server diagnostic log. A notable exception to this is WebLogic Server which uses its own log format / file. ODL logs entries in a consistent, structured way using predefined fields and name/value pairs. Here's an example of a SOA Suite entry: [2012-04-25T12:49:28.083-06:00] [AdminServer] [ERROR] [] [oracle.soa.bpel.engine] [tid: [ACTIVE].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: ] [ecid: 0963fdde7e77631c:-31a6431d:136eaa46cda:-8000-00000000000000b4,0] [errid: 41] [WEBSERVICE_PORT.name: BPELProcess2_pt] [APP: soa-infra] [composite_name: TestProject2] [J2EE_MODULE.name: fabric] [WEBSERVICE.name: bpelprocess1_client_ep] [J2EE_APP.name: soa-infra] Error occured while handling a post operation[[ When would you use it? You'll use ODL almost every time you want to identify and diagnose a problem in the environment. The entries are written to the server diagnostic log. How is it related to the other tools? The server diagnostic logs are collected by DFW and RDA. 
Selective Tracing writes its information to the diagnostic log as well. Additionally, DFW log conditions are triggered by ODL log events. ODL Resources: ODL Documentation top ADR (Automatic Diagnostics Repository) ADR is not a tool in and of itself but is where DFW stores the Incidents it creates. Every server in the domain has an ADR location which can be found under <SERVER_HOME>/adr. This is referred to the as the ADR 'Base' location. ADR also has what are known as 'Home' locations. Example: You have a domain called 'myDomain' and an associated managed server called 'myServer'. Your admin server is called 'AdminServer'. Your domain home directory is called 'myDomain' and it contains a 'servers' directory. The 'servers' directory contains a directory for the managed server called 'myServer' and here is where you'll find the 'adr' directory which is the ADR 'Base' location for myServer. To get to the ADR 'Home' locations we drill through a few levels: diag/ofm/myDomain/ In an 11.1.1.6 SOA Suite domain you will see 2 directories here, 'myServer' and 'soa-infra'. These are the ADR 'Home' locations. 'myServer' is the 'system' ADR home and contains system level Incidents. 'soa-infra' is the name that SOA Suite used to register with DFW and this ADR home contains SOA Suite related Incidents Each ADR home location contains a series of directories, one of which is called 'incident'. This is where your Incidents are stored. When would you use it? It's a good idea to check on these locations from time to time to see whether a lot of Incidents are being generated. They can be cleaned out by deleting the Incident directories or through the ADRCI tool. If you know that an Incident is of particular interest for an issue you're working with Oracle you can simply zip it up and provide it. How does it relate to the other tools? ADR is obviously very important for DFW since it's where the Incidents are stored. Incidents contain Diagnostic Dumps that may relate to diagnostic logs (ODL) and DMS metrics. The most recent 10 Incident directories are collected by RDA by default and ADRCI relies on the ADR locations to help manage the contents. top ADRCI (Automatic Diagnostics Repository Command Interpreter) ADRCI is a command line tool for packaging and managing Incidents. When would you use it? When purging Incidents from an ADR Home location or when you want to package an Incident along with an offline RDA collection for upload to Oracle Support. How does it relate to the other tools? ADRCI contains a tool called the Incident Packaging System or IPS. This is used to package an Incident for upload to Oracle Support through a Service Request. Starting in 11.1.1.6 IPS will attempt to collect an offline RDA collection and include it with the Incident package. This will only work if Perl is available on the path, otherwise it will give a warning and package only the Incident files. ADRCI Resources: How to Use the Incident Packaging System (IPS) in SOA 11g [ID 1381259.1] ADRCI Documentation top WLDF (WebLogic Diagnostic Framework) WLDF is functionality available in WebLogic Server since version 9. Starting with FMw 11g a link has been added between WLDF and the pre-existing DFW, the WLDF Watch Notification. Let's take a closer look at the flow: There is a need to monitor the performance of your SOA Suite message processing A WLDF Watch is created in the WLS console that will trigger if the average message processing time exceeds 2 seconds. This metric is monitored through a DMS MBean instance. 
The out of the box DFW Notification (the Notification is called FMWDFW-notification) is added to the Watch. Under the covers this notification is of type JMX. The Watch is triggered when the threshold is exceeded and fires the Notification. DFW has a listener that picks up the Notification and evaluates it according to its rules, etc When it comes to automatic Incident creation, WLDF is a key component with capabilities that will grow over time. When would you use it? When you want to monitor the WLS server log or an MBean metric for some condition and fire a notification when the Watch is triggered. How does it relate to the other tools? WLDF is used to automatically trigger Incident creation through DFW using the DFW Notification. WLDF Resources: How to Monitor Runtime SOA Performance With the Dynamic Monitoring Service (DMS) [ID 1368291.1] How To Script the Creation of a SOA WLDF Watch in 11g [ID 1377986.1] WLDF Documentation top

    Read the article

  • Tab Sweep: Java EE 6 Scopes, Observer, SSL, Workshop, Virtual Server, JDBC Connection Validation

    - by arungupta
    Recent Tips and News on Java, Java EE 6, GlassFish & more : • How Java EE 6 Scopes Affect User Interactions (DevX.com) • Why is Java EE 6 better than Spring ? (Arun Gupta) • JavaEE Revisits Design Patterns: Observer (Murat Yener) • Getting started with Glassfish V3 and SSL (JavaDude) • Software stacks market share within Jelastic: March 2012 (Jelastic) • All aboard the Java EE 6 Love Boat! (Bert Ertman) • Full stack Java EE workshop (Kito Mann) • Create a virtual server from console in glassfish (Hector Guzman) • Glassfish – JDBC Connection Validation explained (Alexandru Ersenie) • Automatically setting the label of a component in JSF 2 (Arjan Tijms) • JSF2 + Primefaces3 + Spring3 & Hibernate4 Integration Project (Eren Avsarogullari) • THE EXECUTABLE FEEL OF JAX-RS 2.0 CLIENT (Adam Bien) Here are some tweets from this week ... web-app dtd(s) on http://t.co/4AN0057b R.I.P. using http://t.co/OTZrOEEr instead. Thank you Oracle! finally got GlassFish and Cassandra running embedded so I can unit test my app #jarhell #JavaEE6 + #NetBeans is really a pleasure to work with! Reading latest chapter in #Spring vs #JavaEE wars https://t.co/RqlGmBG9 (and yes, #JavaEE6 is better :P) @javarebel very easy install and very easy to use in combination with @netbeans and @glassfish. Save your time.

    Read the article

  • RequestValidation Changes in ASP.NET 4.0

    - by Rick Strahl
    There’s been a change in the way the ValidateRequest attribute on WebForms works in ASP.NET 4.0. I noticed this today while updating a post on my WebLog all of which contain raw HTML and so all pretty much trigger request validation. I recently upgraded this app from ASP.NET 2.0 to 4.0 and it’s now failing to update posts. At first this was difficult to track down because of custom error handling in my app – the custom error handler traps the exception and logs it with only basic error information so the full detail of the error was initially hidden. After some more experimentation in development mode the error that occurs is the typical ASP.NET validate request error (‘A potentially dangerous Request.Form value was detetected…’) which looks like this in ASP.NET 4.0: At first when I got this I was real perplexed as I didn’t read the entire error message and because my page does have: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="NewEntry.aspx.cs" Inherits="Westwind.WebLog.NewEntry" MasterPageFile="~/App_Templates/Standard/AdminMaster.master" ValidateRequest="false" EnableEventValidation="false" EnableViewState="false" %> WTF? ValidateRequest would seem like it should be enough, but alas in ASP.NET 4.0 apparently that setting alone is no longer enough. Reading the fine print in the error explains that you need to explicitly set the requestValidationMode for the application back to V2.0 in web.config: <httpRuntime executionTimeout="300" requestValidationMode="2.0" /> Kudos for the ASP.NET team for putting up a nice error message that tells me how to fix this problem, but excuse me why the heck would you change this behavior to require an explicit override to an optional and by default disabled page level switch? You’ve just made a relatively simple fix to a solution a nasty morass of hard to discover configuration settings??? The original way this worked was perfectly discoverable via attributes in the page. Now you can set this setting in the page and get completely unexpected behavior and you are required to set what effectively amounts to a backwards compatibility flag in the configuration file. It turns out the real reason for the .config flag is that the request validation behavior has moved from WebForms pipeline down into the entire ASP.NET/IIS request pipeline and is now applied against all requests. Here’s what the breaking changes page from Microsoft says about it: The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing. In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request. As a result, request validation errors might now occur for requests that previously did not trigger errors. 
To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file: <httpRuntime requestValidationMode="2.0" /> However, we recommend that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors. Ok, so ValidateRequest of the form still works as it always has but it’s actually the ASP.NET Event Pipeline, not WebForms that’s throwing the above exception as request validation is applied to every request that hits the pipeline. Creating the runtime override removes the HttpRuntime checking and restores the WebForms only behavior. That fixes my immediate problem but still leaves me wondering especially given the vague wording of the above explanation. One thing that’s missing in the description is above is one important detail: The request validation is applied only to application/x-www-form-urlencoded POST content not to all inbound POST data. When I first read this this freaked me out because it sounds like literally ANY request hitting the pipeline is affected. To make sure this is not really so I created a quick handler: public class Handler1 : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World <hr>" + context.Request.Form.ToString()); } public bool IsReusable { get { return false; } } } and called it with Fiddler by posting some XML to the handler using a default form-urlencoded POST content type: and sure enough – hitting the handler also causes the request validation error and 500 server response. Changing the content type to text/xml effectively fixes the problem however, bypassing the request validation filter so Web Services/AJAX handlers and custom modules/handlers that implement custom protocols aren’t affected as long as they work with special input content types. It also looks that multipart encoding does not trigger event validation of the runtime either so this request also works fine: POST http://rasnote/weblog/handler1.ashx HTTP/1.1 Content-Type: multipart/form-data; boundary=------7cf2a327f01ae User-Agent: West Wind Internet Protocols 5.53 Host: rasnote Content-Length: 40 Pragma: no-cache <xml>asdasd</xml>--------7cf2a327f01ae *That* probably should trigger event validation – since it is a potential HTML form submission, but it doesn’t. New Runtime Feature, Global Scope Only? Ok, so request validation is now a runtime feature but sadly it’s a feature that’s scoped to the ASP.NET Runtime – effective scope to the entire running application/app domain. You can still manually force validation using Request.ValidateInput() which gives you the option to do this in code, but that realistically will only work with the requestValidationMode set to V2.0 as well since the 4.0 mode auto-fires before code ever gets a chance to intercept the call. Given all that, the new setting in ASP.NET 4.0 seems to limit options and makes things more difficult and less flexible. Of course Microsoft gets to say ASP.NET is more secure by default because of it but what good is that if you have to turn off this flag the very first time you need to allow one single request that bypasses request validation??? This is really shortsighted design… <sigh>© Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Creating a Build Definition using the TFS 2010 API

    - by Jakob Ehn
    In this post I will show how to create a new build definition in TFS 2010 using the TFS API. When creating a build definition manually, using Team Explorer, the necessary steps are lined out in the New Build Definition Wizard:     So, lets see how the code looks like, using the same order. To start off, we need to connect to TFS and get a reference to the IBuildServer object: TfsTeamProjectCollection server = newTfsTeamProjectCollection(newUri("http://<tfs>:<port>/tfs")); server.EnsureAuthenticated(); IBuildServer buildServer = (IBuildServer) server.GetService(typeof (IBuildServer)); General First we create a IBuildDefinition object for the team project and set a name and description for it: var buildDefinition = buildServer.CreateBuildDefinition(teamProject); buildDefinition.Name = "TestBuild"; buildDefinition.Description = "description here..."; Trigger Next up, we set the trigger type. For this one, we set it to individual which corresponds to the Continuous Integration - Build each check-in trigger option buildDefinition.ContinuousIntegrationType = ContinuousIntegrationType.Individual; Workspace For the workspace mappings, we create two mappings here, where one is a cloak. Note the user of $(SourceDir) variable, which is expanded by Team Build into the sources directory when running the build. buildDefinition.Workspace.AddMapping("$/Path/project.sln", "$(SourceDir)", WorkspaceMappingType.Map); buildDefinition.Workspace.AddMapping("$/OtherPath/", "", WorkspaceMappingType.Cloak); Build Defaults In the build defaults, we set the build controller and the drop location. To get a build controller, we can (for example) use the GetBuildController method to get an existing build controller by name: buildDefinition.BuildController = buildServer.GetBuildController(buildController); buildDefinition.DefaultDropLocation = @\\SERVER\Drop\TestBuild; Process So far, this wasy easy. Now we get to the tricky part. TFS 2010 Build is based on Windows Workflow 4.0. The build process is defined in a separate .XAML file called a Build Process Template. By default, every new team team project containtwo build process templates called DefaultTemplate and UpgradeTemplate. In this sample, we want to create a build definition using the default template. We use te QueryProcessTemplates method to get a reference to the default for the current team project   //Get default template var defaultTemplate = buildServer.QueryProcessTemplates(teamProject).Where(p => p.TemplateType == ProcessTemplateType.Default).First(); buildDefinition.Process = defaultTemplate;   There are several build process templates that can be set for the default build process template. Only one of these are required, the ProjectsToBuild parameters which contains the solution(s) and configuration(s) that should be built. To set this info, we use the ProcessParameters property of thhe IBuildDefinition interface. The format of this property is actually just a serialized dictionary (IDictionary<string, object>) that maps a key (parameter name) to a value which can be any kind of object. This is rather messy, but fortunately, there is a helper class called WorkflowHelpers inthe Microsoft.TeamFoundation.Build.Workflow namespace, that simplifies working with this persistence format a bit. 
The following code shows how to set the BuildSettings information for a build definition: //Set process parameters var process = WorkflowHelpers.DeserializeProcessParameters(buildDefinition.ProcessParameters); //Set BuildSettings properties BuildSettings settings = new BuildSettings(); settings.ProjectsToBuild = new StringList("$/pathToProject/project.sln"); settings.PlatformConfigurations = new PlatformConfigurationList(); settings.PlatformConfigurations.Add(new PlatformConfiguration("Any CPU", "Debug")); process.Add("BuildSettings", settings); buildDefinition.ProcessParameters = WorkflowHelpers.SerializeProcessParameters(process); The other build process parameters of a build definition can be set using the same approach. Retention Policy This one is easy, we just clear the default settings and set our own: buildDefinition.RetentionPolicyList.Clear(); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Succeeded, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Failed, 10, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.Stopped, 1, DeleteOptions.All); buildDefinition.AddRetentionPolicy(BuildReason.Triggered, BuildStatus.PartiallySucceeded, 10, DeleteOptions.All); Save It! And we’re done, let’s save the build definition: buildDefinition.Save(); That’s it!

    Read the article

  • DataGridView row is still dirty after committing changes

    - by Ecyrb
    DataGridView.IsCurrentRowDirty remains true after I commit changes to the database. I want to set it to false so it doesn't trigger RowValidating when it loses focus. I have a DataGridView bound to a BindingList<T>. I handle the CellEndEdit event and save changes to the database. After saving those changes I would like DataGridView.IsCurrentRowDirty to be set to false, since all cells in the row are up-to-date; however, it remains true. This causes problems for me because when the row does lose focus it will trigger RowValidating, which I handle and validate all three cells in. So even though all the cells are valid and none are dirty it will still validate them all. That's a waste. Here's an example of what I have: void dataGridView_CellValidating(object sender, DataGridViewCellValidatingEventArgs e) { // Ignore cell if it's not dirty if (!dataGridView.IsCurrentCellDirty) return; // Validate current cell. } void dataGridView_RowValidating(object sender, DataGridViewCellCancelEventArgs e) { // Ignore Row if it's not dirty if (!dataGridView.IsCurrentRowDirty) return; // Validate all cells in the current row. } void dataGridView_CellEndEdit(object sender, DataGridViewCellEventArgs e) { // Validate all cells in the current row and return if any are invalid. // If they are valid, save changes to the database. // This is when I would expect dataGridView.IsCurrentRowDirty to be false. // When this row loses focus it will trigger RowValidating and validate all // cells in this row, which we already did above. } I've read posts that said I could call the form's Validate() method, but that will cause RowValidating to fire, which is what I'm trying to avoid. Any idea how I can set DataGridView.IsCurrentRowDirty to false? Or maybe a way to prevent RowValidating from unnecessarily validating all the cells?

    Read the article

  • Properly registering JavaScript and CSS in MVC 2 Editor Templates

    - by Jaxidian
    How do I properly register javascript blocks in an ASP.NET MVC 2 (RTM) Editor template? The specific scenario I'm in is that I want to use Dynarch JSCal2 DateTimePicker for my standard datetime picker, but this question is in general to any reusable javascript package. I have my template working properly now but it has my JS and CSS includes in my master page and I would rather only include these things if I actually need them: <link rel="stylesheet" type="text/css" href="../../Content/JSCal2-1.7/jscal2.css" /> <link rel="stylesheet" type="text/css" href="../../Content/JSCal2-1.7/border-radius.css" /> <script type="text/javascript" src="../../Scripts/JSCal2-1.7/jscal2.js"></script> <script type="text/javascript" src="../../Scripts/JSCal2-1.7/lang/en.js"></script> So obviously I could just put these lines into my template, but then if I have a screen that has 5 DateTimePickers, then this content would be duplicated 5 times which wouldn't be ideal. Anyways, I still want my View's Template to trigger this code being put into the <head> of my page. While it is completely unrelated to my asking this question, I thought I'd share my template on here (so far) in case it's useful in any way: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<DateTime>" %> <%= Html.TextBoxFor(model => Model) %> <input type="button" id="<%= ViewData.TemplateInfo.GetFullHtmlFieldId("cal-trigger") %>" value="..." /> <script type="text/javascript"> var <%= ViewData.TemplateInfo.GetFullHtmlFieldId("cal") %> = Calendar.setup({ trigger : "<%= ViewData.TemplateInfo.GetFullHtmlFieldId(string.Empty) %>", inputField : "<%= ViewData.TemplateInfo.GetFullHtmlFieldId(string.Empty) %>", onSelect : function() { this.hide(); }, showTime : 12, selectionType : Calendar.SEL_SINGLE, dateFormat : '%o/%e/%Y %l:%M %P' }); </script>

    Read the article

  • What is the scope of CONTEXT_INFO in SQL Server?

    - by JasonS
    I am using CONTEXT_INFO to pass a username to a delete trigger for the purposes of an audit/history table. I'm trying to understand the scope of CONTEXT_INFO and if I am creating a potential race condition. Each of my database tables has a stored proc to handle deletes. The delete stored proc takes userId as an parameter, and sets CONTEXT_INFO to the userId. My delete trigger then grabs the CONTEXT_INFO and uses that to update an audit table that indicates who deleted the row(s). The question is, if two deletes sprocs from different users are executing at the same time, can CONTEXT_INFO set in one of the sprocs be consumed by the trigger fired by the other sproc? I've seen this article http://msdn.microsoft.com/en-us/library/ms189252.aspx but I'm not clear on the scope of sessions and batches in SQL Server which is key to the article being helpful! I'd post code, but short on time at the moment. I'll edit later if this isn't clear enough. Thanks in advance for any help.
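
    For reference, a minimal sketch of the pattern being described (identifier names are hypothetical, not taken from the question). CONTEXT_INFO is scoped to the session, so a trigger fired by one session's stored procedure reads back only the value that same session set:

        -- In the delete stored procedure:
        DECLARE @ctx varbinary(128) = CAST(@userId AS varbinary(128));
        SET CONTEXT_INFO @ctx;
        DELETE FROM dbo.SomeTable WHERE id = @rowId;

        -- In the delete trigger:
        DECLARE @deletedBy int = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);
        INSERT INTO dbo.SomeTable_Audit (row_id, deleted_by, deleted_on)
        SELECT d.id, @deletedBy, GETDATE()
        FROM DELETED AS d;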

    Read the article

  • Using a Case statement within the values section of an Insert statement

    - by mattgcon
    Please forgive my ignorance and poor SQL programming skills but I am normally a basic SQL developer. I need to create a trigger off the insertion of data in one table to insert different data into another table. Within this trigger I need to insert certain data into the new table based upon values within the newly inserted data from the original table. I am totally confused on this. i thought I would be creative and use a case statement within teh Values section but it is not working. Can anyone please help me on this? (below is the code for the trigger that I have as of now) INSERT INTO dbo.WebOnlineUserPeopleDashboard ( ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY, ONLINE_ROOMS_LIST, ONLINE_ROOMS_PLACEMENT, ONLINE_ROOMS_MANAGEMENT, ONLINE_MAILINGLIST_DIRECTORY, ONLINE_MAILINGLIST_LIST, ONLINE_MAILINGLIST_MEMBERS, ONLINE_MAILINGLIST_MANAGER, ONLINE_PEOPLESEARCH_DIRECTORY ) VALUES IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 1 BEGIN SELECT ONLINE_USERACCOUNT_ID, 1, 1, 1, 1, 1, 1, 1, 1, 1 FROM INSERTED END ELSE IF (SELECT ONLINE_PEOPLE_FULL_ACCESS FROM INSERTED) = 0 BEGIN SELECT ONLINE_USERACCOUNT_ID, 0, 0, 0, 0, 0, 0, 0, 0, 0 FROM INSERTED END ELSE BEGIN SELECT ONLINE_USERACCOUNT_ID, CASE --DIRECTORY WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 1 OR ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_ADD = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_UPDATE = 0 AND ONLINE_PEOPLE_ROOMS_PLACEMENT_DELETE = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 1 THEN 1 WHEN ONLINE_PEOPLE_ROOMS_MANAGEMENT_FULL_ACCESS = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 1 OR ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_FULL_ACCESS = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_VIEW = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_ADD = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_UPDATE = 0 AND ONLINE_PEOPLE_MAILING_LISTS_MEMBERS_DELETE = 0 THEN 0 END, CASE WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 1 WHEN ONLINE_PEOPLE_MAILING_LISTS_ADD = 1 OR ONLINE_PEOPLE_MAILING_LISTS_UPDATE = 1 OR ONLINE_PEOPLE_MAILING_LISTS_DELETE = 1 THEN 0 END, CASE WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 1 THEN 1 WHEN ONLINE_PEOPLE_PEOPLE_SEARCH = 0 THEN 0 END FROM INSERTED END END
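
    One note on the statement shape (a sketch, using the column names from the trigger above): a VALUES clause cannot contain IF blocks or other control flow, but an INSERT ... SELECT can compute each column with a CASE expression, which folds the three branches into a single statement. Only two target columns are shown; the rest would follow the same pattern:

        INSERT INTO dbo.WebOnlineUserPeopleDashboard (ONLINE_USERACCOUNT_ID, ONLINE_ROOMS_DIRECTORY)
        SELECT ONLINE_USERACCOUNT_ID,
               CASE
                   WHEN ONLINE_PEOPLE_FULL_ACCESS = 1 THEN 1
                   WHEN ONLINE_PEOPLE_FULL_ACCESS = 0 THEN 0
                   WHEN ONLINE_PEOPLE_ROOMS_PLACEMENT_FULL_ACCESS = 1
                        OR ONLINE_PEOPLE_ROOMS_PLACEMENT_VIEW = 1 THEN 1
                   ELSE 0
               END
        FROM INSERTED;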

    Read the article

  • XAML itemscontrol visibility

    - by Sam
    Hello, I have an ItemsControl in my XAML code. When a certain trigger occurs I want to collapse the whole ItemsControl, i.e. all of its elements. <ItemsControl Name="VideoViewControl" ItemsSource="{Binding Videos}"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <WrapPanel ItemHeight="120" ItemWidth="160" Name="wrapPanel1"/> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ItemsControl.ItemTemplate> <DataTemplate> <views:VideoInMenuView /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> The trigger: <DataTrigger Value="videos" Binding="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type UserControl}, AncestorLevel=1}, Path=DataContext.VideosEnable}"> <Setter Property="ScrollViewer.Visibility" Value="Visible" TargetName="test1" /> <Setter Property="ScrollViewer.Visibility" Value="Collapsed" TargetName="test2" /> <Setter Property="WrapPanel.Visibility" Value="Collapsed" TargetName="wrapPanel1" /> </DataTrigger> When I add the last setter the program crashes. Without this last setter it works fine but no visibility change.... What is wrong with this code? What is the right method to collapse all the elements of an ItemsControl with a trigger?

    Read the article

  • WPF DataTemplate - Overlay

    - by David Ward
    I have a class that I need to provide the datatemplate for. Currently I have two datatemplates, one for when Enabled == true and one for when Enabled == false. The datatemplate for the class is actually the one below which changes the template used based on a trigger: <DataTemplate x:Key="ActionNodeTemplateSelector"> <ContentPresenter Content="{Binding}" Name="cp" /> <DataTemplate.Triggers> <DataTrigger Binding="{Binding Enabled}" Value="True"> <Setter TargetName="cp" Property="ContentTemplate" Value="{StaticResource ActionNodeTemplate}" /> </DataTrigger> <DataTrigger Binding="{Binding Enabled}" Value="False"> <Setter TargetName="cp" Property="ContentTemplate" Value="{StaticResource ActionNodeDisabledTemplate}" /> </DataTrigger> </DataTemplate.Triggers> </DataTemplate> This all works well. However, now I want to also display an image overlay if the "Completed" property is true and a different image if it is incomplete. I could easily carry on using my approach to trigger on the Completed property as well but this would double the number of templates and feels wrong. Is there a way that I could have a trigger on my DataTemplate that will allow me to overlay the correct image over the existing template which is based on the Enabled property?

    Read the article

  • Jquery event to fire in ajax loaded content, IE & FF problem

    - by Sylph
    Hello, I'm trying to trigger onclick events in ajax-loaded content but it doesn't seem to work. Here is my code :- <li><a href="#" id="subtopic">Title</a></li> <script type="text/javascript"> $(function() { $("#subtopic").click(function() { $.ajax({ url: "test.html", success: function(data) { $('#result').html(data); $(".filetree").treeview(); $('#result').click(); $('#result').trigger("update"); } }); }); }); </script> In my loaded content, :- <a href="#" id="addSubTopic">Click here</a> <script type="text/javascript"> $(document).ready(function() { alert("000"); }); $(function() { $("#addSubTopic").click(function() { alert("0"); $.ajax({ url: url, success: function(data) { $("#listRes").append(data); } }); }); }); </script> In IE it works fine, the click works and I can get the "alert", but in Firefox the whole thing just goes dead. However, my jQuery plugin $(".filetree").treeview(); works fine in both IE and FF. Any advice? I have tried using .live and .trigger("update"). Thanks in advance

    Read the article

  • Multithreading in Java

    - by Vijay Selvaraj
    I'm working with core java and IBM Websphere MQ 6.0. We have a standalone module say DBcomponent that hits the database and fetches a resultset based on the runtime query. The query is passed to the application via MQ messaging medium. We have a trigger configured for the queue which invokes the DBComponent whenever a message is available in the queue. The DBComponent consumes the message, constructs the query and returns the resultset to another queue. In this overall process we use log4j to log statements on a log file for auditing. The connection is pooled to the database using Apache pool. I am trying to check whether the log messages are logged correctly using a sample program. The program places the input message to the queue and checks for the logs in the log file. Its expected for the trigger method invocation to complete before i try to check for the message in log file, but every time my program to check for log message gets executed first leading my check to failure. Even if i introduce a Thread.sleep(time) doesn't solves the case. How can i make it to keep my method execution waiting until the trigger operation completes? Any suggestion will be helpful.

    Read the article

  • Jquery Using Jeditable and activating on click

    - by BandonRandon
    I found this to be almost exactly what I'm trying to do. I'm using Jeditable I can get the default setup to work. I've also been able to get the code in the forum above to work. I believe my problem is that because I'm using a table I need to so something else to select the previous element. Here is my HTML <table> <tr> <td width="5%"><input class="cat_checkbox" type="checkbox" name='delete_cat[]' value='<?php echo("$cat_data[cat_id]");?>' /></td> <td width="90%" class="edit_cat_title" id='unique_id'>Category</td> <td width="5%"><a href="#" class="edit_cat_title_trigger"><img src="images/edit.gif" border="0"></a></td> </tr> </table> and here is my JQuery: //modify title content on the fly $('.edit_cat_title').editable('action.php', { name : 'cat_title', indicator : 'Saving...', submit : 'OK', cancel : 'Cancel', tooltip:'click to edit', event : 'edit' }); //trigger with the click of the edit image $(".edit_cat_title_trigger").bind("click", function() { $(this).prev().trigger("edit_cat_title"); }); I know I probably should be able to figure it out and I know all i have to do is change $(this).prev().trigger("edit_cat_title"); to the right thing but I'm still really new to Jquery. Thanks

    Read the article

  • How to detect when video is buffering?

    - by Leon
    Hi guys, my question today deals with Flash AS3 video buffering. (Streaming or Progressive) I want to be able to detect when the video is being buffered, so I can display some sort of animation letting the user know to wait just a little longer. Currently my video will start up, hold on frame 1 for 3-4 secs then play. Kinda giving the impression that the video is paused or broken :( Update Thanks to iandisme I believe I'm faced in the right direction now. NetStatusEvent from livedocs. It seems to me that the key status to be working in is "NetStream.Buffer.Empty" so I added some code in there to see if this would trigger my animation or a trace statement. No luck yet, however when the Buffer is full it will trigger my code :/ Maybe my video is always somewhere between Buffer.Empty and Buffer.Full that's why it won't trigger any code when I test case for Buffer.Empty? Current Code public function netStatusHandler(event:NetStatusEvent):void { // handles net status events switch (event.info.code) { case "NetStream.Buffer.Empty": trace("¤¤¤ Buffering!"); //<- never traces addChild(bufferLoop); //<- doesn't execute break; case "NetStream.Buffer.Full": trace("¤¤¤ FULL!"); //<- trace works here removeChild(bufferLoop); //<- so does any other code break; case "NetStream.Buffer.Flush": trace("¤¤¤ FLUSH!"); //Not sure if this is important break } }

    Read the article

  • How can I bind another DependencyProperty to the IsChecked Property of a CheckBox?

    - by speedmetal
    Here's an example of what I'm trying to accomplish: <Window x:Class="CheckBoxBinding.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525"> <StackPanel> <CheckBox Name="myCheckBox">this</CheckBox> <Grid> <Grid.Resources> <Style TargetType="ListBox"> <Style.Triggers> <Trigger Property="{Binding ElementName=myCheckBox, Path=IsChecked}" Value="True"> <Setter Property="Background" Value="Red" /> </Trigger> </Style.Triggers> </Style> </Grid.Resources> <ListBox> <ListBoxItem>item</ListBoxItem> <ListBoxItem>another</ListBoxItem> </ListBox> </Grid> </StackPanel> </Window> When I try to run it, I get this XamlParseException: A 'Binding' cannot be set on the 'Property' property of type 'Trigger'. A 'Binding' can only be set on a DependencyProperty of a DependencyObject. So how can I bind a property on the ListBox to the IsChecked property of a CheckBox?

    Read the article

  • MySQL default value based on view

    - by Jake
    Basically I have a bunch of views based on a simple discriminator column (eg. CREATE VIEW tablename AS SELECT * FROM tablename WHERE discrcolumn = "discriminator value"). Upon inserting a new row into this view, it should insert "discriminator value" into discrcolumn. I tried this, but apparently MySQL doesn't figure this out itself, as it throws an error "Field of view viewname underlying table does not have a default value". The discriminator column is set to NOT NULL of course. How do I mend this? Perhaps a pre-insert trigger? UPDATE: Triggers won't work on views, see below comment. Would it work to create a trigger on the table which uses a variable, and set that variable at establishing the connection? For each connection the value of that variable would be the same, but it could differ from other connections. EDIT: This appears to work... Setup: CREATE TRIGGER insert_[tablename] BEFORE INSERT ON [tablename] FOR EACH ROW SET NEW.[discrcolumn] = @variable Runtime: SET @variable = [descrvalue]; INSERT INTO [viewname] ([columnlist]) VALUES ([values]);

    Read the article

  • JQuery error in IE, works with FF. Maybe a problem with live.

    - by olve
    Hello. I have an ASP.net MVC2 application. In wich I'm using JQuery to alter all table rows so I can click anywhere in any row to trigger a click event on a link in the clicked row. The tables is created using MVC's built in partialview ajax. Here is my JQuery script. <script type="text/javascript"> $(document).ready(function () { $('table tr').live('click',function (event) { $('#asplink', this).trigger('click'); }) .live('mouseenter',function (event) { this.bgColor = 'lightblue'; }) .live('mouseleave', function (event) { this.bgColor = 'white'; }); }); </script> And this is the first part of the partial view code that creates the table. <% foreach (var item in Model.JobHeaderData) { %> <tr> <td> <a id="asplink" href="http://localhost/sagstyring/EditJob.asp?JobDataID=<%: item.JobDataId %>&JobNumId=<%: item.JobNumID%>&JobNum=<%: item.JobNumID%>&DepId=1&User_Id=<%:ViewData["UserId"]%>" onclick="window.open(this.href,'Rediger sag <%: item.JobNumID %> ', 'status=0, toolbar=0, location=0, menubar=0, directories=0, resizeable=0, scrollbars=0, width=900, height=700'); return false;">Rediger</a> </td> In firefox this works perfectly. In IE, JQuery crashes when I click on a row. If I debug the page in IE. I get this. Out of stack space In jquery-1.4.1.js line 1822 // Trigger the event, it is assumed that "handle" is a function var handle = jQuery.data( elem, "handle" ); if ( handle ) { handle.apply( elem, data ); } I'm no eagle at javascript, so I'm pretty much stuck.

    Read the article

  • copy rows before updating them to preserve archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows were at any point in time even if the query has JOINs. Consider a system where the primary resource is books, that is, books are queried for, and author info comes along for the ride CREATE TABLE authors ( author_id INTEGER NOT NULL, version INTEGER NOT NULL CHECK (version > 0), author_name TEXT, is_active BOOLEAN DEFAULT '1', modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (author_id, version) ) INSERT INTO authors (author_id, version, author_name) VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest'); I would like to be able to update the above like so UPDATE authors SET author_name = 'Jack K' WHERE author_id = 1; and end up with 2, 1, Jack, t, 2012-03-29 21:35:00 2, 2, Jack K, t, 2012-03-29 21:37:40 which I can then query with SELECT author_name, modified_on FROM authors WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00' ORDER BY version DESC LIMIT 1; to get 2, 1, Jack, t, 2012-03-29 21:35:00 Something like the following doesn't really work CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$ BEGIN IF (TG_OP = 'UPDATE') THEN -- The following fails because author_id,version PK already exists INSERT INTO authors (author_id, version, author_name) VALUES (OLD.author_id, OLD.version, OLD.author_name); UPDATE authors SET version = OLD.version + 1 WHERE author_id = OLD.author_id AND version = OLD.version; RETURN NEW; END IF; RETURN NULL; -- result is ignored since this is an AFTER trigger END; $archive_author$ LANGUAGE plpgsql; CREATE TRIGGER archive_author AFTER UPDATE OR DELETE ON authors FOR EACH ROW EXECUTE PROCEDURE archive_authors(); How can I achieve the above? Or, is there a better way to accomplish this? Ideally, I would prefer to not create a shadow table to store the archived rows.
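
    One arrangement that avoids the duplicate-key INSERT (a sketch only, not necessarily the best design): bump the version on the row being updated in a BEFORE trigger, then archive the OLD values from an AFTER trigger, at which point the original (author_id, version) pair is free again:

        CREATE OR REPLACE FUNCTION bump_author_version() RETURNS TRIGGER AS $$
        BEGIN
            NEW.version := OLD.version + 1;
            NEW.modified_on := CURRENT_TIMESTAMP;
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE OR REPLACE FUNCTION archive_author_row() RETURNS TRIGGER AS $$
        BEGIN
            -- Re-insert the pre-update values as the archived version.
            INSERT INTO authors (author_id, version, author_name, is_active, modified_on)
            VALUES (OLD.author_id, OLD.version, OLD.author_name, OLD.is_active, OLD.modified_on);
            RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER bump_author_version BEFORE UPDATE ON authors
            FOR EACH ROW EXECUTE PROCEDURE bump_author_version();
        CREATE TRIGGER archive_author_update AFTER UPDATE ON authors
            FOR EACH ROW EXECUTE PROCEDURE archive_author_row();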

    Read the article

  • UDP server doesn't accept calls from outside

    - by rayman
    Hi, I've implemented a simple UDP server on my Android device (SDK 1.5). It works fine when I run a local client on the phone that sends a trigger through it to my server, but when I try to get a UDP call from an outside server to my phone, it doesn't work. I've already made sure the outside server isn't blocked by a firewall and that it's sending the UDP trigger to the right port, which my phone is listening on. I used netstat on the phone and checked that the phone is really listening on its local IP and the port I've set it to. Here is my server code (on the device): // server will listen to one client try { Thread udpServerThread = new Thread() { @Override public void run() { try { // Retrieve the ServerName InetAddress serverAddr = InetAddress.getByName("localhost"); Log.d("UDP", "S: Connecting..."); // Create new UDP-Socket socket = new DatagramSocket(SERVERPORT, serverAddr); byte[] buf = new byte[17]; // * Prepare a UDP-Packet that can contain the data we // * want to receive DatagramPacket packet = new DatagramPacket(buf, buf.length); Log.d("UDP", "S: Receiving..."); // wait to Receive the UDP-Packet socket.receive(packet); Log.d("UDP", "S: Received: '" + new String(packet.getData()) + "'"); acceptedMsg = new String(packet.getData()); notifyService(acceptedMsg); Log.d("UDP", "S: Done."); } catch (Exception e) { Log.e("UDP", "S: Error", e); } } }; udpServerThread.start(); } catch (Exception E) { Log.e("r", E.getMessage()); } So as I said, when I try it with a local client (separate thread) which sends a UDP trigger it works fine, but when I take this client implementation and put it on an outside real server, after the UDP is sent the phone doesn't respond to it. Any idea? Thanks, ray.

    Read the article
