Search Results

  • Static DataTable or DataSet in a class - bad idea?

    - by Superbest
    I have several instances of a class. Each instance stores data in a common database. So, I thought "I'll make the DataTable table field static; that way every instance can just add/modify rows through its own table field, but all the data will actually be in one place!" However, apparently it's a bad idea to use static fields, especially with databases: Don't Use "Static" in C#? Is this a bad idea? Will I run into problems later on if I use it? This is a small project, so I can accept poor testability as a compromise if that is the only drawback. The benefit of using a static table is that there can be many objects of type MyClass but only one table they all talk to, so a static field seems to be an implementation of exactly this, while keeping the syntax concise. I don't see why I shouldn't use a static field (although I wouldn't really know), but if I had to avoid it, the best alternative I can think of is creating one DataTable and passing a reference to it when creating each instance of MyClass, perhaps as a constructor parameter. But is this really an improvement? It seems less intuitive than a static field.
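
    For illustration only (this is not from the question), a minimal C# sketch of the two approaches being weighed, a shared static table versus a table passed in through the constructor; the schema is hypothetical:

        using System.Data;

        class MyClassStatic
        {
            // One table shared by every instance; convenient, but it is hidden
            // global state, which makes isolated unit testing hard
            private static readonly DataTable Table = CreateTable();

            static DataTable CreateTable()
            {
                var table = new DataTable("Shared");
                table.Columns.Add("Value", typeof(string)); // hypothetical schema
                return table;
            }

            public void Add(string value)
            {
                Table.Rows.Add(value);
            }
        }

        class MyClassInjected
        {
            private readonly DataTable table;

            // The alternative from the question: the shared table is passed in
            // explicitly, so callers (and tests) control what is shared
            public MyClassInjected(DataTable table)
            {
                this.table = table;
            }

            public void Add(string value)
            {
                this.table.Rows.Add(value);
            }
        }

    With the injected version, the caller creates one DataTable and hands the same reference to every new MyClassInjected, which gives the same "one table" behaviour without the global.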

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that searches across 2 APIs right now. I have my own RESTful web service that I call, and it does all the work on the back end to asynchronously call the 2 APIs and concatenate their results into one result set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding ... and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the back end to handle the multiple API calls without all the extra complexity this adds? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients). It's also more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
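
    For what it's worth, here is a rough C# sketch (hypothetical names, not the poster's service) of fanning out to several APIs with a per-call timeout, so that one hung API only drops out of the combined result instead of blocking it:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;

        class Aggregator
        {
            static readonly HttpClient Client = new HttpClient();

            // Hypothetical endpoint list; each API is queried independently
            public static async Task<List<string>> SearchAllAsync(string query, IEnumerable<string> apiUrls)
            {
                var calls = apiUrls
                    .Select(url => CallWithTimeoutAsync(url + "?q=" + Uri.EscapeDataString(query)))
                    .ToList();

                var responses = await Task.WhenAll(calls);

                // Drop the APIs that timed out or failed; keep whatever arrived
                return responses.Where(r => r != null).ToList();
            }

            static async Task<string> CallWithTimeoutAsync(string url)
            {
                try
                {
                    var call = Client.GetStringAsync(url);
                    var finished = await Task.WhenAny(call, Task.Delay(TimeSpan.FromSeconds(5)));
                    return finished == call ? await call : null; // null marks a slow API
                }
                catch (HttpRequestException)
                {
                    return null; // null marks a failed API
                }
            }
        }

    Returning partial results this way keeps the single service call, while streaming results as they arrive would instead need something like a chunked or push-style response.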

    Read the article

  • MS Access Query Criteria Issue

    - by xxl3ww
    Currently I have an MS Access database query with a field named FedEXDetTotal that totals 9 FedEx charge fields. I have another field, from our in-house system, called "Total Charge"; this is just a normal number field. I have created another field in this query, Diff: [FedEXDetTotal]-[Total Charge], which tells me the difference between the FedEx charge and what we actually charged. Everything works OK with this, but when I put the criteria 5 on the Diff field and run the query, I get a prompt saying "Enter Parameter Value: FedEXDetTotal". Why is Access doing this, and how do I get around it? I'm trying to start out with something simple (5), but what I really want is [Forms]![Dis].[txtbox_Diff].

    Read the article

  • Operations on multiple overlapping layers not working

    - by Arun
    Hi, I am developing a game in Android similar to FarmVille by Zynga. In that game you have to place elements on a diamond-shaped field so that they don't overlap each other. I have written the code for placing an element inside the farm field, but I cannot stop the farm elements from overlapping. I am attaching the code I have so far; someone please help me.

        try {
            // Only draw if none of the sampled pixels around (initX, initY) are occupied
            if (bm1.getPixel((int) initX, (int) initY) != 0
                    && bm1.getPixel((int) initX, (int) initY + 20) != 0
                    && bm1.getPixel((int) initX - 20, (int) initY) != 0
                    && bm1.getPixel((int) initX + 20, (int) initY) != 0
                    && bm1.getPixel((int) initX, (int) initY - 20) != 0) {
                c.drawBitmap(bm, initX - 30, initY - 20, paint);
            }
        } catch (Exception e) {
            // show() was missing in the original, so the Toast never appeared
            Toast.makeText(getContext(), e.toString(), Toast.LENGTH_SHORT).show();
        }

    Read the article

  • June 14th Webcast: Using Personalization in Oracle eAM

    - by Oracle_EBS
    ADVISOR WEBCAST: Using Personalization in Oracle eAM
    PRODUCT FAMILY: Enterprise Asset Management
    June 14, 2012 at 11 am ET, 9 am MT, 8 am PT

    Personalization is the ability within an E-Business Suite instance to make changes to the look and behavior of OA Framework-based pages without programming (changes which, therefore, survive upgrades).

    TOPICS WILL INCLUDE: Oracle eAM customers have leveraged Personalization to address many of the following use cases:

    - making a non-mandatory field mandatory on a Work Order
    - adding a new field to the header region
    - entering the specifications for a new field
    - disabling a field to prevent user entry
    - adding Asset Description to the Create Work Request page
    - viewing the Asset Hierarchy in the Create Work Request page
    - setting the Auto Request Material flag to Yes by default

    A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. The current schedule can be found in Note 740966.1. Post-presentation recordings can be found in Note 740964.1.

    Read the article

  • Useful WatiN Extension Methods

    - by Steve Wilkes
    I've been doing a fair amount of UI testing using WatiN recently – here are some extension methods I've found useful.

    The first checks whether a WatiN TextField is actually a hidden field. WatiN makes no distinction between text and hidden inputs, so this can come in handy if you sometimes render an input as hidden and sometimes as a visible text field. Note that this doesn't check whether an input is visible (I've got another extension method for that in a moment); it checks whether it's hidden.

        public static bool IsHiddenField(this TextField textField)
        {
            if (textField == null || !textField.Exists)
            {
                return false;
            }

            var textFieldType = textField.GetAttributeValue("type");

            return (textFieldType != null) && textFieldType.ToLowerInvariant() == "hidden";
        }

    The next method quickly sets the value of a text field to a given string. By default WatiN types the text you give it into a text field one character at a time, which can be necessary if you want to test behaviour triggered by individual key presses, but which most of the time is just painfully slow; this method dumps the text in in one go. Note that if the field is not hidden, it is given focus first; this helps trigger validation once the value has been set and focus moves elsewhere.

        public static void SetText(this TextField textField, string value)
        {
            if ((textField == null) || !textField.Exists)
            {
                return;
            }

            if (!textField.IsHiddenField())
            {
                textField.Focus();
            }

            textField.Value = value;
        }

    Finally, here's a method which checks whether an Element is currently visible. It does so by walking up the DOM and checking for a Style.Display of 'none' on any element between the one on which the method is invoked and any of its ancestors.

        public static bool IsElementVisible(this Element element)
        {
            if ((element == null) || !element.Exists)
            {
                return false;
            }

            while ((element != null) && element.Exists)
            {
                if (element.Style.Display.ToLowerInvariant().Contains("none"))
                {
                    return false;
                }

                element = element.Parent;
            }

            return true;
        }

    Hope they come in handy.
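
    A quick usage sketch (the page URL and element ID here are hypothetical, just to show the three methods together):

        using System;
        using WatiN.Core;

        class Example
        {
            [STAThread] // WatiN's IE automation needs a single-threaded apartment
            static void Main()
            {
                using (var browser = new IE("http://example.com/form"))
                {
                    var field = browser.TextField(Find.ById("firstName"));

                    // Only type into the field if it isn't rendered as a hidden input
                    if (!field.IsHiddenField())
                    {
                        field.SetText("Steve"); // sets the value in one go, not key by key
                    }

                    Console.WriteLine(field.IsElementVisible());
                }
            }
        }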

    Read the article

  • How to link specific ports to specific domains with Apache virtual hosts?

    - by theJoe
    We have a forward-facing Linux box running Apache HTTP Server that acts as a reverse proxy for several back-end servers. The servers are accessed through specific domain names and ports, and are set up as virtual hosts within Apache as follows:

        Listen 8001
        Listen 8002

        <Virtualhost *:8001>
            ServerName service.one.mycompany.com
            ProxyPass / http://internal.one.mycompany.com:8001/
            ProxyPassReverse / http://internal.one.mycompany.com:8001/
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </Virtualhost>

        <Virtualhost *:8002>
            ServerName service.two.mycompany.com
            ProxyPass / http://internal.two.mycompany.com:8002/
            ProxyPassReverse / http://internal.two.mycompany.com:8002/
            RewriteEngine On
            RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
            RewriteRule .* - [F]
        </Virtualhost>

    The proxy server has only one IP address, and both domains point to it. Accessing internal.one via service.one works fine, as does accessing internal.two via service.two. The problem is that Apache does not take the requesting domain into account when accessing the virtual hosts: both domains work for both ports. Requests for service.one:8002 proxy to internal.two:8002, and requests for service.two:8001 proxy to internal.one:8001, where ideally both of these requests should be denied. I can get around this by creating more virtual hosts that explicitly deny these requests:

        NameVirtualHost *:8001
        NameVirtualHost *:8002

        <Virtualhost *:8001>
            ServerName service.two.mycompany.com
            Redirect permanent / http://errorpage.mycompany.com/
        </Virtualhost>

        <Virtualhost *:8002>
            ServerName service.one.mycompany.com
            Redirect permanent / http://errorpage.mycompany.com/
        </Virtualhost>

    But this is not an ideal solution, since we plan to add more services to the proxy: each new port would need to be explicitly denied on all the other domains, and each new domain would need to be explicitly denied on all ports it does not use. As we add more services, the number of virtual hosts could get out of hand quickly. My question, then, is whether there is a better way. Can we explicitly tie specific ports to specific domains in a virtual host, so that only that domain-port combination is processed and all other combinations are not?

    Things I've tried:

    - Adding NameVirtualHost *:8001, etc. without the additional virtual hosts.
    - Setting ProxyRequests On and Off, as well as ProxyPreserveHost On and Off.
    - Adding the server name or IP address to the virtual host header, e.g. <VirtualHost service.one.mycompany.com:8001>.
    - Using the <proxy> directive inside the virtual host directive.
    - Lots and lots of googling.

    The proxy server is running CentOS 6.2 64-bit with Apache HTTPD Server 2.2.15. As mentioned, the proxy server has only one IP address, and all the domains we are using point to it.

    Read the article

  • Shell Script to Start Mysql Server if not running

    - by user103373
    I have written a shell script to start the MySQL server and send a mail to the admin user when the script restarts it. The issue I am facing is that if I run the shell script in a terminal it works perfectly, but if the same script runs via a cron job it only sends the mail to the user and the problem remains. Is this problem related to permissions, and how can I resolve it? Shell script:

        #!/bin/bash
        EMAIL="[email protected]"
        SERVICE='mysql'
        if ps ax | grep -v grep | grep $SERVICE > /dev/null
        then
            echo "$SERVICE service running, everything is fine"
        else
            echo "$SERVICE is not running"
            /etc/init.d/mysql start
            cat <<EOF | msmtp -a gmail $EMAIL
        Subject: "Alert (Test Server) : Mysql Service is not running (Manually Restarted)"
        Mysql Server Restarted at: `date`
        EOF
        fi

    I am using msmtp for sending mail to the user on an Ubuntu 12.04 server.

    Read the article

  • What is the/Is there a right way to tell management that our code sucks?

    - by Azkar
    Our code is bad. It might not always have been considered bad, but it is bad and is only going downhill. I started fresh out of college less than a year ago, and many of the things in our code puzzle me beyond belief. At first I figured that as the new guy I should keep my mouth shut until I learned a little more about our code base, but I've seen plenty to know that it's bad. Some of the highlights:

    - We still use frames (try getting something out of a querystring; almost impossible)
    - VBScript
    - SourceSafe
    - We 'use' .NET, by which I mean we have .NET wrappers that call COM DLLs, making it almost impossible to debug easily
    - Everything is basically one giant function
    - Code is not maintainable; each page has multiple files that are created every time a new page is made

    The main page basically calls Response.Write() a bunch of times to render the HTML (runat="server"? no way). After that there can be a lot of logic on the client side (VBScript), and finally the page submits to itself (often storing many things in hidden fields), where it then posts to a processing page which can do things such as save the data to the database. The specifications we get are laughable. Often they call for things like "auto-populate field X with either field Y or field Z", with no indication of when to choose field Y or field Z. I'm sure some of this is a result of not being employed at a software company, but I feel that people writing software should at least care about the quality of their code. I can't even imagine that anything would be done soon if I were to bring something up, as there is a large deadline looming, but we are continuing to write bad code and use bad practices. What can I do? How do I even bring these issues up? 75% of my team agree with me and have brought up these issues in the past, yet nothing gets changed.

    Read the article

  • Storing images in file system and returning URLs or virtually resizing and returning byte arrays?

    - by ismaelf
    I need to create a REST web service to manage user-submitted images and display them all on a website. Multiple websites are going to use this service to manage and display images. The requirement is to have 5 pre-defined image sizes available. The 2 options I see are the following:

    Option 1: The web service creates the 5 images when the user submits the original, stores them in the file system, and stores the URLs in the database. When an image is requested, the web service returns an array of URLs. I see this option as being a little hard on the hard drive. The estimates are 10,000 users per site and, let's say, 100 sites. The heavy processing is done when the user submits the image, and each image is then served straight from the file system.

    Option 2: The web service stores just the image that the user submits in the file system, and its URL in the database. When the user requests images, the web service gets the info from the DB, loads the image into memory, creates its 5 instances, and returns an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory. The heavy processing is done when the images get requested.

    A plus I see for option 2 is that it gives me the option to rewrite the image URLs and make them site-dependent (prettier), rather than having one image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?
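
    For context, a minimal C# sketch (hypothetical widths and paths, not part of the question) of the resize-once-at-upload approach in option 1:

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        class ImageResizer
        {
            // Hypothetical preset widths; the post calls for 5 pre-defined sizes
            static readonly int[] Widths = { 100, 200, 400, 800, 1600 };

            public static void SaveAllSizes(string sourcePath, string outputDirectory)
            {
                using (var original = new Bitmap(sourcePath))
                {
                    foreach (int width in Widths)
                    {
                        // Scale the height proportionally to the requested width
                        int height = original.Height * width / original.Width;

                        using (var resized = new Bitmap(original, width, height))
                        {
                            string path = Path.Combine(outputDirectory, width + ".jpg");
                            resized.Save(path, ImageFormat.Jpeg);
                        }
                    }
                }
            }
        }

    The trade-off the question describes is exactly where this work runs: once per upload (option 1) or on every uncached request (option 2).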

    Read the article

  • Need some critique on .NET/WCF SOA architecture plan

    - by user998101
    I am working on a refactoring of some services and would appreciate some critique of my general approach. I am working with three back-end data systems and need to expose an authenticated front-end API over HTTP binding, JSON, and REST for internal apps as well as third-party integration. Below is a rough idea that's a hybrid of what I have and where I intend to wind up. I intend to build guidance extensions to support this architecture so that devs can build it out quickly. Here's the current idea for our structure:

    Front-end WCF routing service (spread across multiple IIS servers via hardware load balancer):
    - Load balancing of services behind the routing service is handled within the routing service, probably round-robin
    - One of the services will be a token service
    - Multiple bindings per service, exposed to address JSON, REST, and whatever else comes up later
    - All in/out is handled via POCO DTOs
    - Use Unity to scan for what services are available and expose them

    The front-end services behind the routing service do nothing more than expose the API and convert DTO <-> Entity:
    - Unity injects the service implementation to allow mocking
    - AutoMapper for DTO/Entity conversion
    - Invoke WF services where a response is required immediately
    - Queue to the ESB for async WF; the ESB will invoke the WF later

    Business logic WF layer:
    - Exposes the same API as the front-end services
    - Implements business logic
    - Wraps transaction context where needed
    - Calls out to composite/atomic services

    Composite/atomic services:
    - Exposed as WCF
    - One service per back-end system
    - Standard atomic CRUD operations plus composite operations
    - Supports transaction context

    The questions I have are: Are the separations of concerns outlined above beneficial? The current thought is that each layer below is its own project, except the back-end stuff, where each system gets one project. Each project has a service host, and all the services are under a services folder. Interfaces live in a separate project at each layer. DTOs and entities are in two separate projects under a shared folder. I am currently planning to build dedicated services for shared functionality such as logging, and to overload things like TraceListener to call those services. Is this a valid approach? Any other suggestions/comments?
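
    As a rough illustration of the front-end layer described above (all names are hypothetical; this is not the poster's code), a service that only exposes the API, takes its implementation by injection, and maps entities to DTOs:

        using System.ServiceModel;

        // Hypothetical DTO exposed over the wire
        public class CustomerDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Internal entity, never exposed directly
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer Load(int id);
        }

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            CustomerDto GetCustomer(int id);
        }

        public class CustomerService : ICustomerService
        {
            private readonly ICustomerRepository repository;

            // Implementation injected (e.g. via Unity) so it can be mocked in tests
            public CustomerService(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public CustomerDto GetCustomer(int id)
            {
                Customer entity = repository.Load(id);

                // Hand-rolled mapping; AutoMapper would do this declaratively
                return new CustomerDto { Id = entity.Id, Name = entity.Name };
            }
        }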

    Read the article

  • Deploying WCF Tutorial App on IIS7: "The type could not be found"

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS. I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or anything else to create a new application. I've been right-clicking everywhere like crazy and still can't figure out how to create a new app. I suppose that's probably the root issue, but I tried a few other things (described below) just in case it is not. Here is a picture of what I see in IIS Manager, in case my words don't do it justice:

    This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc. The error says:

        The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.

    I also tried to create a virtual dir (IISHostedCalc) in DotNetPanel that points to IISHostedCalcService. When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc, there is a different error:

        This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

    Interestingly enough, if I click on View Applications, it seems like the virtual directory is an application (see image below)... although, as per the error message above, it doesn't work.

    As per the tutorial, there was no compiling involved; I just dropped the files on the server as follows inside the folder IISHostedCalcService:

        service.svc
        Web.config
        App_Code\
            Service.cs

    service.svc contains:

        <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%>

    (I tried with quotes around the c# attribute, as it looks a little strange without quotes, but it made no difference.)

    Web.config contains:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <services>
              <service name="Microsoft.ServiceModel.Samples.CalculatorService">
                <!-- This endpoint is exposed at the base address provided by the host:
                     http://localhost/servicemodelsamples/service.svc -->
                <endpoint address="" binding="wsHttpBinding" contract="Microsoft.ServiceModel.Samples.ICalculator" />
                <!-- The mex endpoint is exposed at http://localhost/servicemodelsamples/service.svc/mex -->
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
              </service>
            </services>
          </system.serviceModel>
          <system.web>
            <customErrors mode="Off"/>
          </system.web>
        </configuration>

    Service.cs contains:

        using System;
        using System.ServiceModel;

        namespace Microsoft.ServiceModel.Samples
        {
            [ServiceContract]
            public interface ICalculator
            {
                [OperationContract]
                double Add(double n1, double n2);

                [OperationContract]
                double Subtract(double n1, double n2);

                [OperationContract]
                double Multiply(double n1, double n2);

                [OperationContract]
                double Divide(double n1, double n2);
            }

            public class CalculatorService : ICalculator
            {
                public double Add(double n1, double n2) { return n1 + n2; }
                public double Subtract(double n1, double n2) { return n1 - n2; }
                public double Multiply(double n1, double n2) { return n1 * n2; }
                public double Divide(double n1, double n2) { return n1 / n2; }
            }
        }

    Read the article

  • XSLT: a variation on the pagination problem

    - by MarcoS
    I must transform some XML data into a paginated list of fields. Here is an example. Input XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <data>
            <books>
                <book title="t0"/>
                <book title="t1"/>
                <book title="t2"/>
                <book title="t3"/>
                <book title="t4"/>
            </books>
            <library name="my library"/>
        </data>

    Desired output:

        <?xml version="1.0" encoding="UTF-8"?>
        <pages>
            <page number="1">
                <field name="library_name" value="my library"/>
                <field name="book_1" value="t0"/>
                <field name="book_2" value="t1"/>
            </page>
            <page number="2">
                <field name="book_1" value="t2"/>
                <field name="book_2" value="t3"/>
            </page>
            <page number="3">
                <field name="book_1" value="t4"/>
            </page>
        </pages>

    In the above example I assume that I want at most 2 fields named book_n (with n ranging between 1 and 2) per page. <page> tags must have a number attribute. Finally, the field named library_name must appear only in the first <page>. Here is my current solution using XSLT:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0" exclude-result-prefixes="trx xs">
            <xsl:output method="xml" indent="yes" omit-xml-declaration="no" />
            <xsl:variable name="max" select="2"/>
            <xsl:template match="//books">
                <xsl:for-each-group select="book" group-ending-with="*[position() mod $max = 0]">
                    <xsl:variable name="pageNum" select="position()"/>
                    <page number="{$pageNum}">
                        <xsl:for-each select="current-group()">
                            <xsl:variable name="idx" select="if (position() mod $max = 0) then $max else position() mod $max"/>
                            <field value="{@title}">
                                <xsl:attribute name="name">book_<xsl:value-of select="$idx"/></xsl:attribute>
                            </field>
                        </xsl:for-each>
                        <xsl:if test="$pageNum = 1">
                            <xsl:call-template name="templateFor_library"/>
                        </xsl:if>
                    </page>
                </xsl:for-each-group>
            </xsl:template>
            <xsl:template name="templateFor_library">
                <xsl:for-each select="//library">
                    <field name="library_name" value="{@name}" />
                </xsl:for-each>
            </xsl:template>
        </xsl:stylesheet>

    Is there a better/simpler way to perform this transformation?

    Read the article

  • Modify PHP Search Script to Handle Multiple Entries For a Single Input

    - by Thomas
    I need to modify a php search script so that it can handle multiple entries for a single field. The search engine is designed for a real estate website. The current search form allows users to search for houses by selecting a single neighborhood from a dropdown menu. Instead of a dropdown menu, I would like to use a list of checkboxes so that the the user can search for houses in multiple neighborhoods at one time. I have converted all of the dropdown menu items into checkboxes on the HTML side but the PHP script only searches for houses in the last checkbox selected. For example, if I selected: 'Dallas' 'Boston' 'New York' the search engine will only search for houses in New York. Im new to PHP, so I am a little at a loss as to how to modify this script to handle the behavior I have described: <?php require_once(dirname(__FILE__).'/extra_search_fields.php'); //Add Widget for configurable search. add_action('plugins_loaded',array('DB_CustomSearch_Widget','init')); class DB_CustomSearch_Widget extends DB_Search_Widget { function DB_CustomSearch_Widget($params=array()){ DB_CustomSearch_Widget::__construct($params); } function __construct($params=array()){ $this->loadTranslations(); parent::__construct(__('Custom Fields ','wp-custom-fields-search'),$params); add_action('admin_print_scripts', array(&$this,'print_admin_scripts'), 90); add_action('admin_menu', array(&$this,'plugin_menu'), 90); add_filter('the_content', array(&$this,'process_tag'),9); add_shortcode( 'wp-custom-fields-search', array(&$this,'process_shortcode') ); wp_enqueue_script('jquery'); if(version_compare("2.7",$GLOBALS['wp_version'])>0) wp_enqueue_script('dimensions'); } function init(){ global $CustomSearchFieldStatic; $CustomSearchFieldStatic['Object'] = new DB_CustomSearch_Widget(); $CustomSearchFieldStatic['Object']->ensureUpToDate(); } function currentVersion(){ return "0.3.16"; } function ensureUpToDate(){ $version = $this->getConfig('version'); $latest = $this->currentVersion(); if($version<$latest) $this->upgrade($version,$latest); } function upgrade($current,$target){ $options = $this->getConfig(); if(version_compare($current,"0.3")<0){ $config = $this->getDefaultConfig(); $config['name'] = __('Default Preset','wp-custom-fields-search'); $options['preset-default'] = $config; } $options['version']=$target; update_option($this->id,$options); } function getInputs($params = false,$visitedPresets=array()){ if(is_array($params)){ $id = $params['widget_id']; } else { $id = $params; } if($visitedPresets[$id]) return array(); $visitedPresets[$id]=true; global $CustomSearchFieldStatic; if(!$CustomSearchFieldStatic['Inputs'][$id]){ $config = $this->getConfig($id); $inputs = array(); if($config['preset']) $inputs = $this->getInputs($config['preset'],$visitedPresets); $nonFields = $this->getNonInputFields(); if($config) foreach($config as $k=>$v){ if(in_array($k,$nonFields)) continue; if(!(class_exists($v['input']) && class_exists($v['comparison']) && class_exists($v['joiner']))) { continue; } $inputs[] = new CustomSearchField($v); } foreach($inputs as $k=>$v){ $inputs[$k]->setIndex($k); } $CustomSearchFieldStatic['Inputs'][$id]=$inputs; } return $CustomSearchFieldStatic['Inputs'][$id]; } function getTitle($params){ $config = $this->getConfig($params['widget_id']); return $config['name']; } function form_processPost($post,$old){ unset($post['###TEMPLATE_ID###']); if(!$post) $post=array('exists'=>1); return $post; } function getDefaultConfig(){ return array('name'=>'Site Search', 1=>array( 'label'=>__('Key 
Words','wp-custom-fields-search'), 'input'=>'TextField', 'comparison'=>'WordsLikeComparison', 'joiner'=>'PostDataJoiner', 'name'=>'all' ), 2=>array( 'label'=>__('Category','wp-custom-fields-search'), 'input'=>'DropDownField', 'comparison'=>'EqualComparison', 'joiner'=>'CategoryJoiner' ), ); } function form_outputForm($values,$pref){ $defaults=$this->getDefaultConfig(); $prefId = preg_replace('/^.*\[([^]]*)\]$/','\\1',$pref); $this->form_existsInput($pref); $rand = rand(); ?> <div id='config-template-<?php echo $prefId?>' style='display: none;'> <?php $templateDefaults = $defaults[1]; $templateDefaults['label'] = 'Field ###TEMPLATE_ID###'; echo $this->singleFieldHTML($pref,'###TEMPLATE_ID###',$templateDefaults); ?> </div> <?php foreach($this->getClasses('input') as $class=>$desc) { if(class_exists($class)) $form = new $class(); else $form = false; if(compat_method_exists($form,'getConfigForm')){ if($form = $form->getConfigForm($pref.'[###TEMPLATE_ID###]',array('name'=>'###TEMPLATE_NAME###'))){ ?> <div id='config-input-templates-<?php echo $class?>-<?php echo $prefId?>' style='display: none;'> <?php echo $form?> </div> <?php } } } ?> <div id='config-form-<?php echo $prefId?>'> <?php if(!$values) $values = $defaults; $maxId=0; $presets = $this->getPresets(); array_unshift($presets,__('NONE','wp-custom-fields-search')); ?> <div class='searchform-name-wrapper'><label for='<?php echo $prefId?>[name]'><?php echo __('Search Title','wp-custom-fields-search')?></label><input type='text' class='form-title-input' id='<?php echo $prefId?>[name]' name='<?php echo $pref?>[name]' value='<?php echo $values['name']?>'/></div> <div class='searchform-preset-wrapper'><label for='<?php echo $prefId?>[preset]'><?php echo __('Use Preset','wp-custom-fields-search')?></label> <?php $dd = new AdminDropDown($pref."[preset]",$values['preset'],$presets); echo $dd->getInput()."</div>"; $nonFields = $this->getNonInputFields(); foreach($values as $id => $val){ $maxId = max($id,$maxId); if(in_array($id,$nonFields)) continue; echo "<div id='config-form-$prefId-$id'>".$this->singleFieldHTML($pref,$id,$val)."</div>"; } ?> </div> <br/><a href='#' onClick="return CustomSearch.get('<?php echo $prefId?>').add();"><?php echo __('Add Field','wp-custom-fields-search')?></a> <script type='text/javascript'> CustomSearch.create('<?php echo $prefId?>','<?php echo $maxId?>'); <?php foreach($this->getClasses('joiner') as $joinerClass=>$desc){ if(compat_method_exists($joinerClass,'getSuggestedFields')){ $options = eval("return $joinerClass::getSuggestedFields();"); $str = ''; foreach($options as $i=>$v){ $k=$i; if(is_numeric($k)) $k=$v; $options[$i] = json_encode(array('id'=>$k,'name'=>$v)); } $str = '['.join(',',$options).']'; echo "CustomSearch.setOptionsFor('$joinerClass',".$str.");\n"; }elseif(eval("return $joinerClass::needsField();")){ echo "CustomSearch.setOptionsFor('$joinerClass',[]);\n"; } } ?> </script> <?php } function getNonInputFields(){ return array('exists','name','preset','version'); } function singleFieldHTML($pref,$id,$values){ $prefId = preg_replace('/^.*\[([^]]*)\]$/','\\1',$pref); $pref = $pref."[$id]"; $htmlId = $pref."[exists]"; $output = "<input type='hidden' name='$htmlId' value='1'/>"; $titles="<th>".__('Label','wp-custom-fields-search')."</th>"; $inputs="<td><input type='text' name='$pref"."[label]' value='$values[label]' class='form-field-title'/></td><td><a href='#' onClick='return CustomSearch.get(\"$prefId\").toggleOptions(\"$id\");'>".__('Show/Hide Config','wp-custom-fields-search')."</a></td>"; 
$output.="<table class='form-field-table'><tr>$titles</tr><tr>$inputs</tr></table>"; $output.="<div id='form-field-advancedoptions-$prefId-$id' style='display: none'>"; $inputs='';$titles=''; $titles="<th>".__('Data Field','wp-custom-fields-search')."</th>"; $inputs="<td><div id='form-field-dbname-$prefId-$id' class='form-field-title-div'><input type='text' name='$pref"."[name]' value='$values[name]' class='form-field-title'/></div></td>"; $count=1; foreach(array('joiner'=>__('Data Type','wp-custom-fields-search'),'comparison'=>__('Compare','wp-custom-fields-search'),'input'=>__('Widget','wp-custom-fields-search')) as $k=>$v){ $dd = new AdminDropDown($pref."[$k]",$values[$k],$this->getClasses($k),array('onChange'=>'CustomSearch.get("'.$prefId.'").updateOptions("'.$id.'","'.$k.'")','css_class'=>"wpcfs-$k")); $titles="<th>".$v."</th>".$titles; $inputs="<td>".$dd->getInput()."</td>".$inputs; if(++$count==2){ $output.="<table class='form-field-table form-class-$k'><tr>$titles</tr><tr>$inputs</tr></table>"; $count=0; $inputs = $titles=''; } } if($titles){ $output.="<table class='form-field-table'><tr>$titles</tr><tr>$inputs</tr></table>"; $inputs = $titles=''; } $titles.="<th>".__('Numeric','wp-custom-fields-search')."</th><th>".__('Widget Config','wp-custom-fields-search')."</th>"; $inputs.="<td><input type='checkbox' ".($values['numeric']?"checked='true'":"")." name='$pref"."[numeric]'/></td>"; if(class_exists($widgetClass = $values['input'])){ $widget = new $widgetClass(); if(compat_method_exists($widget,'getConfigForm')) $widgetConfig=$widget->getConfigForm($pref,$values); } $inputs.="<td><div id='$this->id"."-$prefId"."-$id"."-widget-config'>$widgetConfig</div></td>"; $output.="<table class='form-field-table'><tr>$titles</tr><tr>$inputs</tr></table>"; $output.="</div>"; $output.="<a href='#' onClick=\"return CustomSearch.get('$prefId').remove('$id');\">Remove Field</a>"; return "<div class='field-wrapper'>$output</div>"; } function getRootURL(){ return WP_CONTENT_URL .'/plugins/' . dirname(plugin_basename(__FILE__) ) . 
'/'; } function print_admin_scripts($params){ $jsRoot = $this->getRootURL().'js'; $cssRoot = $this->getRootURL().'css'; $scripts = array('Class.js','CustomSearch.js','flexbox/jquery.flexbox.js'); foreach($scripts as $file){ echo "<script src='$jsRoot/$file' ></script>"; } echo "<link rel='stylesheet' href='$cssRoot/admin.css' >"; echo "<link rel='stylesheet' href='$jsRoot/flexbox/jquery.flexbox.css' >"; } function getJoiners(){ return $this->getClasses('joiner'); } function getComparisons(){ return $this->getClasses('comparison'); } function getInputTypes(){ return $this->getClasses('input'); } function getClasses($type){ global $CustomSearchFieldStatic; if(!$CustomSearchFieldStatic['Types']){ $CustomSearchFieldStatic['Types'] = array( "joiner"=>array( "PostDataJoiner" =>__( "Post Field",'wp-custom-fields-search'), "CustomFieldJoiner" =>__( "Custom Field",'wp-custom-fields-search'), "CategoryJoiner" =>__( "Category",'wp-custom-fields-search'), "TagJoiner" =>__( "Tag",'wp-custom-fields-search'), "PostTypeJoiner" =>__( "Post Type",'wp-custom-fields-search'), ), "input"=>array( "TextField" =>__( "Text Input",'wp-custom-fields-search'), "DropDownField" =>__( "Drop Down",'wp-custom-fields-search'), "RadioButtonField" =>__( "Radio Button",'wp-custom-fields-search'), "HiddenField" =>__( "Hidden Constant",'wp-custom-fields-search'), ), "comparison"=>array( "EqualComparison" =>__( "Equals",'wp-custom-fields-search'), "LikeComparison" =>__( "Phrase In",'wp-custom-fields-search'), "WordsLikeComparison" =>__( "Words In",'wp-custom-fields-search'), "LessThanComparison" =>__( "Less Than",'wp-custom-fields-search'), "MoreThanComparison" =>__( "More Than",'wp-custom-fields-search'), "AtMostComparison" =>__( "At Most",'wp-custom-fields-search'), "AtLeastComparison" =>__( "At Least",'wp-custom-fields-search'), "RangeComparison" =>__( "Range",'wp-custom-fields-search'), //TODO: Make this work... 
// "NotEqualComparison" =>__( "Not Equal To",'wp-custom-fields-search'), ) ); $CustomSearchFieldStatic['Types'] = apply_filters('custom_search_get_classes',$CustomSearchFieldStatic['Types']); } return $CustomSearchFieldStatic['Types'][$type]; } function plugin_menu(){ add_options_page('Form Presets','WP Custom Fields Search',8,__FILE__,array(&$this,'presets_form')); } function getPresets(){ $presets = array(); foreach(array_keys($config = $this->getConfig()) as $key){ if(strpos($key,'preset-')===0) { $presets[$key] = $key; if($name = $config[$key]['name']) $presets[$key]=$name; } } return $presets; } function presets_form(){ $presets=$this->getPresets(); if(!$preset = $_REQUEST['selected-preset']){ $preset = 'preset-default'; } if(!$presets[$preset]){ $defaults = $this->getDefaultConfig(); $options = $this->getConfig(); $options[$preset] = $defaults; if($n = $_POST[$this->id][$preset]['name']) $options[$preset]['name'] = $n; elseif($preset=='preset-default') $options[$preset]['name'] = 'Default'; else{ list($junk,$id) = explode("-",$preset); $options[$preset]['name'] = 'New Preset '.$id; } update_option($this->id,$options); $presets[$preset] = $options[$preset]['name']; } if($_POST['delete']){ check_admin_referer($this->id.'-editpreset-'.$preset); $options = $this->getConfig(); unset($options[$preset]); unset($presets[$preset]); update_option($this->id,$options); list($preset,$name) = each($presets); } $index = 1; while($presets["preset-$index"]) $index++; $presets["preset-$index"] = __('New Preset','wp-custom-fields-search'); $linkBase = $_SERVER['REQUEST_URI']; $linkBase = preg_replace("/&?selected-preset=[^&]*(&|$)/",'',$linkBase); foreach($presets as $key=>$name){ $config = $this->getConfig($key); if($config && $config['name']) $name=$config['name']; if(($n = $_POST[$this->id][$key]['name'])&&(!$_POST['delete'])) $name = $n; $presets[$key]=$name; } $plugin=&$this; ob_start(); wp_nonce_field($this->id.'-editpreset-'.$preset); $hidden = ob_get_contents(); $hidden.="<input type='hidden' name='selected-preset' value='$preset'>"; $shouldSave = $_POST['selected-preset'] && !$_POST['delete'] && check_admin_referer($this->id.'-editpreset-'.$preset); ob_end_clean(); include(dirname(__FILE__).'/templates/options.php'); } function process_tag($content){ $regex = '/\[\s*wp-custom-fields-search\s+(?:([^\]=]+(?:\s+.*)?))?\]/'; return preg_replace_callback($regex, array(&$this, 'generate_from_tag'), $content); } function process_shortcode($atts,$content){ return $this->generate_from_tag(array("",$atts['preset'])); } function generate_from_tag($reMatches){ global $CustomSearchFieldStatic; ob_start(); $preset=$reMatches[1]; if(!$preset) $preset = 'default'; wp_custom_fields_search($preset); $form = ob_get_contents(); ob_end_clean(); return $form; } } global $CustomSearchFieldStatic; $CustomSearchFieldStatic['Inputs'] = array(); $CustomSearchFieldStatic['Types'] = array(); class AdminDropDown extends DropDownField { function AdminDropDown($name,$value,$options,$params=array()){ AdminDropDown::__construct($name,$value,$options,$params); } function __construct($name,$value,$options,$params=array()){ $params['options'] = $options; $params['id'] = $params['name']; parent::__construct($params); $this->name = $name; $this->value = $value; } function getHTMLName(){ return $this->name; } function getValue(){ return $this->value; } function getInput(){ return parent::getInput($this->name,null); } } if (!function_exists('json_encode')) { function json_encode($a=false) { if (is_null($a)) return 'null'; if ($a === 
false) return 'false'; if ($a === true) return 'true'; if (is_scalar($a)) { if (is_float($a)) { // Always use "." for floats. return floatval(str_replace(",", ".", strval($a))); } if (is_string($a)) { static $jsonReplaces = array(array("\\", "/", "\n", "\t", "\r", "\b", "\f", '"'), array('\\\\', '\\/', '\\n', '\\t', '\\r', '\\b', '\\f', '\"')); return '"' . str_replace($jsonReplaces[0], $jsonReplaces[1], $a) . '"'; } else return $a; } $isList = true; for ($i = 0, reset($a); $i < count($a); $i++, next($a)) { if (key($a) !== $i) { $isList = false; break; } } $result = array(); if ($isList) { foreach ($a as $v) $result[] = json_encode($v); return '[' . join(',', $result) . ']'; } else { foreach ($a as $k => $v) $result[] = json_encode($k).':'.json_encode($v); return '{' . join(',', $result) . '}'; } } } function wp_custom_fields_search($presetName='default'){ global $CustomSearchFieldStatic; if(strpos($presetName,'preset-')!==0) $presetName="preset-$presetName"; $CustomSearchFieldStatic['Object']->renderWidget(array('widget_id'=>$presetName,'noTitle'=>true),array('number'=>$presetName)); } function compat_method_exists($class,$method){ return method_exists($class,$method) || in_array(strtolower($method),get_class_methods($class)); }

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

        alert.Alert
        alert.Alert_Cleared
        alert.Alert_Comment
        alert.Alert_Severity
        alert.Alert_Type

    In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it.

        SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
        FROM alert.Alert
        ORDER BY AlertId DESC;

        AlertId | AlertType | TargetObject | Read | SubType
        65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
        65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
        65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
        65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
        …

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to.

    Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table:

        SELECT AlertType, Event, Monitoring, Name, Description
        FROM alert.Alert_Type
        ORDER BY AlertType;

        AlertType | Event | Monitoring | Name | Description
        1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration
        2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
        3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
        4 | 1 | 0 | Deadlock | SQL deadlock occurs.
        5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration
        6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
        7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
        8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
        9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
        10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
        11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
        12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
        14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
        15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
        16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
        17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
        18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time.
        19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
        24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
        25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
        27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
        28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
        29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
        30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
        31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
        33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
        34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
        35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server's trace flag cannot be enabled.
        36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
        37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
        38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
        39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
        40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
        41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.

    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

        SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        ORDER BY AlertId DESC;

        AlertId | Name | TargetObject | Read | SubType
        65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
        65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
        65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
        65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
        65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
        65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

        Cluster: Name = "granger"
        SqlServer: Name = "" (an empty string, denoting the default instance)
        Database: Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". It is indeed composed of three parts, one for each level in the hierarchy:

        Cluster: "7:Cluster,1,4:Name,s7:granger,"
        SqlServer: "9:SqlServer,1,4:Name,s0:,"
        Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

        "7:Cluster," – This identifies the level in the hierarchy.
        "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
        "4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It's split up like this:
            "4:Name," – Indicates the name of the key property.
            "s" – Indicates the type of the key property, in this case, a string.
            "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

        "7" – This is the length of the string "Cluster".
        ":" – This is a delimiter between the length of the string and the actual string's contents.
        "Cluster" – This is the string itself. 7 characters.
        "," – This is a final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

        "I" – Denotes a bigint value. For example, "I65432,".
        "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
        "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:

        SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        WHERE AlertId = 65769;

        AlertId | AlertType | Name | TargetObject | Read | SubType
        65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table. I don't really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

        SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
        WHERE AlertId = 65769;

        AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
        65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    The TargetObject value in this case breaks down like this:

        "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
        "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
        "8:Database,1,4:Name,s6:master," – Database named "master".
        "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It's root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I'd like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that's enough for now, not least because as I'm typing this, it's almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I'll either cover the remaining three tables in the alert schema, or I'll delve into the way SQL Monitor stores its monitoring data, as I'd originally planned to cover in this post.
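
    To make the encoding concrete, here is a small C# sketch (mine, not from the SQL Monitor source) that parses a TargetObject value using just the rules described above; the assumption that non-string values are plain comma-terminated tokens is drawn from the "I65432," style examples:

        using System;

        class TargetObjectParser
        {
            // Reads one length-prefixed token: "<length>:<characters>,"
            static string ReadString(string s, ref int pos)
            {
                int colon = s.IndexOf(':', pos);
                int length = int.Parse(s.Substring(pos, colon - pos));
                string value = s.Substring(colon + 1, length);
                pos = colon + 1 + length + 1; // skip the value and its trailing ','
                return value;
            }

            // Reads a plain comma-terminated token, used for key counts and non-string values
            static string ReadRaw(string s, ref int pos)
            {
                int comma = s.IndexOf(',', pos);
                string value = s.Substring(pos, comma - pos);
                pos = comma + 1;
                return value;
            }

            static void Main()
            {
                string target = "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,";
                int pos = 0;

                while (pos < target.Length)
                {
                    string level = ReadString(target, ref pos);         // e.g. "Cluster"
                    int keyCount = int.Parse(ReadRaw(target, ref pos)); // e.g. 1

                    for (int i = 0; i < keyCount; i++)
                    {
                        string property = ReadString(target, ref pos);  // e.g. "Name"
                        char type = target[pos++];                      // 's', 'I', 'g' or 'd'

                        // Strings are length-prefixed; bigint/GUID/datetime values are not
                        string value = type == 's' ? ReadString(target, ref pos) : ReadRaw(target, ref pos);

                        Console.WriteLine("{0}.{1} = '{2}' (type {3})", level, property, value, type);
                    }
                }
            }
        }

    Run against the Backup overdue example, this prints Cluster.Name = 'granger', SqlServer.Name = '' and Database.Name = 'SqlMonitorData'.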


  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the Telecommunication Industry.

1.0 Introduction

Figure 1: Business Scenario

Modern business demands the launch of complex products in a very short timeframe, and the ability to effect changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

2.0 An Overview of Product Management in Telecom

Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, the increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
Figure 2: Complexity in Product Management

'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., from siloed networks to a converged network. Consequently, the product solution is different.

Figure 3: Current Scenario - Pain Points in Product Management

The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

3.0 Transformation of Next Generation Product Management

Companies must focus on their Product Management Transformation Journey in the areas of:

· Management of a single truth of product information across the organization and geographies, which is currently managed in heterogeneous systems

· Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system

· Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation

· Management of effective operational separation to comply with regulatory bodies

· Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

Figure 4: Next generation needs

PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

4.0 PLM-based Enterprise Product Management

Figure 5: PLM-based Enterprise Product Mastering

Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

· Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules

· PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)

· They maintain relationships and links between the different elements of the entire product definition

· Telco PLM packages are specialized for the next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)

· They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional

· The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification

· Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

4.2.a Single Central Product Management (Mastering) Approach

Figure 6: Single Central Product Management (Master) Approach

This approach is implemented across verticals such as aerospace and automotive.
It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

Architectural changes must be made in the existing landscape of business applications that create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules managed and published centrally.

Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

4.2.b Federated Product Management (Mastering) Architecture

Figure 7: Federated Product Management (Mastering) Architecture

In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is created and maintained centrally, while the advanced product definition (product bundling, promotions, offers and discount plans) is created and mastered in the respective downstream OSS systems.

For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the Enterprise/Central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition, and master this advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

5.0 Customer Scenario – Before EPC Deployment

A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse the majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
The challenges in launching the new service offerings were:

Figure 8: Triple Play Plan

· Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)

· Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time over-runs.

· There was no effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue.

· Product data had re-usability issues and was not in a structured format. This resulted in uncontrolled product portfolio creation and product management issues.

· The lack of an enterprise product model led to product distribution challenges and thus delays in product launch.

· Designers were constrained by the existing legacy product management solutions when modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling.

5.1 Customer Scenario – After EPC Deployment

Figure 9: SOA-based End-to-End EPC Solution

After evaluating various product catalogues, the company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service. The Broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for the new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that forms part of the solution.

6.0 How PLM Enabled Product Management Transformation

Figure 10: Product Management Transformation

The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next generation services.

7.0 Conclusion

On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, the introduction of new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero-touch automation and rapid product launch.

References:

1. Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold.

2. John G. Innes, "Achieving Successful Product Change", Pitman Publishing.
3. D. T. Pham and R. M. Setchi (16 Jan 2001), "Authoring environment for documentation development", University of Wales Cardiff, U.K., Proceedings of the Institution of Mechanical Engineers, Vol. 215, Part B.

4. Oracle Product Hub for Communications: http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html


  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
Retrieving uploaded attachments -UCM-

As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a new cool feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM attachments.

But even with this feature, one question arises when using UCM attachments: how can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that. If you invoke that function against a UCM attachment, you'll get a null content response (bug #13907552), even if the attachment was correctly uploaded. Until this bug gets fixed, I will show a workaround that lets me retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with the WCC API from within a BPM process.

Side note: I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure.

Sample Process

I will be using the following sample process: a dummy UserTask using the "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments' payload. In this case, and here's the key point of the sample, we will retrieve that payload using the WebCenter Content WebService API (IDC): and once retrieved, we will write each attachment back to a file on the server using a File Adapter service:

In detail: we will use the same attachmentCollection XSD structure and the same BusinessObject definition as in the previous blog entry. However, we create a separate variable, named attachmentsUCM, based on that BusinessObject. We will still need to keep a copy of the HumanTask output's execData structure, so we create a new variable of type TaskExecutionData (a different one from that used for non-UCM attachments):

As in the non-UCM attachments flow, in the output tab of the UserTask mapping we keep a copy of the execData structure:

Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentsUCM variable with the following information:

The name of each attachment (from the execData/attachment/name element).
The WebCenter Content ID of the uploaded attachment. This info is stored in the execData/attachment/URI element in the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"). We do so with some XPath functions as detailed below:

with these two functions being invoked, respectively:

We, again, set the target payload element to an empty string, to get the <payload></payload> tag created. The complete XSLT transformation is shown below. Remember that we're using the XSLT for-each node to create as many target structures as necessary.

Once we have fed the attachmentsUCM structure, so that it contains the name of each attachment along with its WCC unique id (dID), it is time to iterate through it and get the payload.
To do so, we use a new embedded subprocess of type MultiInstance that iterates over the attachmentsUCM/attachment[] element:

In each iteration we use a Service activity that invokes the WCC API through a WebService. Follow these steps to create and configure the Partner Link needed:

Log in to the WCC console with an administrator user (e.g., weblogic). Go to the Administration menu and click on the "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID, so we'll need that service's WSDL definition, which can be downloaded by clicking the GetFile link. Save the WSDL file in your JDev project folder.

In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference based on the just-added GetFile.wsdl. Name it UCM_GetFile.

WCC services are secured through basic HTTP authentication, so we need to configure the newly created reference accordingly: right-click the reference and click on Configure WS Policies. Under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy.

The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view. Search for the UCM_GetFile entry and add the following highlighted elements into it:

  <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">
    <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>
    <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"
                location="GetFile.wsdl" soapVersion="1.1">
      <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                           orawsp:category="security" orawsp:status="enabled"/>
      <property name="weblogic.wsee.wsat.transaction.flowOption"
                type="xs:string" many="false">WSDLDriven</property>
      <property name="oracle.webservices.auth.username"
                type="xs:string">weblogic</property>
      <property name="oracle.webservices.auth.password"
                type="xs:string">welcome1</property>
    </binding.ws>
  </reference>

Now the new external reference is ready:

Once the reference has been created, we should be able to use it from our BPM process. However, here we run into a problem. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional:

<get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
  <get:dID>?</get:dID>
  <get:rendition>?</get:rendition>
  <get:extraProps>
    <get:property>
      <get:name>?</get:name>
      <get:value>?</get:value>
    </get:property>
  </get:extraProps>
</get:GetFileByID>

and we need to fill in just the <get:dID> element. Due to some kind of restriction or bug in WCC, the rest of the tag elements must NOT be sent, not even empty (i.e., <get:rendition></get:rendition> or <get:rendition/>).
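As a side illustration of that constraint, here is a hedged sketch (not part of the BPM sample itself) of building and sending the same dID-only GetFileByID request from plain Java with the JDK's SAAJ API (javax.xml.soap, bundled with the JDK up to Java 8). SAAJ only emits the elements you explicitly add, so no empty sibling tags are produced. The endpoint URL is an assumption, and the HTTP basic authentication the service requires is omitted for brevity; the envelope body this produces matches the sample request shown next.

import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPMessage;

// Hedged sketch: calls the WCC GetFile service with only the dID element,
// honouring the "no empty tags" restriction described above.
public class GetFileByIdClient {

    public static void main(String[] args) throws Exception {
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();

        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPBody body = request.getSOAPBody();

        // Build <get:GetFileByID><get:dID>12345</get:dID></get:GetFileByID>
        // and nothing else.
        SOAPElement getFileByID = body.addChildElement(
                "GetFileByID", "get", "http://www.stellent.com/GetFile/");
        getFileByID.addChildElement("dID", "get").addTextNode("12345");
        request.saveChanges();

        // Hypothetical endpoint; a real call also needs basic authentication.
        String endpoint = "http://localhost:16200/idcws/GetFile";
        SOAPMessage reply = connection.call(request, endpoint);
        reply.writeTo(System.out);
        connection.close();
    }
}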
A sample request that performs the query just by the dID must be in the following format:

<get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
  <get:dID>12345</get:dID>
</get:GetFileByID>

The issue here is that the simple mapping in BPM does create empty tags, a sample result being as follows:

<get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
  <get:dID>12345</get:dID>
  <get:rendition/>
  <get:extraProps/>
</get:GetFileByID>

Although the above structure is perfectly valid, it is not accepted by WCC, so we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM while getting rid of the empty tags. Follow these steps to configure the Mediator:

Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template, and select the existing GetFile.wsdl.

Double-click the Mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service:

Create the request and reply XSLT mappers:

Make sure you map only the dID element in the request:

And do an Auto-mapper for the whole response:

Finally, we can now add and configure the Service activity in the BPM process. Drag & drop it to the embedded subprocess and select the NormalizedGetFile service and the getFileByID operation:

Map both the input:

...and the output:

Once this embedded subprocess ends, we will have all the attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end, we include a final embedded subprocess that concurrently iterates through each attachmentsUCM/attachment[] element:

On each iteration we use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself: the File Adapter expects binary data in base64 format (string), so we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target):

Second, we must set the target filename using the Service Properties dialog box:

Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. It will come soon. Again, once I finish all the posts on this matter, I will upload the whole sample project to java.net.


  • check/Uncheck multiple checkboxes together + j2me + Blackberry

    - by SWATI
If I click on any checkbox, all previous checkboxes must get checked (that part of my logic works). If I uncheck a checkbox, then all checkboxes after it must get unchecked: how do I do that? My logic works on the Storm but not on other models. What I want to do is this: I have 5 checkboxes.

import net.rim.device.api.ui.Field;
import net.rim.device.api.ui.FieldChangeListener;
import net.rim.device.api.ui.component.CheckboxField;
import net.rim.device.api.ui.container.HorizontalFieldManager;
import net.rim.device.api.ui.container.MainScreen;

class MyScreen extends MainScreen {

    CheckboxField[] chk_service;

    public MyScreen() {
        chk_service = new CheckboxField[5];
        chk_service[0] = new CheckboxField("1", true);
        chk_service[1] = new CheckboxField("2", false);
        chk_service[2] = new CheckboxField("3", false);
        chk_service[3] = new CheckboxField("4", false);
        chk_service[4] = new CheckboxField("5", false);

        CheckboxFieldChangeListener obj = new CheckboxFieldChangeListener(chk_service);
        HorizontalFieldManager hm4 = new HorizontalFieldManager();
        for (int i = 0; i < chk_service.length; i++) {
            chk_service[i].setChangeListener(obj);
            hm4.add(chk_service[i]);
        }
        add(hm4);
    }
}

class CheckboxFieldChangeListener implements FieldChangeListener {

    private final CheckboxField[] m_arrFields;

    public CheckboxFieldChangeListener(CheckboxField[] arrFields) {
        m_arrFields = arrFields;
    }

    public void fieldChanged(Field field, int context) {
        // Ignore the events generated by our own setChecked() calls below,
        // otherwise they would retrigger this listener.
        if (context == FieldChangeListener.PROGRAMMATIC) {
            return;
        }

        // Locate the changed checkbox by identity in the array. Avoid
        // field.getIndex(): that is the index within the manager, which is
        // not guaranteed to match the array index on every device model.
        int changed = -1;
        for (int i = 0; i < m_arrFields.length; i++) {
            if (m_arrFields[i] == field) {
                changed = i;
                break;
            }
        }
        if (changed == -1) {
            return;
        }

        if (((CheckboxField) field).getChecked()) {
            // Checked: check every checkbox before this one as well.
            for (int i = 0; i < changed; i++) {
                m_arrFields[i].setChecked(true);
            }
        } else {
            // Unchecked: uncheck every checkbox after this one.
            for (int i = changed + 1; i < m_arrFields.length; i++) {
                m_arrFields[i].setChecked(false);
            }
        }
    }
}


  • WCF Error: the client and service bindings may be mismatched?

    - by Rev
Hi. Let's look at the server config and the client config, then help me find the difference between these configs!

Client config

<system.serviceModel>
  <client>
    <endpoint address="http://localhost/admin2/AdminCentralService.svc"
              binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
              contract="TIR.ThreeTier.ICommandInvoker" name="AdminCentralServiceConfig" />
    <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
              contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" />
  </client>
  <bindings>
    <wsHttpBinding>
      <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
               receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
               transactionFlow="false" hostNameComparisonMode="StrongWildcard"
               maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
               messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
               allowCookies="false">
        <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                      maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                      maxNameTableCharCount="2147483647" />
        <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
        <security mode="Message">
          <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
          <message clientCredentialType="Windows" negotiateServiceCredential="true"
                   algorithmSuite="Default" establishSecurityContext="true" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>

Server config

<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="AdminCentral.Business.Web.Service1Behavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="false" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <bindings>
    <wsHttpBinding>
      <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
               receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
               transactionFlow="false" hostNameComparisonMode="StrongWildcard"
               maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
               messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
               allowCookies="false">
        <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                      maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                      maxNameTableCharCount="2147483647" />
        <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
        <security mode="Message">
          <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
          <message clientCredentialType="Windows" negotiateServiceCredential="true"
                   algorithmSuite="Default" establishSecurityContext="true" />
        </security>
      </binding>
    </wsHttpBinding>
  </bindings>
  <services>
    <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior"
             name="AdminCentral.Business.Web.AdminCentralService">
      <endpoint address="" binding="wsHttpBinding"
                contract="AdminCentral.Business.Web.ICommandInvoker">
        <identity>
          <dns value="localhost" />
        </identity>
      </endpoint>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>


  • function using cl-who:with-html-output ignoring parameter

    - by shanked
I'm not sure whether this is an issue with my use of cl-who (specifically with-html-output-to-string and with-html-output) or an issue with my understanding of Common Lisp (this is my first project using Lisp). I created a function to generate form fields:

(defun form-field (type name label)
  (cl-who:with-html-output (*standard-output* nil)
    (:div :class "field"
          (:label :for name label)
          (:input :type type :name name))))

When using this function, i.e., (form-field "text" "username" "Username"), the label parameter seems to be ignored... the HTML output is:

<div class="field"><label for="username"></label>
<input type="text" name="username"/></div>

instead of the expected output:

<div class="field"><label for="username">Username</label>
<input type="text" name="username"/></div>

If I modify the function and add a print statement:

(defun form-field (type name label)
  (cl-who:with-html-output (*standard-output* nil)
    (print label)
    (:div :class "field"
          (:label :for name label)
          (:input :type type :name name))))

the "Username" string is successfully output (but is still ignored in the HTML)... any ideas what might cause this? Keep in mind, I'm calling this function within a cl-who:with-html-output-to-string for use with Hunchentoot.


  • "jpeglib.h: No such file or directory" ghostscript port in OPENBSD

    - by holms
Hello. I have a problem compiling Ghostscript from ports on OpenBSD 4.7. I have jpeg-7 installed, and I have the latest ports tree for OpenBSD 4.7.

===> Building for ghostscript-8.63p11
mkdir -p /usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63/obj
gmake LDFLAGS='-L/usr/local/lib -shared' GS_XE=./obj/../obj/libgs.so.11.0 STDIO_IMPLEMENTATION=c DISPLAY_DEV=./obj/../obj/display.dev BINDIR=./obj/../obj GLGENDIR=./obj/../obj GLOBJDIR=./obj/../obj PSGENDIR=./obj/../obj PSOBJDIR=./obj/../obj CFLAGS='-O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\"' prefix=/usr/local ./obj/../obj/gsc
gmake[1]: Entering directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63'
cc -I./obj/../obj -I./src -DHAVE_MKSTEMP -O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\" -DGX_COLOR_INDEX_TYPE='unsigned long long' -o ./obj/../obj/sdctc.o -c ./src/sdctc.c
In file included from src/sdctc.c:17:
obj/jpeglib_.h:1:21: jpeglib.h: No such file or directory
In file included from src/sdctc.c:19:
src/sdct.h:58: error: field `err' has incomplete type
src/sdct.h:70: error: field `err' has incomplete type
src/sdct.h:72: error: field `cinfo' has incomplete type
src/sdct.h:73: error: field `destination' has incomplete type
src/sdct.h:84: error: field `err' has incomplete type
src/sdct.h:87: error: field `dinfo' has incomplete type
src/sdct.h:88: error: field `source' has incomplete type
gmake[1]: *** [obj/../obj/sdctc.o] Error 1
gmake[1]: Leaving directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63'
gmake: *** [so] Error 2
*** Error code 2
Stop in /usr/ports/print/ghostscript/gnu (line 2225 of /usr/ports/infrastructure/mk/bsd.port.mk).

I tried adding one more parameter to CFLAGS in the Makefile with the value "-I/usr/local", but no luck =( People on IRC [Freenode, #openbsd] refuse to give any help for ports at all, and even more so because this is the 4.7 unstable version. I have my reasons to use this version and these ports, believe me =)

CFLAGS+= -DSYS_TYPES_HAS_STDINT_TYPES \
         -I${LOCALBASE}/include \
         -I${LOCALBASE}/include/ijs \
         -I${LOCALBASE}/include/libpng \


  • Does my API design violate RESTful principles?

    - by peta
Hello everybody, I'm currently trying to design a RESTful API for a social network, but I'm not sure whether my current approach still accords with RESTful principles. I'd be glad if some brighter heads could give me some tips.

Suppose the following URI represents the name field of a user account:

people/{UserID}/profile/fields/name

But there are almost a hundred possible fields, so I want the client to create its own field views, or use predefined ones. Let's suppose that the following URI represents a predefined field view that includes the fields "name", "age" and "gender":

utils/views/field-views/myFieldView

And because field views are a kind of higher-level logic, I don't want to mix support for field views into the "people/{UserID}/profile/fields" resource. Instead, I want to do the following:

utils/views/field-views/myFieldView/{UserID}

Though Leonard Richardson & Sam Ruby state in their book "RESTful Web Services" that a RESTful design is somehow like an "extreme object oriented" approach, I think that my approach is object oriented and therefore accords with RESTful principles. Or am I wrong? If not: are such "object oriented" approaches generally encouraged when used with care, in order to avoid query-based REST-RPC hybrids?

Thanks for your feedback in advance, peta
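For concreteness, the field-view resource described in the question could be sketched as below. This is a purely illustrative, hedged JAX-RS outline (all class, method and helper names are assumptions, not taken from the question); storage and lookup are stubbed out, since only the resource shape matters here.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A view is a named, predefined subset of profile fields applied to a user:
// GET utils/views/field-views/{viewName}/{userId}
@Path("utils/views/field-views")
@Produces(MediaType.APPLICATION_JSON)
public class FieldViewResource {

    @GET
    @Path("{viewName}/{userId}")
    public Map<String, String> applyView(@PathParam("viewName") String viewName,
                                         @PathParam("userId") String userId) {
        List<String> fields = lookupViewDefinition(viewName); // e.g. [name, age, gender]
        Map<String, String> result = new LinkedHashMap<String, String>();
        for (String field : fields) {
            result.put(field, lookupProfileField(userId, field));
        }
        return result;
    }

    // Hypothetical helpers standing in for real persistence.
    private List<String> lookupViewDefinition(String viewName) {
        throw new UnsupportedOperationException("stubbed for this sketch");
    }

    private String lookupProfileField(String userId, String field) {
        throw new UnsupportedOperationException("stubbed for this sketch");
    }
}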


  • How do I match complete XML objects in a string?

    - by cyclotis04
I'm attempting to find complete XML objects in a string. They have been placed in the string by an XmlSerializer, but may or may not be complete. I've toyed with the idea of using a regular expression, because it seems like the kind of thing they were built for, except for the fact that I'm trying to parse XML. I'm trying to find complete objects in the form:

<?xml version="1.0"?>
<type>
  <field>value</field>
  ...
</type>

My thought was a regex to find <?xml version="1.0"?><type> and </type>, but if a field has the same name as the type, it obviously won't work. There's plenty of documentation on XML parsers, but they all seem to need a complete, well-formed document to parse. My XML objects can be in a string surrounded by pretty much anything else (including other complete objects):

hw<e>reR@lot$0fr@ndm&nchrs%<?xml version="1.0"?><type><field>...</field>...</type>@ndH#r$omOre!!>nuT6erjc?y!<?xml version="1.0"?><type><field>...</field>...</type>ty!=]

A regex would be able to match a string while excluding the random characters, but not find a complete XML object. I'd like some way to extract an object, parse it with a serializer, then repeat until the string contains no more valid objects.
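The question concerns .NET's XmlSerializer, but the extraction idea itself is language-neutral. Below is a hedged sketch in Java of one possible approach: anchor on each XML declaration, then try successively longer candidate substrings and let a real XML parser decide when a candidate is a complete, well-formed document. Letting the parser judge also copes with a <field> that shares the root element's name, since nesting is tracked for free.

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Scans a noisy string for complete XML documents. Every document starts
// at an XML declaration; for each start, try each subsequent closing tag
// as the candidate end and accept the first substring that parses.
public class XmlObjectExtractor {

    private static final String DECL = "<?xml version=\"1.0\"?>";

    public static List<String> extract(String noisy) throws Exception {
        DocumentBuilder parser = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        // Quiet handler: rejected candidates should not spam the console.
        parser.setErrorHandler(new DefaultHandler());

        List<String> found = new ArrayList<String>();
        int start = noisy.indexOf(DECL);
        while (start >= 0) {
            int end = noisy.indexOf("</", start);
            while (end >= 0) {
                int close = noisy.indexOf('>', end);
                if (close < 0) {
                    break;
                }
                String candidate = noisy.substring(start, close + 1);
                try {
                    parser.parse(new InputSource(new StringReader(candidate)));
                    found.add(candidate); // well-formed: a complete object
                    start = close;        // resume the scan after it
                    break;
                } catch (Exception notYetComplete) {
                    end = noisy.indexOf("</", close); // try the next closing tag
                }
            }
            start = noisy.indexOf(DECL, start + 1);
        }
        return found;
    }

    public static void main(String[] args) throws Exception {
        String s = "hw<e>reR@lot$0f" + DECL + "<type><field>a</field></type>nois"
                 + DECL + "<type><field>b</field></type>ty!=]";
        for (String xml : extract(s)) {
            System.out.println(xml);
        }
    }
}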


  • PLPGSQL : How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
I have the following schema for my database:

create sequence data_sequence;

create table data_table (
  id integer primary key,
  field varchar(100)
);

create view data_view as
  select id, field from data_table;

create function data_insert(_new data_view) returns data_view as $$
declare
  _id integer;
  _result data_view%rowtype;
begin
  _id := nextval('data_sequence');
  insert into data_table(id, field) values(_id, _new.field);
  select * into _result from data_view where id = _id;
  return _result;
end;
$$ language plpgsql;

create rule insert as on insert to data_view do instead select data_insert(new);

Then I type in psql:

insert into data_view(field) values('abc');

I would like to see something like:

 id | field
----+-------
  1 | abc

Instead I see:

 data_insert
-------------
 (1, "abc")

Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I could obtain the id of a just-inserted record without selecting for it from scratch. Something like:

insert into data_view(field) values('abc') returning id into my_variable

would be nice, but it doesn't work, giving the error:

ERROR: cannot perform INSERT RETURNING on relation "data_view"
HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

I don't really understand that HINT. I use PostgreSQL 8.4.

