Search Results

Search found 64909 results on 2597 pages for 'service application'.


  • ControlTemplate Exception: "XamlParseException: Cannot find a Resource with the Name/Key"

    - by akaphenom
    If I move the application resources to a UserControl's resources, everything runs groovy. I still don't understand why. I noticed that my application object MyApp did more than inherit from Application: it loaded a XAML for the main template and connected all the plumbing. So I decided to create a user control to remove the template from the application (thinking there may be an event-order issue not allowing my resource to be found).

        namespace Module1

        type Template() as this =
            //inherit UriUserControl("/FSSilverlightApp;component/template.xaml", "")
            inherit UriUserControl("/FSSilverlightApp;component/templateSimple.xaml", "")
            do Application.LoadComponent(this, base.uri)
            let siteTemplate : Grid = (this.Content :?> FrameworkElement) ? siteTemplate
            let nav : Frame = siteTemplate ? contentFrame
            let pages : UriUserControl array =
                [| new Module1.Page1() :> UriUserControl ;
                   new Module1.Page2() :> UriUserControl ;
                   new Module1.Page3() :> UriUserControl ;
                   new Module1.Page4() :> UriUserControl ;
                   new Module1.Page5() :> UriUserControl |]
            do nav.Navigate((pages.[0] :> INamedUriProvider).Uri) |> ignore

        type MyApp() as this =
            inherit Application()
            do Application.LoadComponent(this, new System.Uri("/FSSilverlightApp;component/App.xaml", System.UriKind.Relative))
            do System.Windows.Browser.HtmlPage.Plugin.Focus()
            do this.RootVisual <- new Template()
            // test code to check for the existence of the ControlTemplate - it exists
            let a = Application.Current.Resources.MergedDictionaries
            let b = a.[0]
            let c = b.Count
            let d : ControlTemplate = downcast b.["TransitioningFrame"]
            ()

    "/FSSilverlightApp;component/templateSimple.xaml":

        <UserControl x:Class="Module1.Template"
            xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <Grid HorizontalAlignment="Center" Background="White" Name="siteTemplate">
                <StackPanel Grid.Row="3" Grid.Column="2" Name="mainPanel">
                    <!--Template="{StaticResource TransitioningFrame}"-->
                    <navigation:Frame Name="contentFrame" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
                        Template="{StaticResource TransitioningFrame}"/>
                </StackPanel>
            </Grid>
        </UserControl>

    "/FSSilverlightApp;component/App.xaml":

        <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="Module1.MyApp">
            <Application.Resources>
                <ResourceDictionary>
                    <ResourceDictionary.MergedDictionaries>
                        <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml" />
                    </ResourceDictionary.MergedDictionaries>
                </ResourceDictionary>
            </Application.Resources>
        </Application>

    "/FSSilverlightApp;component/TransitioningFrame.xaml":

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame">
                <Border Background="Olive" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="5"
                    HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                    VerticalAlignment="{TemplateBinding VerticalContentAlignment}">
                    <ContentPresenter Cursor="{TemplateBinding Cursor}"
                        HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                        Margin="{TemplateBinding Padding}"
                        VerticalAlignment="{TemplateBinding VerticalContentAlignment}"
                        Content="{TemplateBinding Content}"/>
                </Border>
            </ControlTemplate>
        </ResourceDictionary>

    Unfortunately that did not work. If I remove the Template attribute from the navigation:Frame element, the app loads and directs the content area to the first page in the pages array. Referencing that resource continues to throw a "resource not found" error.

    Original Post

    I have the following App.xaml (using Silverlight 3):

        <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            x:Class="Module1.MyApp">
            <Application.Resources>
                <ResourceDictionary>
                    <ResourceDictionary.MergedDictionaries>
                        <ResourceDictionary Source="/FSSilverlightApp;component/TransitioningFrame.xaml" />
                    </ResourceDictionary.MergedDictionaries>
                </ResourceDictionary>
            </Application.Resources>
        </Application>

    and content template:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:navigation="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Navigation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <ControlTemplate x:Key="TransitioningFrame" TargetType="navigation:Frame">
                <Border Background="{TemplateBinding Background}"
                    BorderBrush="{TemplateBinding BorderBrush}"
                    BorderThickness="{TemplateBinding BorderThickness}"
                    HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                    VerticalAlignment="{TemplateBinding VerticalContentAlignment}">
                    <ContentPresenter Cursor="{TemplateBinding Cursor}"
                        HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                        Margin="{TemplateBinding Padding}"
                        VerticalAlignment="{TemplateBinding VerticalContentAlignment}"
                        Content="{TemplateBinding Content}"/>
                </Border>
            </ControlTemplate>
        </ResourceDictionary>

    The debugger says the ControlTemplate is loaded correctly, verified by adding some minimal code:

        type MyApp() as this =
            inherit Application()
            do Application.LoadComponent(this, new System.Uri("/FSSilverlightApp;component/App.xaml", System.UriKind.Relative))
            let cc = new ContentControl()
            let mainGrid : Grid = loadXaml("MainWindow.xaml")
            do this.Startup.Add(this.startup)
            let t = Application.Current.Resources.MergedDictionaries
            let t1 = t.[0]
            let t2 = t1.Count
            let t3 : ControlTemplate = downcast t1.["TransitioningFrame"]

    With this line in my main.xaml:

        <navigation:Frame Name="contentFrame" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
            Template="{StaticResource TransitioningFrame}"/>

    it yields this exception:

        Webpage error details
        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
        Timestamp: Mon, 24 May 2010 23:10:15 UTC
        Message: Unhandled Error in Silverlight Application
        Code: 4004
        Category: ManagedRuntimeError
        Message: System.Windows.Markup.XamlParseException: Cannot find a Resource with the Name/Key TransitioningFrame [Line: 86 Position: 115]
           at MS.Internal.XcpImports.CreateFromXaml(String xamlString, Boolean createNamescope, Boolean requireDefaultNamespace, Boolean allowEventHandlers, Boolean expandTemplatesDuringParse)
           at MS.Internal.XcpImports.CreateFromXaml(String xamlString, Boolean createNamescope, Boolean requireDefaultNamespace, Boolean allowEventHandlers)
           at System.Windows.Markup.XamlReader.Load(String xaml)
           at Globals.loadXaml[T](String xamlPath)
           at Module1.MyApp..ctor()
        Line: 54 Char: 13
        Code: 0
        URI: file:///C:/fsharp/FSSilverlightDemo/FSSilverlightApp/bin/Debug/SilverlightApplication2TestPage.html


  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

    alert.Alert
    alert.Alert_Cleared
    alert.Alert_Comment
    alert.Alert_Severity
    alert.Alert_Type

    In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it.

        SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
        FROM alert.Alert
        ORDER BY AlertId DESC;

    AlertId | AlertType | TargetObject | Read | SubType
    65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
    65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
    65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    …

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to.

    Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table:

        SELECT AlertType, Event, Monitoring, Name, Description
        FROM alert.Alert_Type
        ORDER BY AlertType;

    AlertType | Event | Monitoring | Name | Description
    1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration.
    2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
    3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
    4 | 1 | 0 | Deadlock | SQL deadlock occurs.
    5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration.
    6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
    7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
    8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
    9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
    10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
    11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
    12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
    14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
    15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
    16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
    17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
    18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for the dbi_dbccLastKnownGood field), or the last check is older than a specified time.
    19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
    24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
    25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
    27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
    28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
    29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
    30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
    31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
    33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
    34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
    35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server's trace flag cannot be enabled.
    36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
    37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
    38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
    39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
    40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
    41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

        SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
        FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        ORDER BY AlertId DESC;

    AlertId | Name | TargetObject | Read | SubType
    65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
    65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
    65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

    Cluster: Name = "granger"
    SqlServer: Name = "" (an empty string, denoting the default instance)
    Database: Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:

    Cluster: "7:Cluster,1,4:Name,s7:granger,"
    SqlServer: "9:SqlServer,1,4:Name,s0:,"
    Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

    "7:Cluster," – This identifies the level in the hierarchy.
    "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
    "4:Name,s7:granger," – This represents the Name property and its corresponding value, granger. It's split up like this:
        "4:Name," – Indicates the name of the key property.
        "s" – Indicates the type of the key property; in this case, it's a string.
        "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

    "7" – This is the length of the string "Cluster".
    ":" – This is a delimiter between the length of the string and the actual string's contents.
    "Cluster" – This is the string itself. 7 characters.
    "," – This is a final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

    "I" – Denotes a bigint value. For example, "I65432,".
    "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
    "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:

        SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
        FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        WHERE AlertId = 65769;

    AlertId | AlertType | Name | TargetObject | Read | SubType
    65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
    I don't really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

        SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
        WHERE AlertId = 65769;

    AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
    65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    The TargetObject value in this case breaks down like this:

    "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
    "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
    "8:Database,1,4:Name,s6:master," – Database named "master".
    "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It's root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I'd like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case.

    Okay, that's enough for now, not least because as I'm typing this, it's almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I'll either cover the remaining three tables in the alert schema, or I'll delve into the way SQL Monitor stores its monitoring data, as I'd originally planned to cover in this post.
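    Postscript: the encoding lends itself to a simple recursive-descent parse outside T-SQL. A minimal sketch in F#, assuming exactly the length-prefixed names, "s"-prefixed string values, and comma-terminated I/g/d values described above (the function names and result shape are illustrative, not part of SQL Monitor):

        // Reads "<len>:<chars>," starting at pos; returns (value, nextPos).
        let readLengthPrefixed (s : string) pos =
            let colon = s.IndexOf(':', pos)
            let len = int (s.Substring(pos, colon - pos))
            s.Substring(colon + 1, len), colon + 1 + len + 1   // final +1 skips the ','

        // Reads a typed value: 's' values are length-prefixed, while
        // 'I' (bigint), 'g' (guid) and 'd' (datetime) run up to the next comma.
        let readValue (s : string) pos =
            match s.[pos] with
            | 's' ->
                let v, next = readLengthPrefixed s (pos + 1)
                ("s", v), next
            | c ->
                let comma = s.IndexOf(',', pos + 1)
                (string c, s.Substring(pos + 1, comma - pos - 1)), comma + 1

        // Parses e.g. "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,"
        // into [("Cluster", [("Name", ("s","granger"))]); ("SqlServer", [("Name", ("s",""))])].
        let parseTargetObject (s : string) =
            let rec levels pos acc =
                if pos >= s.Length then List.rev acc
                else
                    let level, pos = readLengthPrefixed s pos
                    let comma = s.IndexOf(',', pos)
                    let count = int (s.Substring(pos, comma - pos))   // key property count
                    let rec keys n pos acc =
                        if n = 0 then List.rev acc, pos
                        else
                            let name, pos = readLengthPrefixed s pos
                            let value, pos = readValue s pos
                            keys (n - 1) pos ((name, value) :: acc)
                    let props, pos = keys count (comma + 1) []
                    levels pos ((level, props) :: acc)
            levels 0 []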


  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence at the BSS layer, product co-creation, and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the Telecommunication Industry.

    1.0. Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe, and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

    The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0. An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

    In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management, and are a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e. from siloed networks to converged networks. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0. Transformation of Next Generation Product Management

    Companies must focus on their product management transformation journey in the areas of:

    · Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    · Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system
    · Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    · Management of effective operational separation to comply with regulatory bodies
    · Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler towards product management transformation and rapid product launch.

    4.0. PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attribute data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    · Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition, and control of all product data and rules
    · PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    · They maintain relationships/links between the different elements of the entire product definition
    · Telco PLM packages are specialized in the next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    · They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order-oriented and transactional
    · The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    · Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment, and the product definition/authoring environment is centralized too. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications that create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published.

    Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) will be centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) will be created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition, and master the advanced product definition. The central reference database accesses the respective source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario – Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    · Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    · Product managers spent a lot of time performing tasks involving duplication or re-keying of data; manual effort caused errors, and cost and time over-runs
    · There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    · Product data had re-usability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    · The lack of an enterprise product model resulted in product distribution challenges, and thus delays in product launch
    · Designers were constrained by existing legacy product management solutions in modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross-selling

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service, after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for the new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that forms part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of product management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products; on the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, the introduction of new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large-scale IT transformation initiatives that improve ARPU by supporting network and business transformations are a business imperative. Product management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.


  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments (UCM)

    As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a new cool feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for the human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM attachments. But even with this other feature, one question might arise when using UCM attachments: how can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that. If you invoke such a function against a UCM attachment, you'll get a null content response (bug#13907552), even if the attachment was correctly uploaded. While this bug gets fixed, next I will show a workaround that lets me retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with the WCC API from within a BPM process.

    Aside note: I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure.

    Sample Process

    I will be using the following sample process: a dummy UserTask using the "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments' payload. In this case, and here's the key point of the sample, we will retrieve such payload using the WebCenter Content WebService API (IDC), and once retrieved, we will write each attachment back to a file on the server using a File Adapter service.

    In detail: we will use the same attachmentCollection XSD structure and the same BusinessObject definition as in the previous blog entry. However, we create a separate variable, named attachmentUCM, based on that BusinessObject. We will still need to keep a copy of the HumanTask output's execData structure, so we create a new variable of type TaskExecutionData (a different one than that used for non-UCM attachments). As in the non-UCM attachments flow, in the output tab of the UserTask mapping, we'll keep a copy of the execData structure.

    Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentUCM variable with the following information:

    - The name of each attachment (from the execData/attachment/name element).
    - The WebCenter Content ID of the uploaded attachment. This info is stored in the execData/attachment/URI element in the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"), which we do with a couple of XPath string functions (a sketch of the kind of expressions involved appears at the end of this post).

    We again set the target payload element to an empty string, to get the <payload></payload> tag created. The complete XSLT transformation uses an XSLT for-each node to create as many target structures as necessary.

    Once we have fed the attachmentsUCM structure, so that it now contains the name of each of the attachments along with each WCC unique id (dID), it is time to iterate through it and get the payload.
    Therefore we will use a new embedded subprocess of type MultiInstance, which will iterate over the attachmentsUCM/attachment[] element. In each iteration we will use a Service activity that invokes the WCC API through a web service. Follow these steps to create and configure the Partner Link needed:

    - Log in to the WCC console with an administrator user (i.e. weblogic). Go to the Administration menu and click on the "Soap Wsdls" link.
    - We will use the GetFile service to retrieve a file based on its dID. Thus we'll need that service's WSDL definition, which can be downloaded by clicking the GetFile link. Save the WSDL file in your JDev project folder.
    - In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference, based on the just-added GetFile.wsdl. Name it UCM_GetFile.
    - WCC services are secured through basic HTTP authentication, so we need to enable the just-created reference for that: right-click the reference and click on Configure WS Policies; under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy.
    - The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view, search for the UCM_GetFile entry and add the following highlighted elements into it:

        <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">
          <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>
          <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"
                      location="GetFile.wsdl" soapVersion="1.1">
            <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                                 orawsp:category="security" orawsp:status="enabled"/>
            <property name="weblogic.wsee.wsat.transaction.flowOption"
                      type="xs:string" many="false">WSDLDriven</property>
            <property name="oracle.webservices.auth.username"
                      type="xs:string">weblogic</property>
            <property name="oracle.webservices.auth.password"
                      type="xs:string">welcome1</property>
          </binding.ws>
        </reference>

    Now the new external reference is ready, and we should be able to use it from our BPM process. However, we find a problem here. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>?</get:dID>
          <get:rendition>?</get:rendition>
          <get:extraProps>
            <get:property>
              <get:name>?</get:name>
              <get:value>?</get:value>
            </get:property>
          </get:extraProps>
        </get:GetFileByID>

    and we need to fill up just the <get:dID> tag element. Due to some kind of restriction or bug in WCC, the rest of the tag elements must NOT be sent, not even empty (i.e. <get:rendition></get:rendition> or <get:rendition/>).
    A sample request that performs the query just by the dID must be in the following format:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>12345</get:dID>
        </get:GetFileByID>

    The issue here is that the simple mapping in BPM does create empty tags, a sample result being as follows:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>12345</get:dID>
          <get:rendition/>
          <get:extraProps/>
        </get:GetFileByID>

    Although the above structure is perfectly valid, it is not accepted by WCC. Therefore, we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM but gets rid of the empty tags. Follow these steps to configure the Mediator:

    - Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template and select the existing GetFile.wsdl.
    - Double-click the mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.
    - Create the request and reply XSLT mappers, making sure you map only the dID element in the request, and do an auto-map for the whole response.

    Finally, we can now add and configure the Service activity in the BPM process. Drag & drop it to the embedded subprocess, select the NormalizedGetFile service and the getFileByID operation, and map both the input and the output.

    Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsUCM/attachment[] element. On each iteration we will use a Service activity that invokes a File Adapter write service. In here we have two important parameters to set. First, the payload itself: the file adapter expects binary data in base64 format (string), so we have to map it using XPath (simple mapping doesn't accept a string as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    The final blog entry about attachments will handle how to inject documents into Human Tasks from the BPM process, and how to share attachments between different User Tasks. It will come soon. Again, once I finish with all posts on this matter, I will upload the whole sample project to java.net.
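    For reference, a minimal sketch of the id-stripping transformation mentioned earlier (the exact expressions were shown in the authoring environment, so the namespace prefixes and element names here are illustrative assumptions, not the literal sample code):

        <!-- Illustrative only: for each source attachment, copy the name,
             strip the "ecm://" prefix from the URI to obtain the dID, and
             create an empty payload element. -->
        <xsl:for-each select="/task:executionData/task:attachment">
          <ns0:attachment>
            <ns0:name>
              <xsl:value-of select="task:name"/>
            </ns0:name>
            <ns0:dID>
              <xsl:value-of select="substring-after(task:URI, 'ecm://')"/>
            </ns0:dID>
            <ns0:payload/>
          </ns0:attachment>
        </xsl:for-each>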


  • How to programmatically start a WPF application from a unit test?

    - by Lernkurve
    Problem

    VS2010 and TFS2010 support creating so-called Coded UI Tests. All the demos I have found start with the WPF application already running in the background when the Coded UI Test begins, or the EXE is started using an absolute path to it. I, however, would like to start my WPF application under test from the unit test code. That way it'll also work on the build server and on my peers' working copies. How do I accomplish that?

    My discoveries so far

    a) This post shows how to start a XAML window. But that's not what I want. I want to start the App.xaml, because it contains XAML resources and there is application logic in the code-behind file.

    b) The second screenshot on this post shows a line starting with

        ApplicationUnderTest calculatorWindow = ApplicationUnderTest.Launch(...);

    which is conceptually pretty much what I am looking for, except that again this example uses an absolute path to the executable file.

    c) A Google search for "Programmatically start WPF" didn't help either.
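    One hedged sketch of the idea from (b), resolving the EXE path relative to the test deployment directory instead of hard-coding it (the folder layout and project names below are hypothetical placeholders, not taken from the question):

        open System
        open System.IO
        open Microsoft.VisualStudio.TestTools.UITesting

        let launchApplicationUnderTest () =
            // Resolve the EXE relative to wherever the test assembly runs, so
            // the same code works on the build server and on each working copy.
            let baseDir = AppDomain.CurrentDomain.BaseDirectory
            // Hypothetical relative layout: adjust to the actual solution structure.
            let exePath =
                Path.GetFullPath(Path.Combine(baseDir, @"..\..\..\MyWpfApp\bin\Debug\MyWpfApp.exe"))
            ApplicationUnderTest.Launch(exePath)

    Another common variant is to copy the EXE into the test deployment folder with MSTest's DeploymentItem attribute and launch it from there.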


  • WCF Error: the client and service bindings may be mismatched?

    - by Rev
    Hi. Let's see the server config and the client config, then help me find the difference between these configs!!

    Client config

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost/admin2/AdminCentralService.svc"
                      binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
                      contract="TIR.ThreeTier.ICommandInvoker" name="AdminCentralServiceConfig" />
            <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
                      contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" />
          </client>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>

    Server Config

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AdminCentral.Business.Web.Service1Behavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647"/>
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/>
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/>
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true"/>
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <services>
            <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior"
                     name="AdminCentral.Business.Web.AdminCentralService">
              <endpoint address="" binding="wsHttpBinding"
                        contract="AdminCentral.Business.Web.ICommandInvoker">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
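    One difference that stands out in the configs as posted (an observation on the snippets above, not a confirmed fix): the client endpoints reference bindingConfiguration="WSHttpBinding_Config", but the server's application endpoint declares no bindingConfiguration at all, so it runs with the wsHttpBinding defaults (Text encoding, default quotas) instead of the customized Mtom settings. That alone would make the client and service bindings mismatch. A sketch of the server endpoint with the binding configuration applied:

        <endpoint address="" binding="wsHttpBinding"
                  bindingConfiguration="WSHttpBinding_Config"
                  contract="AdminCentral.Business.Web.ICommandInvoker">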


  • Netbeans / persistence API error

    - by Danny
    I am following this tutorial, using NetBeans 6.5.1: http://www.netbeans.org/kb/docs/java/gui-db-custom.html

    When I get to the part where I create the "new entity class from database", which is in the "Customizing the Master/Detail View" section of the tutorial, I can't ever compile, because I get this error in the task list (and I get a runtime error when I run)... at least I think these two are related.

        Error: Named queries can be defined only on an Entity or MappedSuperclass class. Countries.java 27 C:/Users/Danny/dev/NetBeansProjects/Test/src/test
        Error: Named queries can be defined only on an Entity or MappedSuperclass class. Products.java 28 C:/Users/Danny/dev/NetBeansProjects/Test/src/test

    Note that all I do is create the new entity classes from the database, exactly as the tutorial says. I stopped in the tutorial at the "Adding Dialog Boxes" heading, because the tutorial writer says the application should be "partially functional", which makes me think it should at least RUN. I haven't edited the generated code outside of what I've been instructed to do. This is the output produced by the NetBeans output console:

        run:
        Jun 23, 2009 3:03:23 PM org.jdesktop.application.Application$1 run
        SEVERE: Application class test.TestApp failed to launch
        javax.persistence.PersistenceException: No Persistence provider for EntityManager named MyBusinessRecordsPU: Provider named oracle.toplink.essentials.PersistenceProvider threw unexpected exception at create EntityManagerFactory:
        oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException
        Local Exception Stack:
        Exception [TOPLINK-30005] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException
        Exception Description: An exception was thrown while searching for persistence archives with ClassLoader: sun.misc.Launcher$AppClassLoader@11b86e7
        Internal Exception: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException
        Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed.
        Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer.
            at oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException.exceptionSearchingForPersistenceResources(PersistenceUnitLoadingException.java:143)
            at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:169)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:110)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
            at test.TestView.initComponents(TestView.java:337)
            at test.TestView.(TestView.java:39)
            at test.TestApp.startup(TestApp.java:19)
            at org.jdesktop.application.Application$1.run(Application.java:171)
            at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
        Caused by: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException
        Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed.
        Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer.
            at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:643)
            at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.callPredeploy(JavaSECMPInitializer.java:171)
            at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initPersistenceUnits(JavaSECMPInitializer.java:239)
            at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initialize(JavaSECMPInitializer.java:255)
            at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:155)
            ... 14 more
        Caused by: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException
        Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed.
        Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer.
            at oracle.toplink.essentials.exceptions.EntityManagerSetupException.predeployFailed(EntityManagerSetupException.java:228)
            ... 19 more
        Caused by: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer.
            at oracle.toplink.essentials.exceptions.ValidationException.invalidMapping(ValidationException.java:1069)
            at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataValidator.throwInvalidMappingEncountered(MetadataValidator.java:275)
            at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.OneToManyAccessor.process(OneToManyAccessor.java:161)
            at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.RelationshipAccessor.processRelationship(RelationshipAccessor.java:290)
            at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.processRelationshipDescriptors(MetadataProject.java:579)
            at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.process(MetadataProject.java:512)
            at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProcessor.processAnnotations(MetadataProcessor.java:246)
            at oracle.toplink.essentials.ejb.cmp3.persistence.PersistenceUnitProcessor.processORMetadata(PersistenceUnitProcessor.java:370)
            at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:607)
            ... 18 more
        The following providers:
        oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider
        Returned null to createEntityManagerFactory.
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:154)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
            at test.TestView.initComponents(TestView.java:337)
            at test.TestView.(TestView.java:39)
            at test.TestApp.startup(TestApp.java:19)
            at org.jdesktop.application.Application$1.run(Application.java:171)
            at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
        Exception in thread "AWT-EventQueue-0" java.lang.Error: Application class test.TestApp failed to launch
            at org.jdesktop.application.Application$1.run(Application.java:177)
            at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
        Caused by: javax.persistence.PersistenceException: No Persistence provider for EntityManager named MyBusinessRecordsPU: Provider named oracle.toplink.essentials.PersistenceProvider threw unexpected exception at create EntityManagerFactory:
        oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException
        Local Exception Stack:
        Exception [TOPLINK-30005] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException
        Exception Description: An exception was thrown while searching for persistence archives with ClassLoader: sun.misc.Launcher$AppClassLoader@11b86e7
        Internal Exception: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException
        Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed.
        Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer.
            at oracle.toplink.essentials.exceptions.PersistenceUnitLoadingException.exceptionSearchingForPersistenceResources(PersistenceUnitLoadingException.java:143)
            at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:169)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:110)
            at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83)
            at test.TestView.initComponents(TestView.java:337)
            at test.TestView.(TestView.java:39)
            at test.TestApp.startup(TestApp.java:19)
            at org.jdesktop.application.Application$1.run(Application.java:171)
            at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:597)
            at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
            at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
        Caused by: javax.persistence.PersistenceException: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException
        Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed.
        Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException
        Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer.
at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:643) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.callPredeploy(JavaSECMPInitializer.java:171) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initPersistenceUnits(JavaSECMPInitializer.java:239) at oracle.toplink.essentials.internal.ejb.cmp3.JavaSECMPInitializer.initialize(JavaSECMPInitializer.java:255) at oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider.createEntityManagerFactory(EntityManagerFactoryProvider.java:155) ... 14 more Caused by: Exception [TOPLINK-28018] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.EntityManagerSetupException Exception Description: predeploy for PersistenceUnit [MyBusinessRecordsPU] failed. Internal Exception: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.EntityManagerSetupException.predeployFailed(EntityManagerSetupException.java:228) ... 19 more Caused by: Exception [TOPLINK-7244] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ValidationException Exception Description: An incompatible mapping has been encountered between [class test.Products] and [class test.Orders]. This usually occurs when the cardinality of a mapping does not correspond with the cardinality of its backpointer. at oracle.toplink.essentials.exceptions.ValidationException.invalidMapping(ValidationException.java:1069) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataValidator.throwInvalidMappingEncountered(MetadataValidator.java:275) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.OneToManyAccessor.process(OneToManyAccessor.java:161) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.accessors.RelationshipAccessor.processRelationship(RelationshipAccessor.java:290) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.processRelationshipDescriptors(MetadataProject.java:579) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProject.process(MetadataProject.java:512) at oracle.toplink.essentials.internal.ejb.cmp3.metadata.MetadataProcessor.processAnnotations(MetadataProcessor.java:246) at oracle.toplink.essentials.ejb.cmp3.persistence.PersistenceUnitProcessor.processORMetadata(PersistenceUnitProcessor.java:370) at oracle.toplink.essentials.internal.ejb.cmp3.EntityManagerSetupImpl.predeploy(EntityManagerSetupImpl.java:607) ... 18 more The following providers: oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider Returned null to createEntityManagerFactory. at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:154) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:83) at test.TestView.initComponents(TestView.java:337) at test.TestView.(TestView.java:39) at test.TestApp.startup(TestApp.java:19) at org.jdesktop.application.Application$1.run(Application.java:171) ... 
8 more BUILD SUCCESSFUL (total time: 2 seconds) I'm thinking my netbeans application must be misconfigured or something, I can't seem to find anyone with the same problem as myself

    Read the article

  • Change the Default Application Pool in IIS7 using .net?

    - by EdenMachine
    I'm using the following function to create an IIS7 application and/or virtual directory. How would I also set the application to use a different application pool?

        Private Sub CreateVirtualDir(ByVal WebSite As String, ByVal AppName As String, ByVal Path As String, Optional ByVal IsApplication As Boolean = True, Optional ByVal RunScripts As Boolean = True, Optional ByVal IsWrite As Boolean = True)
            Dim IISSchema As New System.DirectoryServices.DirectoryEntry("IIS://" & WebSite & "/Schema/AppIsolated")
            Dim CanCreate As Boolean = Not IISSchema.Properties("Syntax").Value.ToString.ToUpper() = "BOOLEAN"
            IISSchema.Dispose()

            If CanCreate Then
                Dim PathCreated As Boolean
                Try
                    Dim IISAdmin As New System.DirectoryServices.DirectoryEntry("IIS://" & WebSite & "/W3SVC/1/Root")

                    'Make sure the folder exists
                    If Not System.IO.Directory.Exists(Path) Then
                        System.IO.Directory.CreateDirectory(Path)
                        PathCreated = True
                    End If

                    'If the virtual directory already exists then delete it
                    For Each VD As System.DirectoryServices.DirectoryEntry In IISAdmin.Children
                        If VD.Name = AppName Then
                            IISAdmin.Invoke("Delete", New String() {VD.SchemaClassName, AppName})
                            IISAdmin.CommitChanges()
                            Exit For
                        End If
                    Next VD

                    'Create and set up the new virtual directory
                    Dim VDir As System.DirectoryServices.DirectoryEntry = IISAdmin.Children.Add(AppName, "IIsWebVirtualDir")
                    VDir.Properties("Path").Item(0) = Path
                    If IsApplication Then
                        VDir.Properties("AppFriendlyName").Item(0) = AppName
                    End If
                    VDir.Properties("EnableDirBrowsing").Item(0) = False
                    VDir.Properties("AccessRead").Item(0) = True
                    VDir.Properties("AccessExecute").Item(0) = False
                    VDir.Properties("AccessWrite").Item(0) = IsWrite
                    VDir.Properties("AccessScript").Item(0) = RunScripts
                    VDir.Properties("AuthNTLM").Item(0) = True
                    VDir.Properties("EnableDefaultDoc").Item(0) = True
                    VDir.Properties("DefaultDoc").Item(0) = "default.htm,default.aspx,default.asp"
                    VDir.Properties("AspEnableParentPaths").Item(0) = True
                    'VDir.Properties("AppCreate").Item(0) = False
                    VDir.CommitChanges()

                    'The following are acceptable params:
                    'INPROC = 0
                    'OUTPROC = 1
                    'POOLED = 2
                    If IsApplication Then
                        VDir.Invoke("AppCreate", 1)
                    Else
                        VDir.Invoke("AppCreate", False)
                    End If
                Catch Ex As Exception
                    If PathCreated Then
                        System.IO.Directory.Delete(Path)
                    End If
                    'MsgBox(Ex.Message)
                End Try
            End If
        End Sub
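    One possible approach with the same System.DirectoryServices technique, sketched in C# for brevity: after the AppCreate call, point the new application at a pool by writing the metabase AppPoolId property. This is a hedged sketch, assuming the IIS 6 Management Compatibility feature (which backs the IIS:// ADSI provider) is installed on IIS7; "MyAppPool" is a placeholder pool name, not something from the question.

        using System.DirectoryServices;

        // Sketch: assign an existing application pool to a just-created application.
        // Assumes IIS 6 Management Compatibility on IIS7; pool name is a placeholder.
        static void AssignAppPool(string webSite, string appName, string poolName)
        {
            using (var vdir = new DirectoryEntry("IIS://" + webSite + "/W3SVC/1/Root/" + appName))
            {
                vdir.Invoke("AppCreate2", 2);                  // 2 = POOLED, per the constants above
                vdir.Properties["AppPoolId"].Value = poolName; // point the app at the pool
                vdir.CommitChanges();
            }
        }

    Inside the VB.NET function above, the equivalent would be a single line such as VDir.Properties("AppPoolId").Item(0) = "MyAppPool" just before VDir.CommitChanges().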

    Read the article

  • Advice needed on best and most efficient practices with developing google apps application...

    - by Ali
    Hi guys, I'm getting my feet wet developing my order management application for integration with Google Apps. However, there are certain aspects I need to take into consideration before proceeding any further. My application uploads documents to Google Documents and stores contacts in Google Contacts. A single order can have a number of uploaded documents associated with it, as well as some contacts. My question is: what would be the most efficient way to implement this? I could keep key tables for both contacts and documents that contain just an ID and a link to the document/contact (or its identification ID on Google). Or I could maintain an exact replica of the information in my own database as well as a link to the contact on Google, but wouldn't that be too redundant? I don't want my application to be really slow: I'm afraid that every time I make a call to Google Docs to retrieve a list of documents, or to Google Contacts, it would drag my application down. Or am I getting worried for no reason? Any advice would be most appreciated.

    Read the article

  • How should I implement reverse AJAX in a Django application?

    - by Carson Myers
    How should I implement reverse AJAX when building a chat application in Django? I've looked at Django-Orbited, and from my understanding, this puts a comet server in front of the HTTP server. This seems fine if I'm just running the Django development server, but how does this work when I start running the application from mod_wsgi? How does having the orbited server handling every request scale? Is this the correct approach? I've looked at another approach (long polling) that seems like it would work, although I'm not sure what all would be involved. Would the client request a page that would live in its own thread, so as not to block the rest of the application? Would it even block? Wouldn't the script requested by the client have to continuously poll for information? Which of the approaches is more proper? Which is more portable, scalable, sane, etc? Are there other good approaches to this (aside from the client polling for messages) that I have overlooked?

    Read the article

  • SmtpClient.SendAsync - How to ensure my application doesn't finish before callback?

    - by James
    Hi, I need to send emails asynchronously through a console application. I need to do some DB updates in the callback, but my application exits before the callback code gets run! How can I stop this from happening in a nice manner, rather than simply guessing how long to wait before exiting? I would imagine the async calls get placed in some form of thread; is it possible to check if any are still waiting to be called? Sample code:

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // Get the unique identifier for this asynchronous operation.
            String token = (string) e.UserState;

            if (e.Cancelled)
            {
                Console.WriteLine("[{0}] Send canceled.", token);
            }
            if (e.Error != null)
            {
                Console.WriteLine("[{0}] {1}", token, e.Error.ToString());
            }
            else
            {
                // update DB
                Console.WriteLine("Message sent.");
            }
        }

        public static void Main(string[] args)
        {
            var users = Repository.GetUsers();
            SmtpClient client = new SmtpClient("Host");
            client.SendCompleted += new SendCompletedEventHandler(SendCompletedCallback);
            MailAddress from = new MailAddress("[email protected]", "System", Encoding.UTF8);

            foreach (var user in users)
            {
                MailAddress to = new MailAddress(user.Email);
                MailMessage message = new MailMessage(from, to);
                message.Body = "This is a test";
                message.BodyEncoding = System.Text.Encoding.UTF8;
                message.Subject = "test message 1" + someArrows;
                message.SubjectEncoding = System.Text.Encoding.UTF8;
                string userState = String.Format("Message for user id {0}", user.ID);
                client.SendAsync(message, userState);
                message.Dispose();
            }

            // need to wait here until I have received a callback for each message
            // otherwise the application will exit
        }
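    One common pattern for exactly this situation (a minimal sketch; the names "pending" and "allDone" are illustrative additions, not part of the question's code): count the outstanding sends and let the last callback signal an event that Main blocks on.

        using System.Threading;

        // Sketch: count in-flight messages; the final callback releases Main.
        static int pending;
        static readonly ManualResetEvent allDone = new ManualResetEvent(false);

        private static void SendCompletedCallback(object sender, AsyncCompletedEventArgs e)
        {
            // ...existing logging / DB-update code from above...
            if (Interlocked.Decrement(ref pending) == 0)
                allDone.Set();   // last callback has run: unblock Main
        }

        // In Main: set pending to the number of messages before the send loop,
        // then after the loop call allDone.WaitOne() to block until every
        // SendCompleted callback has fired.

    Note also that SmtpClient allows only one SendAsync call in flight per instance (a second call before the callback completes throws InvalidOperationException), so you would either send the next message from inside the callback or use one SmtpClient per message.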

    Read the article

  • How to create a high quality icon for my Windows application?

    - by Patrick Klug
    If you are running Windows with a higher DPI setting, you will notice that most application icons on the desktop look terrible. Even high-profile application icons such as Google Chrome's look terrible, while, for example, the Firefox, Skype and MS Office icons look sharp (example). I suspect that most icons look blurry because a lower-resolution icon is scaled up rather than a higher-resolution icon being used. I want to give my application a high-quality icon and can't seem to convince Windows to use the higher-resolution icon. I have created a multi-resolution icon with the free icon editor IcoFX. The icon is provided in 16x16, 24x24, 32x32, 48x48, 128x128 and 256x256 (!) sizes (all in 32 bit, including an alpha channel), yet Windows seems to use the 128x128 version of the icon on the desktop and scale it up, which looks terrible. (I am using Windows 7 64-bit; the icon is placed by setting up a shortcut in the MSI, created via a Visual Studio 2008 Setup Project, and pointing it to the .ico file that contains the multi-resolution icon.) I have tried removing the 128x128 icon, but to no avail. Interestingly, in Windows Explorer the icon looks great, even with the Extra Large Icons setting. How can I create a high-quality desktop icon that looks great at higher DPI settings on Windows?

    Read the article

  • How do I create Twitter-style URLs for my app - Using existing application or app redesign - Ruby on Rails

    - by bgadoci
    I have developed a blog application of sorts that I am trying to let other users take advantage of (for free, and mostly for family). I'm wondering if the authentication I have set up will allow for such a thing. Here is the scenario: currently the application allows users to sign up for an account, and when they do so they can create blog posts and organize those posts via tags. The application displays no data publicly (in other words, you have to log in to see anything). To gain access you have to create an account, and even after you do, you cannot see anyone else's information, as the application filters using the current_user method and displays in the /posts/index.html.erb page. This would be great if a user only wanted to blog and share it with themselves; that's not really what I am looking for. My question has two parts (hopefully I won't make anyone mad by not putting these into two questions): 1) Is it possible for a particular user's data to live at www.myapplication.com/user without moving everything to the /user/show.html.erb file? 2) Is it possible to make some of that information (living at the URL) public but still require login for create and destroy actions, essentially exactly like Twitter? I am just curious whether I can get from where I am (using the current_user methods across controllers to display in /posts/index.html.erb) to where I want to be. My fear is that I have to redesign the app such that the user data lives in the /user/show.html.erb page. Thoughts? UPDATE: I am using Clearance for authentication, by Thoughtbot. I wonder if there is something I can set in the vendored gem path to represent the /posts/index.html.erb code as the /user/id code and replace id with the user name.

    Read the article

  • How to detect if an application has UI elements in it from C# in Windows 7?

    - by Santhosh
    I have a C# application on Windows 7 that runs in Session 0. This application is basically a framework for software patch installation that installs patches in the background (in Session 0). It downloads patches from the server and starts installing them on the client machines. The way it installs a patch is by calling CreateProcess("Patch.exe"). Mostly, Patch.exe will be a non-UI silent installation, so installing the patch from Session 0 goes through successfully. However, sometimes Patch.exe happens to have some UI elements in it, such as prompting the user for details (installation location, etc.), and let us say these UI elements cannot be avoided. So is it possible for my installation framework (running in Session 0, written in C#) to know that the Patch.exe process it created has any UI elements in it? The reason I ask is that if I determine the application has UI elements, I do not want to continue with the installation (a crude way of doing this would be to kill the installer process Patch.exe, but that is a different story and not of concern here).
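    One heuristic worth sketching (an assumption-laden sketch, not a guaranteed detector): since the framework launches Patch.exe itself, it can poll the child's Process.MainWindowHandle, which stays IntPtr.Zero for processes that never create a top-level window. A process can of course create a window later, so the timeout below is an illustrative guess.

        using System;
        using System.Diagnostics;
        using System.Threading;

        // Heuristic sketch: treat the patch as "has UI" if it creates a top-level
        // window within the timeout. The timeout value is an assumption.
        static bool AppearsToHaveUi(Process patch, TimeSpan timeout)
        {
            DateTime deadline = DateTime.UtcNow + timeout;
            while (!patch.HasExited && DateTime.UtcNow < deadline)
            {
                patch.Refresh();                      // MainWindowHandle is cached per read
                if (patch.MainWindowHandle != IntPtr.Zero)
                    return true;                      // a top-level window exists
                Thread.Sleep(250);
            }
            return false;                             // exited or stayed windowless in time
        }

    A silent installer that finishes without ever creating a window returns false; one that blocks on a dialog is caught by the window check.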

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit'; it is a pass-through stored procedure to another database on the same machine, MyOtherDB, which has a stored procedure also called 'CreateAudit'. In other words, in MyDb I have CreateAudit, which does the following: EXEC dbo.MyOtherDB.CreateAudit. I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode): Result = CreateAudit(recordId, "Opened"). One line after that, I call: Result2 = CreateAudit(recordId, "Closed"). The second call is supposed to mark the record created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times the record is successfully marked closed; the other 19 times nothing happens, but I do not get an error! Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record in the first is created? When I add a "WAITFOR DELAY '00:00:00:003'" to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning; any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.
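    One way to rule the ordering question out entirely is a minimal sketch that bypasses the DAL for just this pair of calls: run both CreateAudit calls on a single connection inside a single transaction, so the second call cannot start before the first has finished and both commit together. The connection string and parameter names below are assumptions for illustration.

        using System.Data;
        using System.Data.SqlClient;

        // Sketch: both audit calls share one connection and one transaction,
        // guaranteeing "Opened" is written before "Closed" and committed with it.
        static void CreateAuditPair(string connectionString, int recordId)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    ExecCreateAudit(conn, tx, recordId, "Opened");
                    ExecCreateAudit(conn, tx, recordId, "Closed");
                    tx.Commit();   // both rows become visible together
                }
            }
        }

        static void ExecCreateAudit(SqlConnection conn, SqlTransaction tx, int recordId, string status)
        {
            using (var cmd = new SqlCommand("dbo.CreateAudit", conn, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@RecordId", recordId);   // assumed parameter names
                cmd.Parameters.AddWithValue("@Status", status);
                cmd.ExecuteNonQuery();
            }
        }

    If the symptom disappears with this in place, that points at the two DAL calls running on different pooled connections or transaction contexts, rather than at .NET "skipping" the first procedure's output.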

    Read the article

  • How to ship numpy with web2py application under myapp/modules?

    - by Newbie07
    I am getting the following error while importing numpy from applications/myapp/modules:

        Traceback (most recent call last):
          File "/home/mdipierro/make_web2py/web2py/gluon/restricted.py", line 212, in restricted
          File "D:/web2py_win/web2py/applications/myapp/controllers/default.py", line 13, in
          File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 100, in custom_importer
          File "applications\myapp\modules\numpy\__init__.py", line 137, in
          File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 81, in custom_importer
        ImportError: Cannot import module 'add_newdocs'

    I tried adding 'application.myapp.modules.' to the 'import add_newdocs' statement in numpy/__init__.py, and the error propagates to the subsequent imports (i.e. add_newdocs imports some other stuff, and I get the ImportError again for those imports). So I narrowed the problem down to the "working directory" of the import statement. However, I do not wish to add 'application.myapp.modules.' to every import statement inside the package, since that would be impractical and hard to edit if someone decides to rename the app later on. How do I make the import work smoothly? NOTE: It is necessary for me to put the numpy package in the app to ensure ease of deployment.

    Read the article

  • How to store a file on a server(web container) through a Java EE web application?

    - by Yatendra Goel
    I have developed a Java EE web application. This application allows a user to upload a file with the help of a browser. Once the user has uploaded his file, the application first stores the uploaded file on the server (on which it is running) and then processes it. At present, I am storing the file on the server as follows:

        try {
            FormFile formFile = programForm.getTheFile(); // formFile represents the uploaded file
            String path = getServlet().getServletContext().getRealPath("") + "/" + formFile.getFileName();
            System.out.println(path);
            file = new File(path);
            outputStream = new FileOutputStream(file);
            outputStream.write(formFile.getFileData());
        }

    Now, the problem is that it runs fine on some servers, but on other servers getServlet().getServletContext().getRealPath("") returns null, so the final path I get is null/filename and the file is not stored on the server. When I checked the API for the ServletContext.getRealPath() method, I found the following:

        public java.lang.String getRealPath(java.lang.String path)
        Returns a String containing the real path for a given virtual path. For example, the path "/index.html" returns the absolute file path on the server's filesystem that would be served by a request for "http://host/contextPath/index.html", where contextPath is the context path of this ServletContext. The real path returned will be in a form appropriate to the computer and operating system on which the servlet container is running, including the proper path separators. This method returns null if the servlet container cannot translate the virtual path to a real path for any reason (such as when the content is being made available from a .war archive).

    So, is there any other way by which I can store files on those servers that return null for getServlet().getServletContext().getRealPath("")?

    Read the article

  • WCF error: "The client and service bindings may be mismatched"?

    - by Rev
    Hi. Below are the server config and the client config; please help me find the difference between them!

    Client config:

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost/admin2/AdminCentralService.svc"
                      binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
                      contract="TIR.ThreeTier.ICommandInvoker" name="AdminCentralServiceConfig" />
            <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
                      contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" />
          </client>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
        </system.serviceModel>

    Server config:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AdminCentral.Business.Web.Service1Behavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <services>
            <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior"
                     name="AdminCentral.Business.Web.AdminCentralService">
              <endpoint address="" binding="wsHttpBinding"
                        contract="AdminCentral.Business.Web.ICommandInvoker">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>

    Read the article

  • How to ensure DB security for a Windows Forms application?

    - by Vilx-
    The basic setup is classic: you're creating a Windows Forms application that connects to a DB and does all kinds of enterprise-y stuff. Naturally, such an application will have many users with different access rights in the DB, each with their own login name and password. So how do you implement this? One way is to create a DB login for every application user, but that's a pretty serious thing to do, which even requires admin rights on the DB server, etc. If the DB server hosts several applications, the admins are quite likely not to be happy with this. In the web world one typically creates his own "Users" table that contains all the necessary info, and uses one fixed DB login for all interaction. That is all nice for a web app, but a Windows Forms app can't hide this master login information, negating security altogether. (It can try to hide it, but all such attempts are easily broken with a bit of effort.) So... is there some middle way? Perhaps logging in with a fixed login, and then elevating privileges from a special stored procedure that checks the username and password?
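    One middle way on SQL Server is an application role: connect with a minimal shared login, verify the user against your own Users table, then elevate the connection with sp_setapprole. A minimal sketch follows; the role name is a placeholder, and note the role password still has to live somewhere the app can read, so this raises the bar rather than removing the problem.

        using System.Data;
        using System.Data.SqlClient;

        // Sketch: elevate an open connection to a SQL Server application role
        // after the app has authenticated the user against its own Users table.
        static void ActivateAppRole(SqlConnection conn, string roleName, string rolePassword)
        {
            using (var cmd = new SqlCommand("sp_setapprole", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@rolename", roleName);
                cmd.Parameters.AddWithValue("@password", rolePassword);
                cmd.ExecuteNonQuery();   // connection now carries only the role's permissions
            }
        }

    One caveat: the elevation sticks to the physical connection, so with connection pooling you need to revert it (sp_unsetapprole with the cookie returned by sp_setapprole) before the connection goes back to the pool.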

    Read the article

  • Mac OS X Status Bar Application - Hiding it from Cmd/Alt menu?

    - by Moddy
    I'm trying to whip up a simple little status bar application in Obj-C/Cocoa. I have done it programmatically: declaring an NSStatusItem, adding it to the NSStatusBar, and then giving it an NSMenu object. A bit like this...

        NSStatusBar *bar = [NSStatusBar systemStatusBar];
        theItem = [bar statusItemWithLength:NSVariableStatusItemLength];
        [theItem retain];
        [theItem setTitle: NSLocalizedString(@"Tablet",@"")];
        [theItem setHighlightMode:YES];
        [theItem setMenu:theMenu];

    (Example taken from "Status Bar Programming Topics", Apple documentation.) Now, ideally I'd like this application to run without being accessible from the Cmd/Alt application-switching menu (Cmd-Tab, for lack of a better word); I've seen applications do it before and would really like that. The idea is that it should be accessible from every window, while not having its own NSMenu on the status bar, and while never becoming the active application (so it's not able to take over the whole status bar, and not visible through Cmd-Tab). Additionally, I was wondering whether the NSStatusItem supports drag-and-drop of an item onto it? I'm not sure if that's a limitation of the NSStatusBar, though. I've read up on daemons and agents, but that seems far too low-level/overkill for such a simplistic app! Cheers in advance!

    Read the article

  • How to prevent my C# WinForms application from stealing focus when I set the Visible property to true?

    - by Fopedush
    I've got a C# WinForms application that runs in the background, listening for hotkeys to be pressed. When a hotkey is pressed, my form makes a brief appearance. The form is always running, but hidden until I receive a hotkey event, at which time I set the Visible property to true. The code looks like this:

        void hook_volumeDown(object sender, KeyPressedEventArgs e)
        {
            this.Visible = true;
        }

    It should be noted that the TopMost property of this form is set to true. The really odd part is that after my C# app has stolen focus from another application, it will never do it again. For example: I launch my app, then launch some fullscreen app like Team Fortress 2. Then I press my hotkey. Team Fortress 2 minimizes, and I see my form. Then, however, I can restore TF2 and press my hotkey all I want (with the desired effect), and TF2 will remain focused. At any rate, I'm looking for a way to fix this. I've found a lot of questions here covering similar problems, but all of them relate to creating/launching a new form, not making an existing one visible (unless I missed something). I could rework the application to create a new form every time I need one, but that would entail creating yet another form to be invisible all the time just to wait for hotkey events, so I'd rather leave it as it is. Any ideas?
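    A sketch of the usual cure (assuming .NET 3.5 SP1 or later for ShowWithoutActivation; the form name is illustrative): tell Windows the form should be shown without being activated, so making it visible never yanks focus from the foreground app.

        using System.Windows.Forms;

        public partial class HotkeyOverlayForm : Form   // name is illustrative
        {
            // Shown/visible transitions won't activate the window or steal focus.
            protected override bool ShowWithoutActivation
            {
                get { return true; }
            }

            // Optional belt-and-braces: block activation via mouse clicks too.
            private const int WS_EX_NOACTIVATE = 0x08000000;
            protected override CreateParams CreateParams
            {
                get
                {
                    CreateParams cp = base.CreateParams;
                    cp.ExStyle |= WS_EX_NOACTIVATE;
                    return cp;
                }
            }
        }

    With this in place, a fullscreen app should keep focus whether or not the form has been shown before.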

    Read the article

  • Deactivate 'pin to start' on the Application List page when pinning an app via code using C#?

    - by Ahmed Ali
    I'm creating a Windows Phone app where I've put a button to pin the app to the Start screen, but when I press and hold the app icon on the application list screen I find that the "pin to start" option can still be used:

        ShellTile TileToFind = ShellTile.ActiveTiles.FirstOrDefault(
            x => x.NavigationUri.ToString().Contains("MainPage.xaml"));

        // Create the Tile if we didn't find that it already exists.
        if (TileToFind == null)
        {
            // Create the Tile object and set some initial properties for the Tile.
            // The Count value of 12 shows the number 12 on the front of the Tile. Valid values are 1-99.
            // A Count value of 0 indicates that the Count should not be displayed.
            StandardTileData NewTileData = new StandardTileData
            {
                BackgroundImage = new Uri("300.png", UriKind.Relative),
                Title = "apptitle",
                BackTitle = "title",
                BackContent = "testing ",
                BackBackgroundImage = null
            };

            // Create the Tile and pin it to Start. This will cause a navigation
            // to Start and a deactivation of our app.
            ShellTile.Create(new Uri("/MainPage.xaml", UriKind.Relative), NewTileData);
        }
        else
        {
            MessageBox.Show("Already Pinned");
        }

    How can I stop the user from pinning the application again from the application list screen?

    Read the article

  • How to make my application handle errors for a few different scenarios?

    - by NightsEVil
    So I have this code to extract a program to the temp directory and then run it. The problem is that it doesn't work perfectly on every computer (for some reason it sometimes hits an error or exception):

        string tempFolder = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "");
        System.Diagnostics.Process defrag1 = System.Diagnostics.Process.Start(
            @"Programs\Optimize\AusLogics_Defrag.exe", string.Format(" -o{0} -y", tempFolder));
        defrag1.WaitForExit();
        string executableDirectoryName = Path.GetDirectoryName(Application.ExecutablePath);
        System.Diagnostics.Process defrag2 = System.Diagnostics.Process.Start(
            tempFolder + "\\" + "AusLogics_Defrag" + "\\" + "DiskDefrag.exe", "");
        defrag2.WaitForExit();
        System.IO.Directory.Delete(tempFolder + "\\" + "AusLogics_Defrag", true);

    What I want to know is: if it starts to extract but hits an error (no matter what it is), is there a way for it to automatically switch to this code, using the application data folder instead, while continuing as normal if it DOESN'T hit an error?

        string tempFolder = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
        System.Diagnostics.Process defrag1 = System.Diagnostics.Process.Start(
            @"Programs\Optimize\AusLogics_Defrag.exe", string.Format(" -o{0} -y", tempFolder));
        defrag1.WaitForExit();
        string executableDirectoryName = Path.GetDirectoryName(Application.ExecutablePath);
        System.Diagnostics.Process defrag2 = System.Diagnostics.Process.Start(
            tempFolder + "\\" + "AusLogics_Defrag" + "\\" + "DiskDefrag.exe", "");
        defrag2.WaitForExit();
        System.IO.Directory.Delete(tempFolder + "\\" + "AusLogics_Defrag", true);

    And if THAT hits an error, it would change the path to this, but if it DOESN'T hit an error it continues as it was meant to:

        string tempFolder = System.Environment.GetEnvironmentVariable("HomeDrive");
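    Since the three variants differ only in how tempFolder is computed, one hedged way to structure the fallback is to factor the extract-and-run sequence into a helper and walk a list of candidate folders, falling through on any exception. ExtractAndRun below is a hypothetical helper wrapping the code above, not something from the question.

        // Illustrative sketch: try candidate temp folders in order until one works.
        static void RunDefragWithFallback()
        {
            string[] candidates =
            {
                System.IO.Path.GetTempPath(),
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                System.Environment.GetEnvironmentVariable("HomeDrive")
            };

            foreach (string tempFolder in candidates)
            {
                try
                {
                    ExtractAndRun(tempFolder);   // throws on any extraction/launch failure
                    return;                      // success: stop trying further locations
                }
                catch (Exception)
                {
                    // this location failed; fall through and try the next one
                }
            }
            // every candidate failed: report the error to the user here
        }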

    Read the article

  • How to configure Server Topology for exposing an internal application for external access?

    - by ronaldwidha
    Hi all. I guess this question borders on a Server Fault question. I'd like to know the best configuration for exposing an internal application (in this case a load-balanced ASP.NET MVC application) for external access. More details about the situation: the ASP.NET MVC application currently runs on 2 servers; the 2 servers are behind a Windows Network Load Balancer; all the servers are on the on-premise/internal network. I'm thinking of introducing an F5 load balancer in an off-premise DMZ to replace the Windows Network Load Balancer. The F5 will act as the public traffic gateway and load balancer for the 2 servers. However, I'd like internal users not to have to go through the Internet to access the app. The idea I have so far is to keep both the Windows Network Load Balancer and the F5. Each appliance will have its own IP and its own domain name. External users can use the public domain name, which will hit the F5, whereas internal users can use the internal domain name, which will hit the Windows Network Load Balancer. Is this a good idea? Or is there a better way of doing this?

    Read the article
