Search Results

Search found 18046 results on 722 pages for 'custom component'.


  • C# Windows Service XML

    - by Goober
    Scenario: I have a Windows service written in C# that performs some processing based on parsing an XML file, and it uses that data to carry out various tasks. The service also does various bits of logging, which uses settings from an App.config file.

    The problem: when the service is compiled, installed and run, the XML file seems to disappear - I get the impression it is simply ignored, or something like that. So far I've tried using TWO App.config files: one named App.config that contains the settings for the service, and another called MyService.exe.config that contains all of the data that was in the XML file (the idea being that I can parse the XML from a config file that actually gets compiled and appears in my installation directory). However, when I do this, only ONE config file appears (named MyService.exe.config), and it contains the contents of the App.config file, not the XML data that I want to parse.

    What I need: a config file for my settings, and an XML file for my data.

    Question: is this possible? I know the application works, as it was originally built as a console application that ran fine.

    Other: the application has to be designed this way (that is, my data stored as XML, and my settings stored in a config file).

    Thoughts: if I could somehow combine the contents of the two files into ONE config file, that would be one way of solving the problem. However, I have tried this and I get a "TypeInitializationException", because the config file cannot interpret the XML data (probably because the tags are custom and do not form any part of the config schema - or something like that).

    Ideas: could someone please explain whether it is possible to have an XML file AND a config file that will actually be compiled and stored in my installation directory for the service when it is run?

    Custom XML/data config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <servers>
            <SV066930>
              <add name="Name" value="SV066930" />
              <processes>
                <SimonTest1>
                  <add name="ProcessName" value="notepad.exe" />
                  <add name="CommandLine" value="C:\\WINDOWS\\system32\\notepad.exe C:\\WINDOWS\\Profiles\\TA2TOF1\\Desktop\\SimonTest1.txt" />
                </SimonTest1>
              </processes>
            </SV066930>
          </servers>
        </configuration>

    App.config settings file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="dataConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Data.Configuration.DatabaseSettings, Microsoft.Practices.EnterpriseLibrary.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxx" />
          </configSections>
          <connectionStrings>
            <add name="DB" connectionString="Data Source=etc......" />
          </connectionStrings>
        </configuration>

    Help greatly appreciated.
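    The setup being asked about is a common one, so a minimal sketch may help illustrate it: keep the service settings in App.config (which the build renames to MyService.exe.config) and ship the data as a plain XML file that the project copies to the output directory, then load it by path at runtime. The file name ServerData.xml, the "DataFilePath" appSetting, and the ServerDataLoader class are assumptions for illustration, not from the original post.

        // App.config (settings only), assumed layout:
        //   <appSettings>
        //     <add key="DataFilePath" value="ServerData.xml" />
        //   </appSettings>
        // ServerData.xml: set "Build Action = Content" and "Copy to Output Directory = Copy always"
        // so the build copies it next to MyService.exe instead of leaving it behind.
        using System;
        using System.Configuration;
        using System.IO;
        using System.Xml;

        public static class ServerDataLoader
        {
            public static XmlDocument LoadData()
            {
                // Resolve the data file relative to the service executable, not the current
                // working directory (for a Windows service that is usually system32).
                string baseDir = AppDomain.CurrentDomain.BaseDirectory;
                string fileName = ConfigurationManager.AppSettings["DataFilePath"] ?? "ServerData.xml";
                string path = Path.Combine(baseDir, fileName);

                var doc = new XmlDocument();
                doc.Load(path);   // parse the custom <servers>/<processes> structure from here
                return doc;
            }
        }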


  • SharePoint Feature suggestion

    - by barathan
    I have written a site-scoped feature that adds custom menu items to the New menu and the EditControlBlock of a document library. These menu items should show up only when the user has add and edit permissions for that document library. When the user selects the menu item, the URL redirects to my web part, which is deployed in the site collection. I have two ways of doing this, shown below as case 1 and case 2, but in both cases I fail to fulfil the requirement. Below are the sample entries in the feature and element manifest files. I am passing the current location to the Source URL in order to get the folder URL.

        <?xml version="1.0" encoding="utf-8" ?>
        <Feature Id="59bba8e7-0cfc-46e3-9285-4597f8085e76"
                 Title="My Custom Menus"
                 Scope="Site"
                 xmlns="http://schemas.microsoft.com/sharepoint/">
          <ElementManifests>
            <ElementManifest Location="Elements.xml" />
          </ElementManifests>
        </Feature>

    Case 1:

        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <CustomAction Id="EditMenu1" RegistrationType="FileType" RegistrationId="txt"
                        Location="EditControlBlock" Sequence="106"
                        ImageUrl="/_layouts/images/PPT16.GIF" Title="My Edit Menu"
                        Rights="AddListItems,EditListItems">
            <UrlAction Url="javascript:var surl='{SiteUrl}'; window.location='/test/mypage.aspx?siteurl='+surl+'&amp;itemurl={ItemUrl}&amp;itemid={ItemId}&amp;listid={ListId}&amp;Source='+window.location" />
          </CustomAction>
          <CustomAction Id="NewMenu1" GroupId="NewMenu" RegistrationType="List" RegistrationId="101"
                        Location="Microsoft.SharePoint.StandardMenu" Sequence="1002"
                        ImageUrl="/_layouts/images/DOC32.GIF" Title="My New Menu"
                        Rights="AddListItems,EditListItems">
            <UrlAction Url="javascript:var surl='{SiteUrl}'; window.location='/test/mypage.aspx?siteurl='+surl+'&amp;listid={ListId}&amp;Source='+window.location" />
          </CustomAction>
        </Elements>

    If I use the above code, the redirect goes to the root site instead of the site collection. Is there any way to get a site collection variable here? To overcome this issue I used the following code instead.

    Case 2:

        <?xml version="1.0" encoding="utf-8" ?>
        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <CustomAction Id="EditMenu1" RegistrationType="FileType" RegistrationId="txt"
                        Location="EditControlBlock" Sequence="106"
                        ImageUrl="/_layouts/images/PPT16.GIF" Title="My Edit Menu"
                        Rights="AddListItems,EditListItems">
            <UrlAction Url="~sitecollection/test/mypage.aspx?siteurl={SiteUrl}&amp;itemurl={ItemUrl}&amp;itemid={ItemId}&amp;listid={ListId}&amp;Source=/" />
          </CustomAction>
          <CustomAction Id="NewMenu1" GroupId="NewMenu" RegistrationType="List" RegistrationId="101"
                        Location="Microsoft.SharePoint.StandardMenu" Sequence="1002"
                        ImageUrl="/_layouts/images/DOC32.GIF" Title="My New Menu"
                        Rights="AddListItems,EditListItems">
            <UrlAction Url="~sitecollection/test/mypage.aspx?siteurl={SiteUrl}&amp;listid={ListId}&amp;Source=/" />
          </CustomAction>
        </Elements>

    In this case the redirect correctly goes to the site collection, but it fails to get the folder URL while creating a new document, because the current location cannot be passed through. Could you please suggest either how to get the site collection URL in case 1, or how to pass the current location to the Source URL in case 2?


  • Templates and inheritance

    - by mariusz
    Hello, I have a big problem. I use additional controls for Wpf. One of them is Telerik RadWindow This control is already templated. Now I want to create custom Window with will inherit from RadWindow, and make custom template, eg. One base window will contains grid and two buttons, second base window will contain two grids (master - detail). The problem is that templates do not support inheritance. Perhaps is another way to template only the content of Winodow? My code, that doesn't work (empty window appears, so template doesn't apply) <Style TargetType="{x:Type local:TBaseRjWindow}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type local:TBaseRjContent}"> <Border Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}"> <Grid Name="mGrid"> <Grid.ColumnDefinitions> <ColumnDefinition /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition MaxHeight="40" MinHeight="30" /> <RowDefinition MaxHeight="40" MinHeight="30" /> <RowDefinition Height="Auto" /> <RowDefinition MaxHeight="40" MinHeight="30" /> </Grid.RowDefinitions> <telerik:RadGridView Margin="10,10,10,10" Name="grid" Grid.Row="0" Grid.Column="0" VerticalAlignment="Stretch" HorizontalAlignment="Stretch" ScrollMode="Deferred" AutoGenerateColumns="False" Width="Auto" > </telerik:RadGridView> <telerik:RadDataPager Grid.Row="1" Grid.Column="0" x:Name="radDataPager" PageSize="50" AutoEllipsisMode="None" DisplayMode="First, Previous, Next, Text" Margin="10,0,10,0"/> <StackPanel Grid.Row="1" Grid.Column="0" Margin="5 5 5 5" HorizontalAlignment="Left" Orientation="Horizontal" Height="20" Width="Auto" VerticalAlignment="Center" > <telerik:RadButton x:Name="btAdd" Margin="5 0 5 0" Content="Dodaj" /> <telerik:RadButton x:Name="btEdit" Margin="5 0 5 0" Content="Edytuj" /> <telerik:RadButton x:Name="btDelete" Margin="5 0 5 0" Content="Usun" /> </StackPanel> <StackPanel Name="addFields" Background="LightGray" Visibility="Collapsed" VerticalAlignment="Top" Grid.Row="2" Grid.Column="0" Width="Auto" Height="Auto" Orientation="Horizontal"> <GroupBox Header="Szczegoly" Margin="2 2 2 2" > <Grid VerticalAlignment="Top" DataContext="{Binding SelectedItem, ElementName=grid}" Name="_gAddFields" Margin="0 0 0 0" Width="Auto" Height="Auto" > </Grid> </GroupBox> </StackPanel> <StackPanel Grid.Row="3" Grid.Column="0" Margin="5 5 5 5" HorizontalAlignment="Right" Orientation="Horizontal" Height="25" Width="Auto" VerticalAlignment="Center" > <telerik:RadButton x:Name="btSave" IsDefault="True" Width="60" Margin="5 0 5 0" Content="Zapisz" /> <telerik:RadButton x:Name="btOK" IsDefault="True" Width="60" Margin="5 0 5 0" Content="Akceptuj" /> <telerik:RadButton x:Name="btCancel" IsCancel="True" Width="60" Margin="5 0 5 0" Content="Anuluj" /> </StackPanel> </Grid> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> Please help
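    One detail worth checking against the XAML above, offered as a hedged aside rather than a confirmed diagnosis: the Style targets TBaseRjWindow while its ControlTemplate targets TBaseRjContent (worth double-checking those types are compatible), and a default style for a derived control is only picked up automatically if the derived type overrides DefaultStyleKey in its static constructor. A minimal sketch of that standard WPF pattern follows; the Telerik base-class namespace is an assumption.

        // Sketch only - assumes TBaseRjWindow derives from RadWindow and that the
        // style lives in Themes/Generic.xaml of the control library.
        using System.Windows;

        public class TBaseRjWindow : Telerik.Windows.Controls.RadWindow
        {
            static TBaseRjWindow()
            {
                // Tell WPF to look up a default style keyed on TBaseRjWindow
                // instead of falling back to the base RadWindow style.
                DefaultStyleKeyProperty.OverrideMetadata(
                    typeof(TBaseRjWindow),
                    new FrameworkPropertyMetadata(typeof(TBaseRjWindow)));
            }
        }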


  • Trying to get dynamic content hole-punched through Magento's Full Page Cache

    - by rlflow
    I am using Magento Enterprise 1.10.1.1 and need to get some dynamic content on our product pages. I am inserting the current time in a block to quickly see if it is working, but can't seem to get through full page cache. I have tried a variety of implementations found here: http://tweetorials.tumblr.com/post/10160075026/ee-full-page-cache-hole-punching http://oggettoweb.com/blog/customizations-compatible-magento-full-page-cache/ http://magentophp.blogspot.com/2011/02/magento-enterprise-full-page-caching.html (http://www.exploremagento.com/magento/simple-custom-module.php - custom module) Any solutions, thoughts, comments, advice is welcome. here is my code: app/code/local/Fido/Example/etc/config.xml <?xml version="1.0"?> <config> <modules> <Fido_Example> <version>0.1.0</version> </Fido_Example> </modules> <global> <blocks> <fido_example> <class>Fido_Example_Block</class> </fido_example> </blocks> </global> </config> app/code/local/Fido/Example/etc/cache.xml <?xml version="1.0" encoding="UTF-8"?> <config> <placeholders> <fido_example> <block>fido_example/view</block> <name>example</name> <placeholder>CACHE_TEST</placeholder> <container>Fido_Example_Model_Container_Cachetest</container> <cache_lifetime>86400</cache_lifetime> </fido_example> </placeholders> </config> app/code/local/Fido/Example/Block/View.php <?php /** * Example View block * * @codepool Local * @category Fido * @package Fido_Example * @module Example */ class Fido_Example_Block_View extends Mage_Core_Block_Template { private $message; private $att; protected function createMessage($msg) { $this->message = $msg; } public function receiveMessage() { if($this->message != '') { return $this->message; } else { $this->createMessage('Hello World'); return $this->message; } } protected function _toHtml() { $html = parent::_toHtml(); if($this->att = $this->getMyCustom() && $this->getMyCustom() != '') { $html .= '<br />'.$this->att; } else { $now = date('m-d-Y h:i:s A'); $html .= $now; $html .= '<br />' ; } return $html; } } app/code/local/Fido/Example/Model/Container/Cachetest.php <?php class Fido_Example_Model_Container_Cachetest extends Enterprise_PageCache_Model_Container_Abstract { protected function _getCacheId() { return 'HOMEPAGE_PRODUCTS' . md5($this->_placeholder->getAttribute('cache_id') . $this->_getIdentifier()); } protected function _renderBlock() { $blockClass = $this->_placeholder->getAttribute('block'); $template = $this->_placeholder->getAttribute('template'); $block = new $blockClass; $block->setTemplate($template); return $block->toHtml(); } protected function _saveCache($data, $id, $tags = array(), $lifetime = null) { return false; } } app/design/frontend/enterprise/[mytheme]/template/example/view.phtml <?php /** * Fido view template * * @see Fido_Example_Block_View * */ ?> <div> <?php echo $this->receiveMessage(); ?> </span> </div> snippet from app/design/frontend/enterprise/[mytheme]/layout/catalog.xml <reference name="content"> <block type="catalog/product_view" name="product.info" template="catalog/product/view.phtml"> <block type="fido_example/view" name="product.info.example" as="example" template="example/view.phtml" />


  • Hover/Fadeto/Toggle Multiple Class Changing

    - by Slick Willis
    So my problem is rather simple and complex at the same time. I am trying to create links that fade in when you mouseover them and fade out when you mouseout of them. At the same time that you are going over them I would like a pic to slide from the left. This is the easy part, I have every thing working. The image fades and another image slides. I did this by using a hover, fadeTo, and toggle("slide"). I would like to do this in a table format with multiple images being able to be scrolled over and sliding images out. The problem is that I am calling my sliding image to a class and when I hover over the letters both images slide out. Does anybody have a solution for this? I posted the code that I used below:

        <html>
        <head>
        <script type='text/javascript' src='http://accidentalwords.squarespace.com/storage/jquery/jquery-1.4.2.min.js'></script>
        <script type='text/javascript' src='http://accidentalwords.squarespace.com/storage/jquery/jquery-custom-181/jquery-ui-1.8.1.custom.min.js'></script>
        <style type="text/css">
        .text-slide {
          display: none;
          margin: 0px;
          width: 167px;
          height: 50px;
        }
        </style>
        <script>
        $(document).ready(function(){
          $(".letterbox-fade").fadeTo(1,0.25);
          $(".letterbox-fade").hover(function () {
            $(this).stop().fadeTo(250,1);
            $(".text-slide").toggle("slide", {}, 1000);
          }, function() {
            $(this).stop().fadeTo(250,0.25);
            $(".text-slide").toggle("slide", {}, 1000);
          });
        });
        </script>
        </head>
        <body style="background-color: #181818">
        <table>
          <tr>
            <td><div class="letterbox-fade"><img src="http://accidentalwords.squarespace.com/storage/sidebar/icons/A-Letterbox-Selected.png" /></div></td>
            <td><div class="text-slide"><img src="http://accidentalwords.squarespace.com/storage/sidebar/icons/TEST.png" /></div></td>
          </tr>
          <tr>
            <td><div class="letterbox-fade"><img src="http://accidentalwords.squarespace.com/storage/sidebar/icons/B-Letterbox-Selected.png" /></div></td>
            <td><div class="text-slide"><img src="http://accidentalwords.squarespace.com/storage/sidebar/icons/TEST.png" /></div></td>
          </tr>
        </table>
        </body>
        </html>


  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list loads an XML document (as XDocument). The program then reads data out of it into a container class (storing it as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export the container class goes out of scope; however, the garbage collector isn't clearing up the container class, which, because it stores the data as IEnumerable, seems to lead to the XDocument staying in memory (not sure if this is the reason, but Task Manager shows the memory from the XDocument isn't being freed). As the program loops through multiple files it eventually throws an out of memory exception. To mitigate this I've ended up using System.GC.Collect() to force the garbage collector to run after the container goes out of scope. This is working, but my questions are:

    1. Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.)
    2. Is there a better way to make sure the XDocument memory is being disposed?
    3. Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed?

    Thanks.

    Edit - code samples.

    Container class:

        public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
        public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; }
        public IEnumerable<CustomClassThree> CustomClassThree { get; set; }
        ...
        public IEnumerable<CustomClassNine> CustomClassNine { get; set; }

    Custom class:

        public long VariableOne { get; set; }
        public int VariableTwo { get; set; }
        public DateTime VariableThree { get; set; }
        ...

    Anyway, that's the basic structure really. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g.:

        public static void ExportAll(IEnumerable<FileInfo> files)
        {
            foreach (FileInfo file in files)
            {
                ExportFile(file);
                //Temporary to clear memory
                System.GC.Collect();
            }
        }

        private static void ExportFile(FileInfo file)
        {
            ContainerClass containerClass = Reader.ReadXMLDocument(file);
            ExportContainerClass(containerClass);
            //Export simply dumps the data from the container class into a database
            //Container class (and any passed container classes) goes out of scope at end of export
        }

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);
            var containerClass = new ContainerClass();
            //ForEach customClass in containerClass
            //Read all data for customClass from XDocument
            return containerClass;
        }

    Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them:

        private static XDocument GetXDocument(FileInfo fileToRead)
        {
            XDocument document;
            using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (String.Compare(fileToRead.Extension, ".gz", true) == 0)
                {
                    using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress))
                    {
                        document = XDocument.Load(zipStream);
                    }
                }
                else
                {
                    document = XDocument.Load(fileStream);
                }
                return document;
            }
        }

    Hope this is enough information. Thanks.

    Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Does anyone have any idea why this might be?
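    To illustrate the deferred-execution point described above (an illustrative sketch, not code from the original question): a LINQ to XML query stored as an IEnumerable<T> is not evaluated until it is enumerated, and the query object keeps a reference to its source XDocument. Materializing the query with ToList() inside ReadXMLDocument copies the data into plain objects so the document can be collected without forcing the GC. The class and property names follow the question; the element names and query shape are assumptions.

        // Deferred: the IEnumerable holds a reference to 'document' until it is enumerated.
        containerClass.CustomClassOne =
            document.Descendants("ItemOne")          // element name is an assumption
                    .Select(e => new CustomClassOne
                    {
                        VariableOne = (long)e.Element("VariableOne"),
                        VariableTwo = (int)e.Element("VariableTwo")
                    });

        // Materialized: ToList() runs the query now, so nothing keeps a reference
        // to the XDocument after ReadXMLDocument returns.
        containerClass.CustomClassOne =
            document.Descendants("ItemOne")
                    .Select(e => new CustomClassOne
                    {
                        VariableOne = (long)e.Element("VariableOne"),
                        VariableTwo = (int)e.Element("VariableTwo")
                    })
                    .ToList();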


  • jQuery animation menu height

    - by StealthRT
    Hey all i have the following jsfiddle Fiddle that i need some help on. When i have my mouse over it-it expands out to a static width but, depending on the text length, it grabs it by the inner's text $('.inner').height(). Problem being is that it goes a little too far beyond the last text list item and when i roll over any of the text in the menu box it slides back up a little. How can prevent it from (1) sliding back up and (2) have the exact height needed without even having the extra space at the bottom of the box for its height? The JS: $(document).ready(function() { $('#menuSquare, .inner').mouseout(function() { theMenu('close'); }); $('#menuSquare, .inner').mouseover(function() { theMenu('open'); }); }); function theMenu(what2Do) { if (what2Do == 'open') { $('#menuSquare').stop().animate({ width: 190, //95 height: $('.inner').height(), duration:900, 'padding-top' : 10, 'padding-right' : 10, 'padding-bottom' : 10, 'padding-left' : 10, backgroundColor: '#fff', opacity: 0.8 }, 1000,'easeOutCubic') } else { $('#menuSquare').stop().animate({ width: "20", height: "20", padding: '0px', backgroundColor: '#e51937', opacity: 0.8 }, 500,'easeInCirc') } }? The HTML: <div id="menuSquare" class="TheMenuBox" style="overflow: hidden; width: 20px; height: 20px; background-color: rgb(229, 25, 55); opacity: 0.8; padding: 0px;"> <div class="inner"> <p style="text-decoration:none; color:#666; cursor: pointer; " onclick="changeImg('Custom Homes');">Custom Homes</p> <p style="text-decoration:none; color:#666; cursor: pointer; " onclick="changeImg('Full Service Hotels');">Full Service Hotels</p> <p style="text-decoration:none; color:#666; cursor: pointer; " onclick="changeImg('Mixed Use');">Mixed Use</p> <p style="text-decoration:none; color:#666; cursor: pointer; " onclick="changeImg('Office');">Office</p> <p style="text-decoration:none; color:#666; cursor: pointer; " onclick="changeImg('Retail');">Retail</p> <p style="text-decoration:none; color:#666; cursor: pointer; " onclick="changeImg('Select Service Hotels');">Select Service Hotels</p> </div> </div>


  • How can I keep my MVC Views, models, and model binders as clean as possible?

    - by MBonig
    I'm rather new to MVC and as I'm getting into the whole framework more and more I'm finding the modelbinders are becoming tough to maintain. Let me explain... I am writing a basic CRUD-over-database app. My domain models are going to be very rich. In an attempt to keep my controllers as thin as possible I've set it up so that on Create/Edit commands the parameter for the action is a richly populated instance of my domain model. To do this I've implemented a custom model binder. As a result, though, this custom model binder is very specific to the view and the model. I've decided to just override the DefaultModelBinder that ships with MVC 2. In the case where the field being bound to my model is just a textbox (or something as simple), I just delegate to the base method. However, when I'm working with a dropdown or something more complex (the UI dictates that date and time are separate data entry fields but for the model it is one Property), I have to perform some checks and some manual data munging. The end result of this is that I have some pretty tight ties between the View and Binder. I'm architecturally fine with this but from a code maintenance standpoint, it's a nightmare. For example, my model I'm binding here is of type Log (this is the object I will get as a parameter on my Action). The "ServiceStateTime" is a property on Log. The form values of "log.ServiceStartDate" and "log.ServiceStartTime" are totally arbitrary and come from two textboxes on the form (Html.TextBox("log.ServiceStartTime",...)) protected override object GetPropertyValue(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor, IModelBinder propertyBinder) { if (propertyDescriptor.Name == "ServiceStartTime") { string date = bindingContext.ValueProvider.GetValue("log.ServiceStartDate").ConvertTo(typeof (string)) as string; string time = bindingContext.ValueProvider.GetValue("log.ServiceStartTime").ConvertTo(typeof (string)) as string; DateTime dateTime = DateTime.Parse(date + " " + time); return dateTime; } if (propertyDescriptor.Name == "ServiceEndTime") { string date = bindingContext.ValueProvider.GetValue("log.ServiceEndDate").ConvertTo(typeof(string)) as string; string time = bindingContext.ValueProvider.GetValue("log.ServiceEndTime").ConvertTo(typeof(string)) as string; DateTime dateTime = DateTime.Parse(date + " " + time); return dateTime; } The Log.ServiceEndTime is a similar field. This doesn't feel very DRY to me. First, if I refactor the ServiceStartTime or ServiceEndTime into different field names, the text strings may get missed (although my refactoring tool of choice, R#, is pretty good at this sort of thing, it wouldn't cause a build-time failure and would only get caught by manual testing). Second, if I decided to arbitrarily change the descriptors "log.ServiceStartDate" and "log.ServiceStartTime", I would run into the same problem. To me, runtime silent errors are the worst kind of error out there. So, I see a couple of options to help here and would love to get some input from people who have come across some of these issues: Refactor any text strings in common between the view and model binders out into const strings attached to the ViewModel object I pass from controller to the aspx/ascx view. This pollutes the ViewModel object, though. Provide unit tests around all of the interactions. I'm a big proponent of unit tests and haven't started fleshing this option out but I've got a gut feeling that it won't save me from foot-shootings. 
If it matters, the Log and other entities in the system are persisted to the database using Fluent NHibernate. I really want to keep my controllers as thin as possible. So, any suggestions here are greatly welcomed! Thanks
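    A minimal sketch of the first option listed above (shared constants), with invented names for illustration: one variation is to keep the form-field keys in a standalone static class rather than on the view model, so both the view and the model binder reference the same identifiers and a rename becomes a compile-time concern instead of a silent runtime mismatch.

        // Hypothetical key class - names are illustrative, not from the original post.
        public static class LogFormFields
        {
            public const string ServiceStartDate = "log.ServiceStartDate";
            public const string ServiceStartTime = "log.ServiceStartTime";
            public const string ServiceEndDate   = "log.ServiceEndDate";
            public const string ServiceEndTime   = "log.ServiceEndTime";
        }

        // In the custom model binder (mirroring the GetPropertyValue code above):
        string date = bindingContext.ValueProvider.GetValue(LogFormFields.ServiceStartDate)
                                                  .ConvertTo(typeof(string)) as string;
        string time = bindingContext.ValueProvider.GetValue(LogFormFields.ServiceStartTime)
                                                  .ConvertTo(typeof(string)) as string;

        // In the view:
        // <%= Html.TextBox(LogFormFields.ServiceStartTime, Model.ServiceStartTime) %>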


  • How to debug KVO

    - by user8472
    In my program I use KVO manually to observe changes to values of object properties. I receive an EXC_BAD_ACCESS signal at the following line of code inside a custom setter: [self willChangeValueForKey:@"mykey"]; The weird thing is that this happens when a factory method calls the custom setter and there should not be any observers around. I do not know how to debug this situation. Update: The way to list all registered observers is observationInfo. It turned out that there was indeed an object listed that points to an invalid address. However, I have no idea at all how it got there. Update 2: Apparently, the same object and method callback can be registered several times for a given object - resulting in identical entries in the observed object's observationInfo. When removing the registration only one of these entries is removed. This behavior is a little counter-intuitive (and it certainly is a bug in my program to add multiple entries at all), but this does not explain how spurious observers can mysteriously show up in freshly allocated objects (unless there is some caching/reuse going on that I am unaware of). Modified question: How can I figure out WHERE and WHEN an object got registered as an observer? Update 3: Specific sample code. ContentObj is a class that has a dictionary as a property named mykey. It overrides: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"mykey"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } A couple of properties have getters and setters as follows: - (CGFloat)value { return [[[self mykey] objectForKey:@"value"] floatValue]; } - (void)setValue:(CGFloat)aValue { [self willChangeValueForKey:@"mykey"]; [[self mykey] setObject:[NSNumber numberWithFloat:aValue] forKey:@"value"]; [self didChangeValueForKey:@"mykey"]; } The container class has a property contents of class NSMutableArray which holds instances of class ContentObj. It has a couple of methods that manually handle registrations: + (BOOL)automaticallyNotifiesObserversForKey:(NSString *)theKey { BOOL automatic = NO; if ([theKey isEqualToString:@"contents"]) { automatic = NO; } else { automatic=[super automaticallyNotifiesObserversForKey:theKey]; } return automatic; } - (void)observeContent:(ContentObj *)cObj { [cObj addObserver:self forKeyPath:@"mykey" options:0 context:NULL]; } - (void)removeObserveContent:(ContentObj *)cObj { [cObj removeObserver:self forKeyPath:@"mykey"]; } - (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (([keyPath isEqualToString:@"mykey"]) && ([object isKindOfClass:[ContentObj class]])) { [self willChangeValueForKey:@"contents"]; [self didChangeValueForKey:@"contents"]; } } There are several methods in the container class that modify contents. They look as follows: - (void)addContent:(ContentObj *)cObj { [self willChangeValueForKey:@"contents"]; [self observeDatum:cObj]; [[self contents] addObject:cObj]; [self didChangeValueForKey:@"contents"]; } And a couple of others that provide similar functionality to the array. They all work by adding/removing themselves as observers. Obviously, anything that results in multiple registrations is a bug and could sit somewhere hidden in these methods. My question targets strategies on how to debug this kind of situation. 
Alternatively, please feel free to provide an alternative strategy for implementing this kind of notification/observer pattern.


  • change value upon select

    - by Link
    what i'm aiming is to show the other div when it selects one of the two options Full time and Part Time and if possible compute a different value for each When the user selects Part time the value of PrcA will change to PrcB this is the code i used <!====================================================================================> <script language="javascript"> <!--// function dm(amount) { string = "" + amount; dec = string.length - string.indexOf('.'); if (string.indexOf('.') == -1) return string + '.00'; if (dec == 1) return string + '00'; if (dec == 2) return string + '0'; if (dec > 3) return string.substring(0,string.length-dec+3); return string; } function calculate() { QtyA = 0; TotA = 0; PrcA = 1280; PrcB = 640; if (document.form1.qtyA.value > "") { QtyA = document.form1.qtyA.value }; document.form1.qtyA.value = eval(QtyA); TotA = QtyA * PrcA; document.form1.totalA.value = dm(eval(TotA)); Totamt = eval(TotA) ; document.form1.GrandTotal.value = dm(eval(Totamt)); } //--> </script> <!====================================================================================> <p> <label for="acct" style="margin-right:90px;"><strong>Account Type<strong><font color=red size=3> * </font></strong></label> <select name="acct" style="background-color:white;" class="validate[custom[serv]] select-input" id="acct" value=""> <option value="Full Time">Full-Time</option> <option value="Part Time">Part-Time</option> <option selected="selected" value=""></option> </select></p> <!====================================================================================> <script> $(document).ready(function() { $("input[name$='acct']").select(function() { var test = $(this).val(); $("div.desc").hide(); $("#acct" + test).show(); }); }); </script> <!====================================================================================> <p> <table><tr><td> <lable style="margin-right:91px;"># of Agent(s)<font color=red size=3> * </font></lable> </td><td> <input style="width:25px; margin-left:5px;" type="text" class="validate[custom[agnt]] text-input" name="qtyA" id="qtyA" onchange="calculate()" /> </td><td> <div id="acctFull Time" class="desc"> x 1280 = </div> <div id="acctPart Time" class="desc" style="display:none"> x 640 = </div> </td><td> $<input style="width:80px; margin-left:5px;" type="text" readonly="readonly" name="totalA" id="totalA" onchange="calculate()" /> </p> </td></tr></table> is there any way for me to achieve this?


  • SQL Authority News – Download Microsoft SQL Server 2014 Feature Pack and Microsoft SQL Server Developer’s Edition

    - by Pinal Dave
    Yesterday I attended the SQL Server Community Launch in Bangalore and presented on Performing an Effective Presentation. It was a fun presentation and it was very well received. No matter what subject I present on, I always end up talking about SQL. Here are two of the questions I received during the event.

    Q1) I want to install SQL Server on my development server; where can I get it for free or at an economical price (I do not have MSDN)?

    A1) If you are not going to use your server in a production environment, you can just get SQL Server Developer's Edition, and you can read more about it over here.

    Here is another favorite question which I keep receiving at events.

    Q2) I already have SQL Server installed on my machine; which feature packs should I install, and where can I get them?

    A2) Just download and install the Microsoft SQL Server 2014 Feature Pack. Here is the link for downloading it. The Microsoft SQL Server 2014 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server. It includes tools and components for Microsoft SQL Server 2014 and add-on providers for Microsoft SQL Server 2014. Here is the list of components this product contains:

    - Microsoft SQL Server Backup to Windows Azure Tool
    - Microsoft SQL Server Cloud Adapter
    - Microsoft Kerberos Configuration Manager for Microsoft SQL Server
    - Microsoft SQL Server 2014 Semantic Language Statistics
    - Microsoft SQL Server Data-Tier Application Framework
    - Microsoft SQL Server 2014 Transact-SQL Language Service
    - Microsoft Windows PowerShell Extensions for Microsoft SQL Server 2014
    - Microsoft SQL Server 2014 Shared Management Objects
    - Microsoft Command Line Utilities 11 for Microsoft SQL Server
    - Microsoft ODBC Driver 11 for Microsoft SQL Server – Windows
    - Microsoft JDBC Driver 4.0 for Microsoft SQL Server
    - Microsoft Drivers 3.0 for PHP for Microsoft SQL Server
    - Microsoft SQL Server 2014 Transact-SQL ScriptDom
    - Microsoft SQL Server 2014 Transact-SQL Compiler Service
    - Microsoft System CLR Types for Microsoft SQL Server 2014
    - Microsoft SQL Server 2014 Remote Blob Store
    - SQL RBS codeplex samples page
    - SQL Server Remote Blob Store blogs
    - Microsoft SQL Server Service Broker External Activator for Microsoft SQL Server 2014
    - Microsoft OData Source for Microsoft SQL Server 2014
    - Microsoft Balanced Data Distributor for Microsoft SQL Server 2014
    - Microsoft Change Data Capture Designer and Service for Oracle by Attunity for Microsoft SQL Server 2014
    - Microsoft SQL Server 2014 Master Data Service Add-in for Microsoft Excel
    - Microsoft SQL Server StreamInsight
    - Microsoft Connector for SAP BW for Microsoft SQL Server 2014
    - Microsoft SQL Server Migration Assistant
    - Microsoft SQL Server 2014 Upgrade Advisor
    - Microsoft OLEDB Provider for DB2 v5.0 for Microsoft SQL Server 2014
    - Microsoft SQL Server 2014 PowerPivot for Microsoft SharePoint 2013
    - Microsoft SQL Server 2014 ADOMD.NET
    - Microsoft Analysis Services OLE DB Provider for Microsoft SQL Server 2014
    - Microsoft SQL Server 2014 Analysis Management Objects
    - Microsoft SQL Server Report Builder for Microsoft SQL Server 2014
    - Microsoft SQL Server 2014 Reporting Services Add-in for Microsoft SharePoint

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL


  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, the entry-level Workgroup edition, and the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though; this is just a one-shot deal aimed at using the wizard on an ad-hoc basis.

    To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio for building your own packages, a fully functional IDE integrated into Visual Studio (you get the full VS 2005/2008 IDE with the product). All core functions will be available, but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes Standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think "basic" is a little harsh considering the power you get with Standard, but "advanced" covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no governors at all in SSIS, so whilst the SQL database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system.

    The advanced transforms only available with Enterprise edition:

    - Data Mining Training Destination
    - Data Mining Query Component
    - Fuzzy Grouping
    - Fuzzy Lookup
    - Term Extraction
    - Term Lookup
    - Dimension Processing Destination
    - Partition Processing Destination

    The advanced tasks only available with Enterprise edition:

    - Data Mining Query Task

    So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often-asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.


  • Using JCA Adapter with OSB 11.1.1.3

    - by James Taylor
    In OSB 10g, to use the JCA adapters you were required to use JDeveloper to create the necessary WSDLs, XSDs, etc. using the associated adapter wizard. These files were imported into Oracle Workshop (Eclipse) and used to create the business service as you would for any other web service. In 11g, unfortunately, JDeveloper is still required. The process has changed slightly, as described below. As an example I have used the JCA DB adapter.

    1. Start JDeveloper 11.1.1.3.
    2. Create a new SOA Application.
    3. Create a new SOA Project and call it DBAdapters.
    4. Choose the Empty Composite template.
    5. Drag a Database Adapter component to the External References panel on the composite and provide a service name.
    6. Create a new database connection, or use an existing one.
    7. Take note of the JNDI name, e.g. eis/DB/MyConnection. This will be used to configure the DB connection in the WebLogic Console.
    8. In my example I use a stored procedure, but you can use whatever operation you require. Please refer to the following link for other options: User's Guide for Technology Adapters. Select a schema and stored procedure.
    9. Once the procedure has been selected, accept the defaults and finish.
    10. Start up your OEPE version of Eclipse.
    11. Create a new Oracle Service Bus Configuration Project (you can use an existing project if you have one).
    12. Create a new Oracle Service Bus Project in the configuration project created above.
    13. Instead of importing the WSDL and XSD files, you import the jca file created in JDeveloper. In Eclipse, right-click the Oracle Service Bus Project and select Import –> Import.
    14. Choose File System.
    15. Browse to the directory where JDeveloper stores its project and select the jca, wsdl, and xsd files based on the service you created in step 5. Also check the 'Create selected folders only' radio button.
    16. When you import, you may see a little red x indicating the files are invalid. This is due to the location of the files. Open the invalid files and fix the path in relation to where you store your files in the OSB project.
    17. Once you have the files all valid, right-click the jca file and select Oracle Service Bus –> Generate Service. This will create a new Business Service.
    18. In the WebLogic Console, configure the JNDI name defined in step 7.
    19. You can now deploy your project and test.


  • SSIS code smell – Unused columns in the dataflow

    - by jamiet
    A code smell is defined on Wikipedia as being a “symptom in the source code of a program that possibly indicates a deeper problem”. It’s a term commonly used by our code-writing brethren to describe sub-optimal code, but I think the term can be applied equally well to SSIS packages too, as I shall now explain. One of my pet hates about SSIS development is packages that throw warnings of the form:

        The output column "ColumnName" (1358) on output "OLE DB Source Output" (1289) and component "OLE_SRC Name" (1279)
        is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.

    The warning is fairly self-explanatory – any column that appears in the data flow but doesn’t get used will throw this warning when the data flow is executed. It’s not the negligible performance degradation they cause that bothers me, though; it’s the clutter they cause in your log file/table. Take a look at the following screenshot if you don’t believe me: there are 231409 such warnings in the system that I took this screenshot from – that is 231409 log records that should not be there. The most infuriating thing about this warning is that it is so easily avoidable; eliminating such columns is a very quick and easy thing to do in the SSIS Designer. The only problem I see is that the warnings don’t occur until you execute the package – it would be preferable for the designer to have an unobtrusive way of informing you of them as well. Anyway, I digress… I consider such warnings to be a code smell because, to me, they’re symptomatic of a lack of due care and attention; a lack of developer discipline if you will. What other code smells can you think of when building SSIS packages? If I get a good list in the comments maybe I’ll compile them into a later blog post. @Jamiet


  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft) Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn about how to standardize your data and apply the business rules to validate data you must attend my session. MDS is very interesting concept, I will cover super short but very interesting 10 quick slides about this subject. I will make sure in very first 20 mins, you will understand following topics Introduction to Master Data Management What is Master Data and Challenges MDM Challenges and Advantage Microsoft Master Data Services Benefits and Key Features Uses of MDS Capabilities Key Features of MDS This slides decks will be followed by around 30 mins demo which will have story of entity, hierarchies, versions, security, consolidation and collection. I will be tell this story keeping business rules in center. We take one business rule which will be simple validation rule and will make it much more complex and yet very useful to product. I will also demonstrate few real life scenario where I will be talking about MDS and its usage. Do not miss this session. At the end of session there will be book awarded to best participant. My session details: Session: Master Data Services in Microsoft SQL Server 2008 R2 Date: April 12, 2010  Time: 2:30pm-3:30pm SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from single master view of your business entities. Also with MDS – Master Data-hub which is the vital component helps ensure reporting consistency across systems and deliver faster more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: TechEd, TechEdIn


  • IIS 7's Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx

    If you’re like me, you don’t really do a lot with COM components these days. For me, I’ve been ‘lucky’ to stay in the managed world for the past 6 or 7 years. Until last week. I’m running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP and lots of tables, in-line styles and a bunch of other late 90’s and early 2000’s goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. So I built some COM-InterOp DLLs (easily through VS2012’s Add Reference feature… nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec. There’s a document management system that I’m thinking of whose COM DLLs were not proper COM DLLs and crashed and burned every time .NET tried to call them through a COM-InterOp layer. Anyway, my test app worked like a champ and I felt confident that I could build a nice façade around the COM DLLs, wrap some functionality internally and only expose to my users/clients what they really needed. So I did this, built some tests and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn’t much overhead involved in going through the COM-InterOp layer. You know where this is going, don’t you? So I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message:

        Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error:
        80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch! Then I came across this question on Experts Exchange, and at the bottom of the page someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah, my machine is 64-bit; these COM DLLs I’m using are old and are definitely 32-bit. I didn’t check for that and didn’t even think about it. But I went ahead and looked at the app pool that my web site was running under, and what did I see? Yep: select your app pool in IIS 7.x, click on Advanced Settings and check for “Enable 32-bit Applications”. I went ahead and set it to True and my test application suddenly worked. Hope this helps keep somebody out there from pulling out their hair.
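    For reference, the same setting can also be flipped from the command line using the appcmd tool that ships with IIS 7.x; this is a hedged sketch, and the app pool name below is a placeholder, not one from the post:

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true

    After changing the setting, recycle the app pool so the worker process restarts as a 32-bit process.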


  • Integrating Oracle Forms Applications 11g Into SOA (4-6/Mai/10)

    - by Claudia Costa
    Workshop description: this free three-day workshop is targeted at Oracle Forms professionals interested in integrating Oracle Forms into a Service Oriented Architecture. The workshop highlights how Forms can be part of a Service Oriented Architecture and how the Oracle Forms functionalities make it possible to integrate existing (or new) Forms applications with new or existing development utilizing Service Oriented Architecture concepts. The goal is to understand the incremental approach that Forms provides to developers who need to extend their business platform to JEE, allowing Oracle Forms customers to retain their investment in Oracle Forms while leveraging the opportunities offered by complementary technologies. During the event the attendees will implement the Oracle Forms functionalities that make it possible to integrate with SOA. Register Now!

    Prerequisites:

    · Knowledge of the Oracle Forms development environment (mandatory)
    · Basic knowledge of the Oracle database
    · Basic knowledge of the Java programming language
    · Basic knowledge of Oracle JDeveloper or another Java IDE

    System requirements: this workshop requires attendees to provide their own laptops. Attendee laptops must meet the following minimum hardware/software requirements:

    · Laptop/PC with a minimum of 4 GB RAM
    · Oracle Database
    · Oracle Forms 11g R1 PS1 (WebLogic Server 10.1.3.2 + Portal, Forms, Reports and Discoverer)
    · Oracle JDeveloper 11g R1 PS1 - http://download.oracle.com/otn/java/jdeveloper/1112/jdevstudio11112install.exe
    · TCP-IP Loopback Adapter installation (before the SOA Suite installation)
    · Oracle SOA Suite 11g R1 PS1 (without the BAM component); when asked for an admin password, please use 'welcome1'
      http://download.oracle.com/otn/nt/middleware/11g/ofm_rcu_win_11.1.1.2.0_disk1_1of1.zip
      http://download.oracle.com/otn/nt/middleware/11g/ofm_soa_generic_11.1.1.2.0_disk1_1of1.zip
    · Oracle BI Publisher 10.1.3.4.1 - http://download.oracle.com/otn/nt/ias/101341/bipublisher_windows_x86_101341.zip
    · Oracle BI Publisher Desktop 10.1.3.4 - http://download.oracle.com/otn/nt/ias/101341/bipublisher_desktop_windows_x86_101341.zip
    · At least 1 Oracle Forms solution already upgraded to the Oracle FMW 11g platform

    Schedule and location: 4-6 May / 9:30-18:00, Oracle, Porto Salvo. Register Here. For more information please contact: [email protected]


  • JMX Based Monitoring - Part Two - JVM Monitoring

    - by Anthony Shorten
    This is the second article in the series focusing on the JMX based monitoring capabilities possible with the Oracle Utilities Application Framework. In all versions of the Oracle Utilities Application Framework, it is possible to use the basic JMX based monitoring available with the Java Virtual Machine to provide basic statistics about the JVM. In Java 5 and above, the JVM automatically allows local monitoring of JVM statistics from an appropriate console. When I say local, I mean the monitoring tool must be executed from the same machine (and in some cases by the same user that is running the JVM) to connect to the JVM directly. If you are using jconsole, for example, then you must have access to a GUI (X-Windows or Windows) to display the jconsole output. This is the easiest way of monitoring without doing too much configuration, but is not always practical. Java offers a remote monitoring capability to allow you to connect to a remotely executing JVM from a console (like jconsole). To use this facility, additional JVM options must be added to the command line that started the JVM. Details of the additional options for the version of Java you are running are located at the JMX information site. Typically, to connect remotely to a running JVM, that JVM must be configured with the following categories of options:

    - JMX port - the JVM must allow connections on a listening port specified on the command line.
    - Connection security - the connection to the JVM can be secured. This is recommended, as JMX is not just a monitoring protocol, it is a management protocol. It is possible to change values in a running JVM using JMX and there are NO "Are you sure?" safeguards.

    For an Oracle Utilities Application Framework based application, there are a few guidelines when configuring and using this JMX based remote monitoring of the JVMs:

    - Online JVM - the JVM used to run the online system is embedded within the J2EE Web Application Server. To enable JMX monitoring on this JVM you can either change the startup script that starts the Web Application Server, or check whether your J2EE Web Application Server natively supports JVM statistics collection.
    - Child JVMs (COBOL only) - the child JVMs should not be monitored using this method, as they are recycled regularly by the configuration and therefore the statistics collected are of little value.
    - Batch threadpools - batch already has a JMX interface (which will be covered in another article). Additional monitoring can be enabled, but the base supported monitoring is sufficient for most needs.

    If you are an Oracle Utilities Application Framework site, then you can specify the additional options for JMX Java monitoring on the OPTS parameters supported for each component of the architecture. Just ensure the port numbers used are unique for each JVM running on any machine.
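    As an illustrative example of those option categories (the port number, and the choice to disable SSL and authentication, are assumptions suitable only for a locked-down test environment; check the JMX documentation for your Java version for the exact properties and the recommended security setup), a typical remote-monitoring configuration adds system properties like these to the JVM command line:

        java -Dcom.sun.management.jmxremote \
             -Dcom.sun.management.jmxremote.port=9999 \
             -Dcom.sun.management.jmxremote.ssl=false \
             -Dcom.sun.management.jmxremote.authenticate=false \
             ... other options ...

    A console such as jconsole can then attach remotely using host:9999. Given the management capabilities mentioned above, authentication and SSL should normally be left enabled outside of closed test environments.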


  • Next Phase of ECM 11g Now Available - New UCM & URM 11g, & Updated I/PM & IRM 11g

    - by michelle.huff
    We're excited to announce that the Oracle Enterprise Content Management Suite 11g is now available! Today, Oracle announced ECM Suite 11g, a part of Fusion Middleware 11gR1 Patchset 2 release, which builds upon the Imaging and Process Management (I/PM) and Information Rights Management (IRM) 11g release earlier this year. Universal Content Management (UCM) and Universal Records Management (URM) 11g are now available with many new features and enhancements. All ECM products are localized into 27 languages, use a single repository, a single installer, centralized administration, and all run on the same Fusion Middleware tech stack. Oracle ECM Suite 11g, is better integrated to fit the way you work, with extreme performance and extreme scalability. Universal Content Management One click Web content management - brings Web content management authoring, design and presentation capabilities directly into how organizations design sites, portals, and custom Web applications. Simply take in the right amount of WCM that meets your needs - all without having to rewrite the application or port it over to a new technology stack or framework. Greater business user empowerment - with next generation desktop integrations and "smart productivity folders", new Web site "design mode" for business users, and enhanced rich media support enabling users to better work with photography, graphics, videos & podcasts created today as well as contribute content within Flash files directly from the Web. Advanced manageability with extreme performance & scalability - centralized system monitoring, installation, logging, performance metrics & diagnostics, with new built in "fast check-in" features, redesigned component management interface - all running on Fusion Middleware infrastructure. Universal Records Management Enhanced user experience: Oracle URM 11g makes records management easier for both business users and records administrators. Simplifications in the end user experience allow the creation of bookmarks into often-used part of the file plan, easy copying of categories and dispositions, and integrated folder and records search. The records management dashboard provides a consolidated view into records administrator tasks and system performance. DoD 5015.02 v3: Oracle URM is fully certified against all part of the US Department of Defense records management standard - baseline, classified, and Freedom of Information and Privacy Act. This enables Federal, state, & local governments & public agencies, as well as private companies, to maintain regulated compliance. Expanded functionality through Oracle integrations: Oracle URM 11g allows for an expanded set of functionality through integration capabilities with other Oracle products. This includes configurable records definition capabilities directly within a UCM instance. An out of the box integration with Oracle BI Publisher provides easily configured and robust reporting. Additionally, 11g offers an out of the box Oracle Secure Enterprise Search integration enabling real time full text discovery across disparate systems in an organization. Read the Press Release Watch the 3 Minute ECM 11g Video Get Up to Speed with the What's New in ECM Suite Datasheet Learn More on OTN with new tutorials, downloads and whitepapers


  • UnsatisfiedLinkError on xawt when running HEC-HMS.sh

    - by G.Oxsen
    I am a recent adopter of Linux and this problem has got me stumped. I use HEC-HMS and HEC-DSSVue for work on a regular basis. I have been using the Windows versions in Wine but they are really buggy, so I decided to try out the Linux versions. The links below will take you to the download pages for these two programs. They are free programs for hydrology and data management. Once I install them and attempt to run the shell file (HEC-HMS.sh for example), I get a ton of Java errors that I do not understand. If I had to guess, I would say that the Java files in question cannot be found. When I check to see if Java is installed, it is. Here is the output from the terminal from trying to run HEC-HMS.sh:

        Exception in thread "Thread-1" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
            at sun.awt.DebugHelper.<clinit>(Unknown Source)
            at java.awt.Component.<clinit>(Unknown Source)
            at javax.swing.ImageIcon.<clinit>(Unknown Source)
            at hms.i.c(Unknown Source)
            at hms.i.b(Unknown Source)
            at hms.K.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Exception in thread "Thread-4" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.awt.Toolkit.loadLibraries(Unknown Source)
            at java.awt.Toolkit.<clinit>(Unknown Source)
            at sun.print.CUPSPrinter.<clinit>(Unknown Source)
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at sun.print.UnixPrintServiceLookup.refreshServices(Unknown Source)
            at sun.print.UnixPrintServiceLookup$PrinterChangeListener.run(Unknown Source)
        Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class java.awt.Toolkit
            at java.awt.Color.<clinit>(Unknown Source)
            at hms.model.l.<init>(Unknown Source)
            at hms.model.ProjectManager.<init>(Unknown Source)
            at hms.Hms.<init>(Unknown Source)
            at hms.Hms.main(Unknown Source)
        Exception in thread "Thread-2" java.lang.NoClassDefFoundError: Could not initialize class sun.print.CUPSPrinter
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at javax.print.PrintServiceLookup.lookupDefaultPrintService(Unknown Source)
            at hms.util.f.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)

    I get similar outputs when I try to run HEC-DSSVue.sh. If anyone could shed some light on a solution I would really appreciate it. Update: the problem turned out to be that the program needed 32-bit versions of the particular dependencies.

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
    Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one - Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of the more memorable lines (there were a few) was: "I feel the need, the need for speed." I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the importance of application speed, or at least the perception of speed, to the user experience. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment.

    Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us, the next time we Google something, to think about the message telling us that Google has found hundreds of thousands or millions of results for us in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see, and how many can we use immediately? Yet this simple message communicating the total results available to us works a special magic of speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten: it displays the first 50 records in a table and uses a scrollbar indicating the total size of the data record set. The user scrolls and the application automatically retrieves more records as needed.

    Application speed and its perception by users are worth bearing in mind the next time you're at a customer site and the IT department demands that you retrieve every record from the database. Just think of... Dave Ensor: "I'll give you all the rows you ask for in one second. If you promise to use them." (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. And if you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.
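    To make the perceived-speed idea concrete, here is a minimal, hypothetical Java sketch of the same incremental-fetch pattern: report the total row count up front (so the summary and scrollbar can render immediately) and fetch each page of rows only when the user scrolls to it. This is not ADF's actual API; the class and method names below are invented for illustration.

        import java.util.List;
        import java.util.function.IntFunction;

        // Minimal sketch of lazy, paged fetching: the total count is known and shown
        // immediately, but only one page of rows is retrieved at a time.
        class LazyTableModel<T> {
            private static final int PAGE_SIZE = 50;

            private final long totalCount;                 // drives the "About N results" label and scrollbar
            private final IntFunction<List<T>> fetchRows;  // fetchRows.apply(offset) loads one page of rows

            LazyTableModel(long totalCount, IntFunction<List<T>> fetchRows) {
                this.totalCount = totalCount;
                this.fetchRows = fetchRows;
            }

            long totalCount() {
                return totalCount;                         // available before any row is loaded
            }

            List<T> pageAt(int pageIndex) {
                return fetchRows.apply(pageIndex * PAGE_SIZE);  // called lazily while scrolling
            }
        }

    A UI layer would call totalCount() once to size the scrollbar and show the summary, then call pageAt(n) only for the pages the user actually scrolls to, assuming fetchRows runs the real query with an OFFSET/LIMIT-style clause.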

    Read the article

  • Guidance and Pricing for MSDN 2010

    - by John Alexander
    Sorry for the rather lengthy post here. I get asked this all the time, so I decided to post it… Visual Studio 2010 editions will be available on April 12, 2010.

    The editions compared below are Professional with MSDN Essentials, Professional with MSDN, Premium with MSDN, Ultimate with MSDN, and Test Professional with MSDN; where a row lists per-edition values, they follow that order.

    Product Features
    Debugging and Diagnostics: IntelliTrace (Historical Debugger), Static Code Analysis, Code Metrics, Profiling, Debugger
    Testing Tools: Unit Testing, Code Coverage, Test Impact Analysis, Coded UI Test, Web Performance Testing, Load Testing1, Microsoft Test Manager 2010, Test Case Management2, Manual Test Execution, Fast-Forward for Manual Testing, Lab Management Configuration3
    Integrated Development Environment: Multiple Monitor Support, Multi-Targeting, One Click Web Deployment, JavaScript and jQuery Support, Extensible WPF-Based Environment
    Database Development: Database Deployment, Database Change Management2, Database Unit Testing, Database Test Data Generation, Data Access
    Development Platform Support: Windows Development, Web Development, Office and SharePoint Development, Cloud Development, Customizable Development Experience
    Architecture and Modeling: Architecture Explorer, UML® 2.0 Compliant Diagrams (Activity, Use Case, Sequence, Class, Component), Layer Diagram and Dependency Validation, Read-only diagrams (UML, Layer, DGML Graphs)
    Lab Management: Virtual environment setup & tear down3, Provision environment from template3, Checkpoint environment3
    Team Foundation Server: Version Control2, Work Item Tracking2, Build Automation2, Team Portal2, Reporting & Business Intelligence2, Agile Planning Workbook2, Microsoft Visual Studio Team Explorer 2010, Test Case Management2

    MSDN Subscription – Software and Services for Production Use
    Windows Azure Platform: 20 hrs/mo †, 50 hrs/mo †, 100 hrs/mo †, 250 hrs/mo †, n/a
    Microsoft Visual Studio Team Foundation Server 2010
    Microsoft Visual Studio Team Foundation Server 2010 CAL: 1, 1, 1, 1
    Microsoft Expression Studio 3
    Microsoft Office Professional Plus 2010, Project Professional 2010, Visio Premium 2010 (following Office 2010 launch)

    MSDN Subscription – Software for Development and Testing4
    Windows 7, Windows Server 2008 R2 and SQL Server 2008; Toolkits, Software Development Kits, Driver Development Kits; Previous versions of Windows (client and server operating systems); Previous versions of Microsoft SQL Server; Microsoft Office; Microsoft Dynamics; All other Servers; Windows Embedded operating systems; Teamprise

    MSDN Subscription – Other Benefits
    Technical support incidents: 0, 2, 4, 4, 2
    Priority support in MSDN Forums
    Microsoft e-learning collections (typically 10 courses or 20 hours): 0, 1, 2, 2, 1
    MSDN Flash newsletter, MSDN Online Concierge, MSDN Magazine
    System Requirements: View (per edition)
    Buy from (MSRP): $799, $1,199, $5,469, $11,899, $2,169
    Renew from (MSRP): $549 (upgrade), $799, $2,299, $3,799, $899

    † Availability varies by country and subscription level. Details available on the MSDN site.
    1. May require one or more Microsoft Visual Studio Load Test Virtual User Pack 2010
    2. Requires Team Foundation Server and a Team Foundation Server CAL
    3. Requires Microsoft Visual Studio Lab Management 2010
    4. Per-user license allows unlimited installations and use for designing, developing, testing, and demonstrating applications.
UML is a registered trademark of Object Management Group, Inc. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries.

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Physical hardware components take up room, use electricity, create heat and therefore need cooling, and require wiring and special storage units. All of these requirements cost money to rent at a data center or to build out at a local facility. In some cases, this can be a catalyst for evaluating options to remove this infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation: There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS): The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software, build the systems, and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe), but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS): If there is already software available that does what you need, it may make sense to simply purchase not only the software license but also the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS): If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit to using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms.

    No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage of this form of computing: choice.

    References: 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0  Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • Java Space on Parleys

    - by Yolande Poirier
    Now available! A great selection of JavaOne 2010 and JVM Language Summit 2010 sessions, as well as Oracle Technology Network TechCasts, on the new Java Space on the Parleys website. Oracle partnered with Stephan Janssen, founder of Parleys, to make this happen. The Parleys website offers a user-friendly experience for viewing online content. You can download some of the talks to your desktop or watch them on the go on mobile devices. The current selection is a well of expertise from top Java luminaries and Oracle experts.

    JavaOne 2010 sessions:
    · Best practices for signing code by Sean Mullan
    · Building software using rich client platforms by Rickard Thulin
    · Developing beyond the component libraries by Ryan Cuprak
    · Java API for keyhole markup language by Florian Bachmann
    · Avoiding common user experience anti-patterns by Burk Hufnagel
    · Accelerating Java workloads via GPUs by Gary Frost

    JVM Languages Summit 2010 sessions:
    · Mixed language project compilation in Eclipse by Andy Clement
    · Gathering the threads by John Rose
    · LINQ: language features for concurrency by Neal Gafter
    · Improvements in OpenJDK useful for JVM languages by Eric Caspole
    · Symmetric Multilanguage - VM Architecture by Oleg Pliss

    Special interviews with Oracle experts on product innovations:
    · Ludovic Champenois, Java EE architect, on GlassFish 3.1 and Java EE
    · John Jullion-Ceccarelli and Martin Ryzl on NetBeans IDE 6.9

    You can choose to listen to a section of a talk using the agenda view and search for related content while watching a presentation. Enjoy the Java content and vote on it!

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about architectural choices in the web-application/Java/Python world. In the C/C++ world the available (open source) choices for implementing web applications are pretty much limited to zero; with Java or Python the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO-driven) domain model (according to M. Fowler's EAA patterns), implemented in a mature OO language (Java, C++).

    The background: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (these are usually even self-contained, since the HW components are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully by a Java-based GUI application over a TCP/IP connection to the main controlling HW component. That application has some drawbacks and design flaws for historical reasons.

    Now we want to develop an abstract model (domain model) for configuring and monitoring those HW components that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to carry too much implementation/integration overhead for viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTOs used to interact with a domain model API. But integrating Google Protobuf messages client-side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (though I'm not sure I'm fully informed about this). Is there a lightweight framework available (suitable for a limited ARM Linux platform) that supports such an architecture for realizing a web application?

    UPDATE: Based on my recent research and comments from colleagues, I've noticed that Java (and a JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9, with memory and MCU costs that are not up for discussion), whereas C/C++ modules would do well here (since they form the native interface for Python extensions, don't they?). I can imagine providing the domain model through an appropriate C/C++ API (though I still think that means more effort and higher skill requirements for the involved developers). I'm still searching for a good approach that supports such an architecture. I'd appreciate any pointers!
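    As an illustration of the DTO idea raised in the question, here is a minimal, hypothetical Java sketch: a plain DTO class mirroring a Protobuf-style HW-component configuration message, serialized to and from JSON with the Jackson library for the view/controller layer. The class, field, and value names are invented for illustration and are not taken from the poster's system.

        import com.fasterxml.jackson.databind.ObjectMapper;

        // Hypothetical DTO mirroring a Protobuf-style configuration message.
        // A plain Java class keeps the domain model independent of the wire format.
        public class ComponentConfigDto {
            public String componentId;
            public String firmwareVersion;
            public int pollIntervalSeconds;

            public ComponentConfigDto() {}  // no-args constructor needed for Jackson deserialization

            public ComponentConfigDto(String componentId, String firmwareVersion, int pollIntervalSeconds) {
                this.componentId = componentId;
                this.firmwareVersion = firmwareVersion;
                this.pollIntervalSeconds = pollIntervalSeconds;
            }

            public static void main(String[] args) throws Exception {
                ObjectMapper mapper = new ObjectMapper();

                // Domain model -> JSON for the web view/controller layer
                ComponentConfigDto dto = new ComponentConfigDto("hw-42", "1.3.7", 30);
                String json = mapper.writeValueAsString(dto);

                // JSON coming back from the view -> DTO handed to the domain model API
                ComponentConfigDto received = mapper.readValue(json, ComponentConfigDto.class);
                System.out.println(json + " -> " + received.componentId);
            }
        }

    On the device-facing side the same DTO could be populated from a Protobuf message, and on the browser-facing side from JSON, so the domain model itself stays free of serialization concerns.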

    Read the article
