Search Results

Search found 20640 results on 826 pages for 'key combination'.


  • Why can't I get a Thumb to be larger on a styled scrollbar in WPF?

    - by Tollo
    I have taken the MSDN templates for styling a scrollbar and have been trying to apply my own styles (an image on each track repeater and a separate image on the thumb), but I am unable to make the thumb change size. I have tried setting the Width on the Track.Thumb style and also in the ControlTemplate of the Thumb itself. For some reason only the default size is ever rendered, even when the thumb actually occupies much more space. Does anyone know how to make the thumb render at the size that I want? The XAML I am using for the styles is here:

        <Style x:Key="ScrollBarThumb" TargetType="{x:Type Thumb}">
            <Setter Property="Background" Value="Red" />
            <Setter Property="OverridesDefaultStyle" Value="True"/>
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="{x:Type Thumb}">
                        <Image Source="sampleimg.png" Stretch="Fill" />
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

        <ControlTemplate x:Key="ArfleHBar" TargetType="{x:Type ScrollBar}">
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition MaxWidth="18"/>
                    <ColumnDefinition Width="0.00001*"/>
                    <ColumnDefinition MaxWidth="18"/>
                </Grid.ColumnDefinitions>
                <RepeatButton Grid.Column="0" Style="{StaticResource ScrollBarLineButton}"
                              Width="18" Command="ScrollBar.LineLeftCommand"
                              Content="M 4 0 L 4 8 L 0 4 Z" />
                <Track Name="PART_Track" Grid.Column="1" IsDirectionReversed="False">
                    <Track.DecreaseRepeatButton>
                        <RepeatButton Style="{StaticResource ScrollBarPageButton}"
                                      Command="ScrollBar.PageLeftCommand" Width="100"/>
                    </Track.DecreaseRepeatButton>
                    <Track.Thumb>
                        <Thumb Style="{StaticResource ScrollBarThumb}" Margin="0,1,0,1"
                               Width="250" Padding="0,0,0,0" />
                    </Track.Thumb>
                    <Track.IncreaseRepeatButton>
                        <RepeatButton Style="{StaticResource ScrollBarPageButton}"
                                      Command="ScrollBar.PageRightCommand" Width="100" />
                    </Track.IncreaseRepeatButton>
                </Track>
                <RepeatButton Grid.Column="3" Style="{StaticResource ScrollBarLineButton}"
                              Width="18" Command="ScrollBar.LineRightCommand"
                              Content="M 0 0 L 4 4 L 0 8 Z"/>
            </Grid>
        </ControlTemplate>

        <Style x:Key="ArfleBar" TargetType="{x:Type ScrollBar}">
            <Setter Property="SnapsToDevicePixels" Value="True"/>
            <Setter Property="OverridesDefaultStyle" Value="true"/>
            <Style.Triggers>
                <Trigger Property="Orientation" Value="Horizontal">
                    <Setter Property="Width" Value="Auto"/>
                    <Setter Property="Height" Value="18" />
                    <Setter Property="Template" Value="{StaticResource ArfleHBar}" />
                </Trigger>
            </Style.Triggers>
        </Style>
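
    For context on why the explicit Width may be ignored: inside a ScrollBar template the Track element arranges the Thumb itself and derives the thumb's length from the ScrollBar's ViewportSize relative to the Minimum/Maximum range, so a Width set on the Thumb is overridden during layout, while minimum sizes are respected. A small sketch of two workarounds along those lines (untested, using standard WPF properties):

        <!-- Option 1: give the Thumb a floor via MinWidth; the Track honours minimum sizes -->
        <Thumb Style="{StaticResource ScrollBarThumb}" Margin="0,1,0,1" MinWidth="250" />

        <!-- Option 2: steer the computed length by setting ViewportSize on the ScrollBar itself -->
        <ScrollBar Orientation="Horizontal" Style="{StaticResource ArfleBar}"
                   Minimum="0" Maximum="100" ViewportSize="40" />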

    Read the article

  • Create dropdown from multi-dimensional array + PHP class

    - by chris
    Hi there, I am writing a class that creates a DOB selection dropdown. I am having trouble figuring out dropdown(); it seems to work, but not exactly. The code creates just one dropdown, and all of the day, month and year data ends up under that single selection, like:

        <label><sup>*</sup>DOB</label>
        <select name="form_bod_year">
            <option value=""/>
            <option selected="" value="0">1</option>
            <option value="1">2</option>
            <option value="2">3</option>
            <option value="3">4</option>
            <option value="4">5</option>
            <option value="5">6</option>
            ..
            <option value="29">30</option>
            <option value="30">31</option>
            <option value="1">January</option>
            <option value="2">February</option>
            ..
            <option value="11">November</option>
            <option value="12">December</option>
            <option selected="" value="0">1910</option>
            <option value="1">1911</option>
            ..
            <option value="98">2008</option>
            <option value="99">2009</option>
            <option value="100">2010</option>
        </select>

    Here is my code; I wonder why all the data ends up in one selection. It has to be three selections - Day : Month : Year.

        //dropdown connector
        class DropDownConnector
        {
            var $dropDownsDatas;
            var $field_label;
            var $field_name;
            var $locale;

            function __construct($dropDownsDatas, $field_label, $field_name, $locale)
            {
                $this->dropDownsDatas = $dropDownsDatas;
                $this->field_label = $field_label;
                $this->field_name = $field_name;
                $this->locale = $locale;
            }

            function getValue()
            {
                return $_POST[$this->field_name];
            }

            function dropdown()
            {
                $selectedVal = $this->getValue($this->field_name);
                foreach ($this->dropDownsDatas as $keys => $values) {
                    foreach ($values as $key => $value) {
                        $selected = ($key == $selectedVal ? "selected" : "");
                        $options .= sprintf('<option %s value="%s">%s</option>', $selected, $key, $value);
                    }
                }
                return $select_start = '<select name="' . $this->field_desc . '">' . $options . '</select>';
            }

            function getLabel()
            {
                $non_req = $this->getNotRequiredData();
                $req = in_array($this->field_name, $non_req) ? '' : '<sup>*</sup>';
                return $this->field_label ? $req . $this->field_label : '';
            }

            function __toString()
            {
                $id = $this->field_name;
                $label = $this->getLabel();
                $field = $this->dropdown();
                return '<label for="' . $this->field_name . '">' . $label . '</label>' . $field;
            }
        }

        function generateForm($lang, $country_list, $states_list, $days_of_month, $month_list, $years)
        {
            $xx = array(
                'form_bod_day'   => $days_of_month,
                'form_bod_month' => $month_list,
                'form_bod_year'  => $years);
            echo $dropDownConnector = new DropDownConnector($xx, 'DOB', 'bod', 'en-US');
        }

        // Call php class to use class on external functions.
        $avInq = new formGenerator;
        $lang = 'en-US';
        echo generateForm($lang, $country_list, $states_list, $days_of_month, $month_list, $years);
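
    For what it's worth, the single combined dropdown comes from accumulating every option into one $options string across the outer foreach and wrapping the result in a single select. A minimal sketch of the usual fix, written as a standalone helper rather than the class method above (renderDobDropdowns is a hypothetical name):

        // One <select> per data set (day, month, year) instead of one combined select.
        // $dropDownsDatas has the same shape as $xx in generateForm().
        function renderDobDropdowns(array $dropDownsDatas)
        {
            $html = '';
            foreach ($dropDownsDatas as $field_name => $values) {
                $selectedVal = isset($_POST[$field_name]) ? $_POST[$field_name] : null;
                $options = '';
                foreach ($values as $key => $value) {
                    $selected = ($key === $selectedVal) ? ' selected="selected"' : '';
                    $options .= sprintf('<option%s value="%s">%s</option>', $selected, $key, $value);
                }
                // open and close the <select> inside the outer loop, not after it
                $html .= sprintf('<select name="%s">%s</select>', $field_name, $options);
            }
            return $html;
        }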

    Read the article

  • Iframe JavaScript call to Flex

    - by Vince Lowe
    I have a Flex application with an iframe layered on top. I want to make a call from the iframe to Flex with JavaScript. So far I have tried this. This is the object containing the swf embed in the ROOT document:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" id="IPRS_Dispatcher"
                width="1400" height="1000"
                codebase="http://fpdownload.macromedia.com/get/flashplayer/current/swflash.cab">
            <param name="movie" value="DispatcherMain.swf" />
            <param name="quality" value="high" />
            <!-- <param name="bgcolor" value="${bgcolor}" /> -->
            <param name="allowScriptAccess" value="sameDomain" />
            <param name='flashVars' value='strLang=english&strIPRSSrvHost=&strGPSSrvHost=192.168.1.130&strGPSSrvSoapPort=8081&strGPSSrvFwdPort=26000&strLoginMode=simple&strSOSSrvHost=192.168.1.80&strSOSSrvSoapPort=8082&strSOSSrvFwdPort=26001&strSOSLoginMode=simple&strUserSIP=&strUserPswd=&nDelayForMapReadySecs=10&nGPSUpdatesRateSecs=120&nGPSSubscriptionsIntervalMinutes=10&nLat=35.0&nLng=32.5&nZoomLevel=5&strClientServiceVersion=2.1.36.19&nPathDotsSize=1&nPathWidth=5&bHideAnnounce=false&bHideEmergencyPan=true&strMapMarkerLabelMode=name&key=ABQIAAAAYbXZyR09wFj6QsiYucHpGxQEO34WZEWuIFq1A7yobGXPE-K5exQV9ZYR6NIkF8LCR8wsYvlhOIYsfA' />
            <embed id="IPRS_Dispatcher2" src="DispatcherMain.swf"
                   flashVars='strLang=english&strIPRSSrvHost=&strGPSSrvHost=192.168.1.130&strGPSSrvSoapPort=8081&strGPSSrvFwdPort=26000&strLoginMode=simple&strSOSSrvHost=192.168.1.80&strSOSSrvSoapPort=8082&strSOSSrvFwdPort=26001&strSOSLoginMode=simple&strUserSIP=&strUserPswd=&nDelayForMapReadySecs=10&nGPSUpdatesRateSecs=120&nGPSSubscriptionsIntervalMinutes=10&nLat=35.0&nLng=32.5&nZoomLevel=5&strClientServiceVersion=2.1.36.19&nPathDotsSize=1&nPathWidth=5&bHideAnnounce=false&bHideEmergencyPan=true&strMapMarkerLabelMode=name&key=ABQIAAAAYbXZyR09wFj6QsiYucHpGxQEO34WZEWuIFq1A7yobGXPE-K5exQV9ZYR6NIkF8LCR8wsYvlhOIYsfA'
                   width="1400" height="1000" name="IPRS_Dispatcher" align="middle"
                   play="true" loop="false" quality="high" allowScriptAccess="sameDomain"
                   type="application/x-shockwave-flash"
                   pluginspage="http://www.adobe.com/go/getflashplayer">
                <!-- bgcolor="${bgcolor}" -->
            </embed>
        </object>

    I have added addCallback for the function I want to expose:

        ExternalInterface.addCallback("sendToFlash", callFromJavaScript);

    FYI:

        public function callFromJavaScript(str):void
        {
            LogAddItem(30, str);
        }

    In my IFRAME I have added the function:

        function callToFlash(str)
        {
            var swf = parent.top.$("#IPRS_Dispatcher");
            var bool = swf.sendToFlash(str);
        }

    Now I am getting an error in Chrome - Uncaught TypeError: Object [object Object] has no method 'sendToFlash'

    UPDATE 25/06/2012 - output from console.log(swf):

        [ <embed src="DispatcherMain.swf" width="100%" height="100%" align="middle" id="IPRS_Dispatcher" quality="high" name="IPRS_Dispatcher" wmode="opaque" allowfullscreen="true" allowscriptaccess="always" pluginspage="http://www.adobe.com/go/getflashplayer" flashvars="strOEM=mt&strSplashImage=./assets/loadinglogo.jpg&strLang=english&strSelectableLangs=english,chinese,portuguese_brazil,german,french,spanish&strIPRSSrvHost=85.118.26.10&strGPSSrvHost=85.118.26.16&strGPSSrvSoapPort=8081&strGPSSrvFwdPort=26000&strLoginMode=simple&strUserSIP=&strUserPswd=&strSOSSrvHost=85.118.26.17&strSOSSrvSoapPort=8082&strSOSSrvFwdPort=26001&strClientServicePort=&strSOSLoginMode=simple&themeColor=a7c3e3&showRTTPriority=false&showGPSUpdateRate=true&nSamePosErrMeters=300&nDelayForMapReadySecs=10&nGPSUpdatesRateSecs=65535&nGPSSubscriptionsIntervalMinutes=10&nLat=48.311058&nLng=11.636753&nZoomLevel=13.0&strClientServiceVersion=2.1.36.04&bDispatcherEndsSessions=true&nSOSSubscriptionsIntervalMinutes=1&GPSKATime=20&SOSKATime=20&nPathDotsSize=2&nPathWidth=5&bHideAnnounce=false&bHideEmergencyPan=false&bHideDebugLog=false&showMutedColumn=false&strLogFilter=&strMapMarkerLabelMode=name&key=ABQIAAAAfJEcVYS6-jYp2UOUy8Wh5xSCeXAFBxztfWxjY5w1WzTnKjnSVRS7Uu5XoOIwTg2R_tq_c0QSCPxSHw" type="application/x-shockwave-flash"> ]
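
    The console output offers a hint: $() returns a jQuery wrapper object, while ExternalInterface callbacks are attached to the raw embed/object DOM element. A small sketch of the usual workaround, assuming the same element id as above:

        function callToFlash(str) {
            // unwrap the jQuery object: the callback lives on the DOM element itself
            var swf = parent.top.document.getElementById("IPRS_Dispatcher");
            // equivalently: var swf = parent.top.$("#IPRS_Dispatcher")[0];
            if (swf && typeof swf.sendToFlash === "function") {
                return swf.sendToFlash(str);
            }
        }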

    Read the article

  • Merge 2 PHP arrays which aren't of the same length by value

    - by Iain Urquhart
    Excuse me if this has indeed been asked before; I couldn't see anything that fitted my needs among the dozens of similarly titled posts out there ;) I'm trying to merge 2 PHP arrays which aren't of the same length, joining them on a value stored under an identical key in both arrays. My first query produces an array from a nested set:

        array(
            1 => array(
                'node_id' => 1, 'lft' => 1, 'rgt' => 4, 'moved' => 0,
                'label' => 'Home', 'entry_id' => 1, 'template_path' => '',
                'custom_url' => '/', 'extra' => '', 'childs' => 1,
                'level' => 0, 'lower' => 0, 'upper' => 0
            ),
            2 => array(
                'node_id' => 2, 'lft' => 2, 'rgt' => 3, 'moved' => 0,
                'label' => 'Home', 'entry_id' => NULL, 'template_path' => '',
                'custom_url' => 'http://google.com/', 'extra' => '', 'childs' => 0,
                'level' => 1, 'lower' => 0, 'upper' => 0
            )
        );

    My second array returns some additional key/values I'd like to insert into the above array:

        array(
            'entry_id' => 1,
            'entry_title' => 'This is my title',
        );

    I want to merge the two arrays, inserting the additional information into the sub-arrays that match on the key 'entry_id' while keeping the sub-arrays which don't match. So, by combining the two arrays, I'd end up with:

        array(
            1 => array(
                'node_id' => 1, 'lft' => 1, 'rgt' => 4, 'moved' => 0,
                'label' => 'Home', 'entry_id' => 1, 'template_path' => '',
                'custom_url' => '/', 'extra' => '', 'childs' => 1,
                'level' => 0, 'lower' => 0, 'upper' => 0,
                'entry_title' => 'This is my title'
            ),
            2 => array(
                'node_id' => 2, 'lft' => 2, 'rgt' => 3, 'moved' => 0,
                'label' => 'Home', 'entry_id' => NULL, 'template_path' => '',
                'custom_url' => 'http://google.com/', 'extra' => '', 'childs' => 0,
                'level' => 1, 'lower' => 0, 'upper' => 0,
                'entry_title' => NULL
            )
        );

    Actually, writing this out makes me think I should do it via SQL... Any help/advice greatly appreciated...
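
    For reference, one way this is commonly done in PHP: index the second result set by entry_id and then walk the tree once. A minimal sketch, where $tree and $entry_rows are hypothetical names for the two query results above (and the second query is assumed to return one row per entry_id):

        // index the extra rows by entry_id
        $extras = array();
        foreach ($entry_rows as $row) {
            $extras[$row['entry_id']] = $row;
        }

        foreach ($tree as &$node) {
            if ($node['entry_id'] !== null && isset($extras[$node['entry_id']])) {
                $node = array_merge($node, $extras[$node['entry_id']]);
            } else {
                $node['entry_title'] = null; // keep the key present on non-matching nodes
            }
        }
        unset($node); // break the reference left by foreach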

    Read the article

  • Problem detaching entire object graph in GAE-J with JDO

    - by tempy
    I am trying to load the full object graph for User, which contains a collection of decks, which in turn contain a collection of cards, as such:

    User:

        @PersistenceCapable(detachable = "true")
        @Inheritance(strategy = InheritanceStrategy.SUBCLASS_TABLE)
        @FetchGroup(name = "decks", members = { @Persistent(name = "_Decks") })
        public abstract class User
        {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            protected Key _ID;

            @Persistent
            protected String _UniqueIdentifier;

            @Persistent(mappedBy = "_Owner")
            @Element(dependent = "true")
            protected Set<Deck> _Decks;

            protected User() { }
        }

    Each Deck has a collection of Cards, as such:

        @PersistenceCapable(detachable = "true")
        @FetchGroup(name = "cards", members = { @Persistent(name = "_Cards") })
        public class Deck
        {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key _ID;

            @Persistent
            String _Name;

            @Persistent(mappedBy = "_Parent")
            @Element(dependent = "true")
            private Set<Card> _Cards = new HashSet<Card>();

            @Persistent
            private Set<String> _Tags = new HashSet<String>();

            @Persistent
            private User _Owner;
        }

    And finally, each card:

        @PersistenceCapable
        public class Card
        {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key _ID;

            @Persistent
            private Text _Question;

            @Persistent
            private Text _Answer;

            @Persistent
            private Deck _Parent;
        }

    I am trying to retrieve and then detach the entire object graph. I can see in the debugger that it loads fine, but when I get to detaching, I can't make anything beyond the User object load (no Decks, no Cards). At first I tried, without a transaction, to simply "touch" all the fields on the attached object before detaching, but that didn't help. Then I tried adding everything to the default fetch group, but that just generated warnings about GAE not supporting joins. I tried setting the fetch plan's max fetch depth to -1, but that didn't do it. Finally, I tried using FetchGroups as you can see above, and then retrieving with the following code:

        PersistenceManager pm = _pmf.getPersistenceManager();
        pm.setDetachAllOnCommit(true);
        pm.getFetchPlan().setGroup("decks");
        pm.getFetchPlan().setGroup("cards");
        Transaction tx = pm.currentTransaction();
        Query query = null;

        try
        {
            tx.begin();

            query = pm.newQuery(GoogleAccountsUser.class); //Subclass of User
            query.setFilter("_UniqueIdentifier == TheUser");
            query.declareParameters("String TheUser");
            List<User> results = (List<User>)query.execute(ID); //ID = Supplied parameter

            //TODO: Test for more than one result and throw

            if (results.size() == 0)
            {
                tx.commit();
                return null;
            }
            else
            {
                User usr = (User)results.get(0);
                //usr = pm.detachCopy(usr);
                tx.commit();
                return usr;
            }
        }
        finally
        {
            query.closeAll();
            if (tx.isActive())
            {
                tx.rollback();
            }
            pm.close();
        }

    This also doesn't work, and I'm running out of ideas...
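
    One detail worth checking in the retrieval code: javax.jdo.FetchPlan.setGroup() replaces the set of active fetch groups, so the second call above effectively drops "decks" again, whereas addGroup() accumulates. A sketch under that assumption, combined with an unlimited fetch depth so the plan can recurse from User through Deck to Card:

        PersistenceManager pm = _pmf.getPersistenceManager();
        pm.setDetachAllOnCommit(true);

        // addGroup() accumulates; calling setGroup() twice leaves only the last group active
        pm.getFetchPlan().addGroup("decks");
        pm.getFetchPlan().addGroup("cards");

        // allow the plan to follow User -> Deck -> Card
        pm.getFetchPlan().setMaxFetchDepth(-1);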

    Read the article

  • Many to many self join through junction table

    - by Peter
    I have an EF model that can self-reference through an intermediary class to define a parent/child relationship. I know how to do a pure many-to-many relationship using the Map command, but for some reason going through this intermediary class is causing problems with my mappings. The intermediary class provides additional properties for the relationship. See the classes, modelBuilder logic and error below:

        public class Equipment
        {
            [Key]
            public int EquipmentId { get; set; }

            public virtual List<ChildRecord> Parents { get; set; }
            public virtual List<ChildRecord> Children { get; set; }
        }

        public class ChildRecord
        {
            [Key]
            public int ChildId { get; set; }

            [Required]
            public int Quantity { get; set; }

            [Required]
            public Equipment Parent { get; set; }

            [Required]
            public Equipment Child { get; set; }
        }

    I've tried building the mappings in both directions, though I only keep one set in at a time:

        modelBuilder.Entity<ChildRecord>()
            .HasRequired(x => x.Parent)
            .WithMany(x => x.Children)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<ChildRecord>()
            .HasRequired(x => x.Child)
            .WithMany(x => x.Parents)
            .WillCascadeOnDelete(false);

    OR

        modelBuilder.Entity<Equipment>()
            .HasMany(x => x.Parents)
            .WithRequired(x => x.Child)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<Equipment>()
            .HasMany(x => x.Children)
            .WithRequired(x => x.Parent)
            .WillCascadeOnDelete(false);

    Regardless of which set I use, I get the error:

        The foreign key component 'Child' is not a declared property on type 'ChildRecord'. Verify that it has not been explicitly excluded from the model and that it is a valid primitive property.

    when I try to deploy my EF model to the database. If I build it without the modelBuilder logic in place, then I get two ID columns for Child and two ID columns for Parent in my ChildRecord table. This makes sense, since it tries to auto-create the navigation properties from Equipment and doesn't know that there are already properties in ChildRecord to fulfill this need. I tried using data annotations on the class, and no modelBuilder code; this failed with the same error as above:

        [Required]
        [ForeignKey("EquipmentId")]
        public Equipment Parent { get; set; }

        [Required]
        [ForeignKey("EquipmentId")]
        public Equipment Child { get; set; }

    AND

        [InverseProperty("Child")]
        public virtual List<ChildRecord> Parents { get; set; }

        [InverseProperty("Parent")]
        public virtual List<ChildRecord> Children { get; set; }

    I've looked at various other answers around the internet/SO, and the common difference seems to be that I am self-joining, whereas all the answers I can find are for two different types.

    Entity Framework Code First Many to Many Setup For Existing Tables
    Many to many relationship with junction table in Entity Framework?
    Creating many to many junction table in Entity Framework
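
    Reading the error literally, EF appears to want declared scalar foreign-key properties on ChildRecord that it can hand to HasForeignKey. A sketch of that shape, with hypothetical ParentId/ChildId columns added to the junction class (the primary key is renamed to ChildRecordId here to free up the ChildId name):

        public class ChildRecord
        {
            [Key]
            public int ChildRecordId { get; set; }

            [Required]
            public int Quantity { get; set; }

            // declared FK columns EF can map directly
            public int ParentId { get; set; }
            public int ChildId { get; set; }

            public virtual Equipment Parent { get; set; }
            public virtual Equipment Child { get; set; }
        }

        // in OnModelCreating:
        modelBuilder.Entity<ChildRecord>()
            .HasRequired(x => x.Parent)
            .WithMany(x => x.Children)
            .HasForeignKey(x => x.ParentId)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<ChildRecord>()
            .HasRequired(x => x.Child)
            .WithMany(x => x.Parents)
            .HasForeignKey(x => x.ChildId)
            .WillCascadeOnDelete(false);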

    Read the article

  • Programmatically adding an object and selecting the corresponding row does not make it become the CurrentRow

    - by Robert
    I'm in a struggle with the DataGridView: I have a BindingList of some simple objects that implement INotifyPropertyChanged. The DataGridView's DataSource is set to this BindingList. Now I need to add an object to the list by hitting the "+" key. When an object is added, it should appear as a new row, and that row shall become the current row. As the CurrentRow property is read-only, I iterate through all rows, check whether a row's bound item is the newly created object, and if it is, I set this row to "Selected = true;". The problem: although the new object, and thereby a new row, gets inserted and selected in the DataGridView, it still is not the CurrentRow! It does not become the CurrentRow unless I click into the new row with the mouse. In this test program you can add new objects (and thereby rows) with the "+" key, and with the "i" key the data-bound object of the CurrentRow is shown in a MessageBox. How can I make the row of a newly added object become the CurrentRow? Thanks for your help! Here's the sample:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Threading.Tasks;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                BindingList<item> myItems;

                public Form1()
                {
                    InitializeComponent();
                    myItems = new BindingList<item>();
                    for (int i = 1; i <= 10; i++)
                    {
                        myItems.Add(new item(i));
                    }
                    dataGridView1.DataSource = myItems;
                }

                public void Form1_KeyDown(object sender, KeyEventArgs e)
                {
                    if (e.KeyCode == Keys.Add)
                    {
                        addItem();
                    }
                }

                public void addItem()
                {
                    item i = new item(myItems.Count + 1);
                    myItems.Add(i);
                    foreach (DataGridViewRow dr in dataGridView1.Rows)
                    {
                        if (dr.DataBoundItem == i)
                        {
                            dr.Selected = true;
                        }
                    }
                }

                private void btAdd_Click(object sender, EventArgs e)
                {
                    addItem();
                }

                private void dataGridView1_KeyDown(object sender, KeyEventArgs e)
                {
                    if (e.KeyCode == Keys.Add)
                    {
                        addItem();
                    }
                    if (e.KeyCode == Keys.I)
                    {
                        MessageBox.Show(((item)dataGridView1.CurrentRow.DataBoundItem).title);
                    }
                }
            }

            public class item : INotifyPropertyChanged
            {
                public event PropertyChangedEventHandler PropertyChanged;

                private int _id;
                public int id
                {
                    get { return _id; }
                    set
                    {
                        this.title = "This is item number " + value.ToString();
                        _id = value;
                        InvokePropertyChanged(new PropertyChangedEventArgs("id"));
                    }
                }

                private string _title;
                public string title
                {
                    get { return _title; }
                    set
                    {
                        _title = value;
                        InvokePropertyChanged(new PropertyChangedEventArgs("title"));
                    }
                }

                public item(int id)
                {
                    this.id = id;
                }

                #region Implementation of INotifyPropertyChanged
                public void InvokePropertyChanged(PropertyChangedEventArgs e)
                {
                    PropertyChangedEventHandler handler = PropertyChanged;
                    if (handler != null) handler(this, e);
                }
                #endregion
            }
        }
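
    A likely explanation: Selected only affects the highlight, while the DataGridView's current row follows its CurrentCell, which is settable even though CurrentRow is not. A small sketch of addItem() along those lines:

        public void addItem()
        {
            item i = new item(myItems.Count + 1);
            myItems.Add(i);
            foreach (DataGridViewRow dr in dataGridView1.Rows)
            {
                if (dr.DataBoundItem == i)
                {
                    // assigning CurrentCell moves CurrentRow; Selected alone does not
                    dataGridView1.CurrentCell = dr.Cells[0];
                    break;
                }
            }
        }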

    Read the article

  • Keeping video viewing statistics breakdown by video time in a database

    - by Septagram
    I need to keep a number of statistics about the videos being watched, and one of them is which parts of the video are watched most. The design I came up with is to split the video into 256 intervals and keep a floating-point number of views for each of them. I receive the data as a number of intervals the user watched continuously. The problem is how to store them. There are two solutions I see.

    Row per every video segment

    Let's have a database table like this:

        CREATE TABLE `video_heatmap` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `video_id` int(11) NOT NULL,
          `position` tinyint(3) unsigned NOT NULL,
          `views` float NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `idx_lookup` (`video_id`,`position`)
        ) ENGINE=MyISAM

    Then, whenever we have to process a number of views, make sure the respective database rows exist and add appropriate values to the views column. I found out it's a lot faster if the existence of the rows is taken care of first (SELECT COUNT(*) of rows for a given video and INSERT IGNORE if they are lacking), and then a number of update queries is used like this:

        UPDATE video_heatmap
        SET views = views + ?
        WHERE video_id = ? AND position >= ? AND position < ?

    This seems, however, a little bloated. The other solution I came up with is

    Row per video, update in transactions

    A table will look (sort of) like this:

        CREATE TABLE video (
          id INT NOT NULL AUTO_INCREMENT,
          heatmap BINARY (4 * 256) NOT NULL,
          ...
        ) ENGINE=InnoDB

    Then, every time a view needs to be stored, it will be done in a transaction with a consistent snapshot, in a sequence like this:

    1. If the video doesn't exist in the database, it is created.
    2. A row is retrieved, and heatmap, an array of floats stored in binary form, is converted into a form more friendly for processing (in PHP).
    3. Values in the array are increased appropriately and the array is converted back.
    4. The row is changed via an UPDATE query.

    So far the advantages can be summed up like this:

    First approach
    - Stores data as floats, not as some magical binary array.
    - Doesn't require transaction support, so doesn't require InnoDB, and we're using MyISAM for everything at the moment, so there won't be any need to mix storage engines. (only applies in my specific situation)
    - Doesn't require a transaction WITH CONSISTENT SNAPSHOT. I don't know what the performance penalties of those are.
    - I already implemented it and it works. (only applies in my specific situation)

    Second approach
    - Uses a lot less storage space (the first approach stores the video ID 256 times and stores the position for every segment of the video, not to mention the primary key).
    - Should scale better, because of InnoDB's per-row locking as opposed to MyISAM's table locking.
    - Might generally work faster because a lot fewer requests are being made.
    - Easier to implement in code (although the other one is already implemented).

    So, what should I do? If it wasn't for the rest of our system using MyISAM consistently, I'd go with the second approach, but currently I'm leaning to the first one. But maybe there are some reasons to favour one approach or another?
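
    A side note on the first approach: because idx_lookup is a unique key over (video_id, position), the row-existence pass and the increment can usually be collapsed into a single upsert per touched segment (MySQL syntax, a sketch):

        -- one statement per (video, segment, increment); no prior SELECT COUNT(*) / INSERT IGNORE needed
        INSERT INTO video_heatmap (video_id, position, views)
        VALUES (?, ?, ?)
        ON DUPLICATE KEY UPDATE views = views + VALUES(views);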

    Read the article

  • Only Execute Code on Certain Requests in Java

    - by BillPull
    I am building a little API for class, and the teacher supplied us with a link to a tutorial that provided a simple webserver that implements Runnable. I have already written some code that parses the arguments (or at least gets me the request string) and some code that returns some simple XML. However, certain requests, like the one for the favicon, are also being sent, and I think they are messing up my code. I wrapped that part in an if/else, but it does not seem to be working.

        package server;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.util.*;
        import java.io.*;
        import java.net.*;
        import parkinglots.*;

        public class WorkerRunnable implements Runnable {

            protected Socket clientSocket = null;
            protected String serverText = null;

            public WorkerRunnable(Socket clientSocket, String serverText) {
                this.clientSocket = clientSocket;
                this.serverText = serverText;
            }

            public Boolean authenticateAPI(String key) {
                //Authenticate Key against Stored Keys
                //TODO: Create Stored Keys and Compare
                return true;
            }

            public void run() {
                try {
                    InputStream input = clientSocket.getInputStream();
                    OutputStream output = clientSocket.getOutputStream();
                    long time = System.currentTimeMillis();

                    //TODO: Parse args and output different formats and Authentication
                    //Parse URL Arguments
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(clientSocket.getInputStream(), "8859_1"));
                    String request = in.readLine();

                    //Server gets Favicon Request so skip that and go to args
                    System.out.println(request);
                    if (request != "GET /favicon.ico HTTP/1.1" && request != "GET / HTTP/1.1" && request != null) {
                        String format = "", apikey = "";
                        System.out.println("I am Here");
                        String request_location = request.split(" ")[1];
                        String request_args = request_location.replace("/", "");
                        request_args = request_args.replace("?", "");
                        String[] queries = request_args.split("&");
                        System.out.println(queries[0]);
                        for (int i = 0; i < queries.length; i++) {
                            if (queries[i] == "format") {
                                format = queries[i].split("=")[1];
                            } else if (queries[i] == "apikey") {
                                apikey = queries[i].split("=")[1];
                            }
                        }
                        if (apikey == "") {
                            apikey = "None";
                        }
                        if (format == "") {
                            format = "xml";
                        }
                        Boolean auth = authenticateAPI(apikey);
                        if (auth) {
                            if (format == "xml") {
                                // Retrieve XML Document
                                String xml = LotFromDB.getParkingLotXML();
                                output.write((xml).getBytes());
                            } else {
                                // Retrieve JSON
                                String json = LotFromDB.getParkingLotJSON();
                                output.write((json).getBytes());
                            }
                        } else {
                            output.write(("Access Denied - User is Not Authenticated").getBytes());
                        }
                    } else {
                        output.write(("Access Denied Must Pass API Key").getBytes());
                    }
                    output.close();
                    input.close();
                    System.out.println("Request processed: " + time);
                } catch (IOException e) {
                    //report exceptions
                    e.printStackTrace();
                }
            }
        }

    Console output I get:

        I am Here
        format=json
        Request processed: 1333516648331
        GET /favicon.ico HTTP/1.1
        I am Here
        favicon.ico
        Request processed: 1333516648332

    It always returns the XML as well. This is my first exposure to writing a web server and dealing with networking in Java, which frustrates me a lot in general, so any suggestions here are very appreciated.
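
    The guard above is probably the culprit: Java compares String contents with equals(), while == and != compare object references, so every string test in run() is testing identity rather than value. A sketch of the affected checks rewritten that way:

        // String content comparison must use equals(); == / != compare references
        if (request != null
                && !request.equals("GET /favicon.ico HTTP/1.1")
                && !request.equals("GET / HTTP/1.1")) {
            // ... parse the query string as before ...
        }

        // the same issue applies inside the loop over the query parameters:
        String[] pair = queries[i].split("=");
        if (pair[0].equals("format")) {
            format = pair[1];
        } else if (pair[0].equals("apikey")) {
            apikey = pair[1];
        }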

    Read the article

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running into is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me from rebuilding a drive that failed in slot 11, since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e; adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information:
    Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6,
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
    Now we can use this address to get information on the individual enclosures: [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00EE917F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:00E9D67F Enclosure Logical Id: 50030480:0000007F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp Initializing Interface. Expander: SAS2X36 (SAS2x36) Reading the expander information.......... Expander: SAS2X36 (SAS2x36) B3 SAS Address: 50030480:0112D97F Enclosure Logical Id: 50030480:0112D97F IP Address: 0.0.0.0 Component Identifier: 0x0223 Component Revision: 0x05 Did you catch it? The first 2 enclosures' logical IDs are partially masked out, while the 3rd one (which has a correct, unique enclosure ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing, and that there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller is determining the ID based on the logical ID, and since our first 2 enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default which comes from LSI as an ID. The next pointer suggesting this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post-production. Fortunately for us, there is a tool from Supermicro to manage the WWN and logical ID of the devices: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration), reprogram the logical ID, and see if it solves the problem. Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the several times we contacted Supermicro, they never directed us to this article :\

    Read the article

  • Installation of SAP Web Application Server 6.20 on Windows Vista

    - by karthikeyan b
    I tried installing the trial version of SAP NetWeaver 7.1 on Windows Vista on my laptop, but I couldn't succeed there. Then I tried installing SAP Web AS 6.20; now I am able to get to 91% completion. After that I get some errors and the installation stops. If anybody has any experience with this, please share; it will be really helpful. I have included the complete log details below.
    Info: INSTGUI.EXE Protocol version is 10. Message checksum is 7613888. Info: INSTGUI MessageFile Start message loading... Info: INSTGUI MessageFile Finished message loading. Info: InstController Prepare {} {} R3SETUP Version: Apr 24 2002 Info: InstController Prepare {} {} Logfile will be set to E:\R3SETUP\BSP.log Check E:\R3SETUP\BSP.log for further messages. Info: CommandFileController SyFileVersionSave {} {} Saving original content of file E:\R3SETUP\BSP.R3S ... Warning: CommandFileController SyFileCopy {} {} Function CopyFile() failed at location SyFileCopy-681 Warning: CommandFileController SyFileCopy {} {} errno: 5: Access is denied. Warning: CommandFileController SyFileCreateWithPermissions {} {} errno: 13: Permission denied Warning: CommandFileController SyPermissionSet {} {} Function SetNamedSecurityInfo() failed for E:\R3SETUP\BSP.R3S at location SyPermissionSet-2484 Warning: CommandFileController SyPermissionSet {} {} errno: 5: Access is denied. Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD KERNEL will not be copied. Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD DATA1 will not be copied. Info: CDSERVERBASE InternalColdKeyCheck 2 309 The CD DATA2 will not be copied. Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'.
Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 checking host name lookup for 'Karthikeyan' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for 'Karthikeyan' is 'Karthikeyan'. Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 host 'Karthikeyan' has ip address '115.184.71.93' Info: CENTRDBINSTANCE_NT_ADA SyCheckHostnameLookup 2 333 offical host name for '115.184.71.93' is 'Karthikeyan'. Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated Warning: ADADBINSTANCE_IND_ADA GetConfirmationFor 2 58 Cleanup database instance BSP for new installation. Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened. Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened. 
Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 54 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 54 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1146 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1146 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 718 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 718 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated
Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details
Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated
Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD KERNEL will not be copied.
Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD DATA1 will not be copied.
Info: CDSERVERBASE InternalWarmKeyCheck 2 309 The CD DATA2 will not be copied.
Info: InstController MakeStepsDeliver 2 309 Requesting Information on CD-ROMs
Error: CommandFileController StoreMasterTableFromCommandFile 2 309 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 309 CommandFile could not be updated
Info: CENTRDBINSTANCE_NT_ADA InternalWarmKeyCheck 2 333 The installation phase is starting now. Please look in the log file for further information about current actions.
Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details
Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 99 Defining Key Values
Error: CommandFileController StoreMasterTableFromCommandFile 2 99 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 99 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 58 Requesting Setup Details
Error: CommandFileController StoreMasterTableFromCommandFile 2 58 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 58 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 333 Requesting Installation Details
Error: CommandFileController StoreMasterTableFromCommandFile 2 333 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 333 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1206 Setting Users for Single DB landscape
Error: CommandFileController StoreMasterTableFromCommandFile 2 1206 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1206 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 75 Stopping the SAP DB Instance
Error: CommandFileController StoreMasterTableFromCommandFile 2 75 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 75 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 247 Stopping the SAP DB remote server
Error: CommandFileController StoreMasterTableFromCommandFile 2 247 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 247 CommandFile could not be updated
Warning: CDSERVERBASE ConfirmKey 2 1195 Can not read from Z:. Please ensure this path is accessible.
Info: LvKeyRequest For further information see HTML documentation: step: SAPDBSETCDPATH_IND_IND and key: KERNEL_LOCATION
Info: InstController MakeStepsDeliver 2 1195 Installing SAP DB Software
Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1195 Installing SAP DB Software
Info: SAPDBINSTALL_IND_ADA SyCoprocessCreate 2 1195 Creating coprocess E:\sapdb\NT\I386\sdbinst.exe ...
Info: SAPDBINSTALL_IND_ADA ExecuteDo 2 1195 RC code form SyCoprocessWait = 0 .
Error: CommandFileController StoreMasterTableFromCommandFile 2 1195 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1195 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 120 Extracting SAP DB Tools Software
Info: ADAEXTRACTLCTOOLS_NT_ADA SyCoprocessCreate 2 120 Creating coprocess SAPCAR ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 120 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 120 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables
Info: ADAEXTRACTBSPCFG SyCoprocessCreate 2 816 Creating coprocess SAPCAR ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 242 Starting VSERVER
Info: ADAXSERVER_NT_ADA SyCoprocessCreate 2 242 Creating coprocess C:\sapdb\programs\bin\x_server.exe ...
Info: ADAXSERVER_NT_ADA ExecuteDo 2 242 RC code form SyCoprocessWait = 0 .
Error: CommandFileController StoreMasterTableFromCommandFile 2 242 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 242 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables
Info: EXTRACTSAPEXEDBDATA1 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated
Warning: CDSERVERBASE ConfirmKey 2 816 Can not read from Y:. Please ensure this path is accessible.
Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXEDBDATA2 and key: DATA1_LOCATION
Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables
Info: EXTRACTSAPEXEDBDATA2 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated
Warning: CDSERVERBASE ConfirmKey 2 816 Can not read from X:. Please ensure this path is accessible.
Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXEDBDATA3 and key: DATA2_LOCATION
Info: InstController MakeStepsDeliver 2 816 Extracting the Database-Dependent SAP system Executables
Info: EXTRACTSAPEXEDBDATA3 SyCoprocessCreate 2 816 Creating coprocess SAPCAR ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 816 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 816 CommandFile could not be updated
Warning: CDSERVERBASE ConfirmKey 2 815 We tried to find the label SAPDB:MINI-WAS-DEMO:620:KERNEL for CD KERNEL in path E:. But the check was not successfull.
Info: LvKeyRequest For further information see HTML documentation: step: EXTRACTSAPEXE and key: KERNEL_LOCATION
Info: InstController MakeStepsDeliver 2 815 Extracting the SAP Executables
Info: EXTRACTSAPEXE SyCoprocessCreate 2 815 Creating coprocess SAPCAR ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 815 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 815 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 223 Setting new Rundirectory.
Info: ADADBREGISTER_IND_ADA SyCoprocessCreate 2 223 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ...
Info: ADADBREGISTER_IND_ADA ExecuteDo 2 223 RC code form SyCoprocessWait = 0 .
Error: CommandFileController StoreMasterTableFromCommandFile 2 223 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 223 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1 ADASETDEVSPACES_IND_ADA
Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 10 Performing Service BCHECK
Error: CommandFileController StoreMasterTableFromCommandFile 2 10 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 10 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance
Info: ADAXUSERSIDADM_DEFAULT_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ...
Info: ADAXUSERSIDADM_DEFAULT_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 .
Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance
Info: ADAXUSERSIDADM_COLD_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ...
Info: ADAXUSERSIDADM_COLD_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 .
Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 263 Creating XUSER File for the User ADM for Dialog Instance
Info: ADAXUSERSIDADM_WARM_NT_ADA SyCoprocessCreate 2 263 Creating coprocess C:\sapdb\programs\pgm\dbmcli.exe ...
Info: ADAXUSERSIDADM_WARM_NT_ADA ExecuteDo 2 263 RC code form SyCoprocessWait = 0 .
Error: CommandFileController StoreMasterTableFromCommandFile 2 263 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 263 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 760 Creating the Default Profile
Info: DEFAULTPROFILE_IND_IND SyFileVersionSave 2 760 Saving original content of file C:\MiniWAS\DEFAULT.PFL ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 760 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 760 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1267 Modifying or Creating the TPPARAM File
Info: TPPARAMMODIFY_NT_ADA SyFileVersionSave 2 1267 Saving original content of file C:\MiniWAS\trans\bin\TPPARAM ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 1267 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1267 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1111 Creating the Service Entry for the Dispatcher
Info: R3DISPATCHERPORT_IND_IND IaServicePortAppend 2 1111 Checking service name sapdp00, protocol tcp, port number 3200 ...
Info: R3DISPATCHERPORT_IND_IND IaServicePortAppend 2 1111 Port name sapdp00 is known and the port number 3200 is equal to the existing port number 3200
Error: CommandFileController StoreMasterTableFromCommandFile 2 1111 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1111 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1122 Creating the Service Entry for the Message Server
Info: R3MESSAGEPORT_IND_IND IaServicePortAppend 2 1122 Checking service name sapmsBSP, protocol tcp, port number 3600 ...
Info: R3MESSAGEPORT_IND_IND IaServicePortAppend 2 1122 Port name sapmsBSP is known and the port number 3600 is equal to the existing port number 3600
Error: CommandFileController StoreMasterTableFromCommandFile 2 1122 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1122 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1114 Creating the Service Entry for the Gateway Service
Info: R3GATEWAYPORT_IND_IND IaServicePortAppend 2 1114 Checking service name sapgw00, protocol tcp, port number 3300 ...
Info: R3GATEWAYPORT_IND_IND IaServicePortAppend 2 1114 Port name sapgw00 is known and the port number 3300 is equal to the existing port number 3300
Error: CommandFileController StoreMasterTableFromCommandFile 2 1114 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1114 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1 PISTARTBSP
Info: PISTARTBSP SyDirCreate 2 1 Checking existence of directory C:\Users\Karthikeyan\Desktop. If it does not exist creating it with user , group and permission 0 ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 1 PISTARTBSPGROUP
Info: PISTARTBSPGROUP SyDirCreate 2 1 Checking existence of directory C:\Users\Karthikeyan\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Mini SAP Web Application Server. If it does not exist creating it with user , group and permission 0 ...
Error: CommandFileController StoreMasterTableFromCommandFile 2 1 Command file could not be opened.
Error: CommandFileController SetKeytableForSect 2 1 CommandFile could not be updated
Info: InstController MakeStepsDeliver 2 54 Starting SAP DB to Mode WARM
Error: Command

    Read the article

  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
By Mark Bennett - Originally featured in Talent Management Excellence

The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. While on the one hand we read about social media embarrassments happening to organizations, on the other we see that social media activities by workers and candidates can enhance a company's brand and provide insight into which individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations get the most value out of the activities and presence of workers and candidates, while at the same time helping to manage the risks that come with the permanence and viral nature of social media.

What is Missing from Understanding Our Workforce?

"If only HP knew what HP knows, we would be three-times more productive." Lew Platt, Former Chairman, President, CEO, Hewlett-Packard

What Lew Platt recognized was that organizations have only a partial understanding of what their workforce is capable of. This lack of understanding impacts the company in several negative ways:

1. A particular skill that the company needs to access in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need goes unfulfilled.
2. As market conditions change rapidly, the company needs to know its strategic options, but some options are missed entirely because the company doesn't know that sufficient capability already exists to enable them.
3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value for the company.

Why don't companies have that more complete picture of their workforce capabilities – that is, why do they not know what they know? One very good explanation is that companies put most of their effort into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals.

In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid to whether the candidate has the qualifications, the skills, the experience and the cultural fit to be successful in the role. This makes a lot of sense.

In the performance appraisal process, an employee is measured on how well they performed the functions of their role and, in an effort to help the employee do even better next time, on their proficiency in the competencies deemed key to doing that job. Again, the logic is impeccable.

But in both these cases, two adages come to mind:

1. What gets measured is what gets managed.
2. You only see what you are looking for.

In other words, because the roles the workforce currently performs are the basis for measuring which capabilities the workforce has, those become the only capabilities to be measured. What was initially meant as a positive – identify what is needed to perform well and measure it, so that it can be managed – comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also carries an employee engagement price, for the measurement and management of workforce capabilities typically focuses on where the workforce comes up short.
Again, it makes sense to do this, since improving a capability that appears to result in improved performance benefits both the individual, through improved performance ratings, and the company, through improved productivity. But this is based on the assumption that the capabilities identified and their required proficiencies are the only attributes of the individual that matter. Anything else the individual brings that results in high performance, while producing a desired performance outcome, often goes unrecognized or underappreciated at best.

As social media begins to occupy a more important place in current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus, and social responsibility; and coordinate and collaborate with partners. These measures should demonstrate the "social capital" the individual has invested in and developed over time. Without this dimension, "short cut" methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization.

How Workforce Reputation Management Helps HR Harness Social Media

With hundreds of petabytes of social media data flowing across Facebook, LinkedIn and Twitter, businesses are tapping technology solutions to effectively leverage social for HR. Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce, while also helping organizations monitor employee social media policy compliance and mitigate social media risk.

There are three major ways that workforce reputation management technology can play a strategic role to support HR:

1. Improve Awareness and Decisions on Talent

Many organizations measure the skills and competencies that they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. How about whether your workforce has the reputation and influence to make their skills and competencies more effective? Many organizations don't have insight into the social media "reach" their workforce has, which is becoming more critical to business performance. These features help organizations, managers, and employees improve many talent processes and decisions, including the following:

Hiring and Assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with high reputation who refers a candidate also can have high credibility as a source for hires.

Training and Development. Reputation trend analysis can impact program decisions regarding training offerings by showing how reputation and influence across the workforce change in concert with training. Worker reputation impacts development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.

Finding Hidden Talent. Managers can discover hidden talent and skills amongst employees based on a combination of social profile information and social media reputation. Employees can improve their personal brand and accelerate their career development.

2. Talent Search and Discovery

The right technology helps organizations find information on people that might otherwise be hidden. By leveraging access to candidate and worker social profiles as well as their social relationships, workforce reputation management provides companies with a more complete picture of what their knowledge, skills, and attributes are and what they can in turn access. This more complete information helps to find the right talent both outside the organization as well as the right, perhaps previously hidden, talent within the organization to fill roles and staff projects, particularly those roles and projects that are required in reaction to fast-changing opportunities and circumstances.

3. Reputation Brings Credibility

Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can impact critical decision-making. Knowing the individual's reputation and influence enables the organization to predict how well their capabilities and behaviors will have a positive effect on desired business outcomes. Many roles that have the highest impact on overall business performance are dependent on the individual's influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing them with insight that helps them develop themselves and their careers and see the effectiveness of those efforts by tracking changes over time in their reputation and influence.

The following are some examples of the different reputation and influence measures of the workforce that Workforce Reputation Management could gather and analyze:

Generosity – How often the user reposts others' posts.
Influence – How often the user's material is reposted by others.
Engagement – The ratio of recent posts with references (e.g. links to other posts) to the total number of posts.
Activity – How frequently the user posts (e.g. number per day).
Impact – The size of the user's social networks, which indicates their ability to reach unique followers, friends, or users.
Clout – The number of references and citations of the user's material in others' posts.

The Vital Ingredient of Workforce Reputation Management: Employee Participation

"Nothing about me, without me." Valerie Billingham, "Through the Patient's Eyes", Salzburg Seminar Session 356, 1998

Since the data resides primarily in social media, a question arises: how is that data collected? While much of social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), the social norms of social media have developed to put some restrictions on what is acceptable behavior and by whom. Disregarding these norms risks a repercussion firestorm. One of the more recognized norms is that while individuals can follow and engage with other individuals' public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted and without getting permission from the individual beforehand, the more likely the organization risks a totally opposite outcome from the one desired. Instead, the organization must ask the individual for permission, which can be met with resistance.
That resistance comes from not knowing how the information will be used, how it will be shared with others, and from not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes not feeling in control of information about themselves, or the uncertainty about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing) and is applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or very few, will grant it, resulting in a small set of data with little usefulness for the company.

Connecting Individual Motivation to Organization Needs

So what makes an individual decide to grant an organization access to the data it wants? It is when the individual's own motivations are in alignment with the organization's objectives. In the case of workforce reputation management, when the individual is motivated by a desire for increased visibility and career growth opportunities to advertise their skills and level of influence and reputation, they are aligned with the organization's objectives: to fill resource needs, or to strategically build better awareness of what skills, influence, and reputation are present in the workforce. Individuals can come to see the benefit of granting access permission to the company through multiple means. One is simple social awareness: they begin to discover that the peers who are getting more career opportunities are those who have signed up for workforce reputation management. Another is where companies take the message directly to the individual: "we think you would benefit from signing up with our workforce reputation management solution." Another, more strategic, approach is to make reputation management part of a larger career development effort by the company, providing a wide set of tools to help the workforce find ways to plan and take action to achieve their career aspirations in the organization. An effective mechanism that facilitates connecting the visibility and career growth motivations of the workforce with the larger context of the organization's business objectives is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation because the workforce is more apt to see how it fits into achieving their overall career goals, as well as how further participation brings additional benefits.

Once an individual has signed up with reputation management, not only have they made themselves more visible within the organization and increased their career growth opportunities, they have also enabled a tool that they can use to better understand how their actions and behaviors impact their influence and reputation. Since they will be able to see their reputation and influence measurements change over time, they will gain better insight into how reputation and influence impact their effectiveness in a role, as well as how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to improve their ability to perform at a higher level and become more productive.
The increased sense of autonomy the individual experiences, in linking the insight they gain to the actions and behavior changes they make, greatly enhances their engagement with their role as well as their career prospects within the company. Workforce reputation management takes the wide range of disparate data about the workforce being produced across various social media platforms and transforms it into accessible, relevant, and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand and business, and workforce reputation management can help unlock them. Imagine: if you could find the hidden secrets of your business, how much more productive and efficient would your organization be?

Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as help individuals work effectively to achieve their goals and navigate their own growth. His combination of a deep technical background in software design and development, coupled with a broad knowledge of business challenges and thinking in today's globalized, rapidly changing, technology-accelerated economy, has enabled him to identify and incorporate key innovations that are central to Oracle Fusion's unique value proposition. Over the course of his career, Mark has been in charge of the design, development, and strategy of Talent Management products, and of the design and development of cutting-edge software that is better equipped to handle the increasingly complex demands of users while remaining easy to use. Follow him @mpbennett

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would 'mock' the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project and, like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database: it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

Preparing the test data

Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs. The testing scenario described here therefore requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
With this prerequisite in place, the test fixture's setup code could look something like this:

[TestFixture, TestsOn(typeof(PersonRepository))]
[Metadata("NDbUnit Quickstart URL",
          "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
[Description("Uses the NDbUnit library to provide test data to a local database.")]
public class PersonRepositoryFixture
{
    #region Constants

    private const string XmlSchema = @"..\..\TestData\School.xsd";

    #endregion // Constants

    #region Fields

    private SchoolEntities _schoolContext;
    private PersonRepository _personRepository;
    private INDbUnitTest _database;

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
        _database = new SqlDbUnitTest(connectionString);
        _database.ReadXmlSchema(XmlSchema);

        var entityConnectionStringBuilder = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = connectionString
        };

        _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
        _personRepository = new PersonRepository(this._schoolContext);
    }

    [FixtureTearDown]
    public void FixtureTearDown()
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        _schoolContext.Dispose();
    }

    ...

As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

private void InsertTestData(params string[] dataFileNames)
{
    _database.PerformDbOperation(DbOperationFlag.DeleteAll);

    if (dataFileNames == null)
    {
        return;
    }

    try
    {
        foreach (string fileName in dataFileNames)
        {
            if (!File.Exists(fileName))
            {
                throw new FileNotFoundException(Path.GetFullPath(fileName));
            }

            _database.ReadXml(fileName);
            _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
        }
    }
    catch
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        throw;
    }
}

This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source projects.
Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://tempuri.org/School.xsd">
  <Person>
    <PersonID>1</PersonID>
    <LastName>Abercrombie</LastName>
    <FirstName>Kim</FirstName>
    <HireDate>1995-03-11T00:00:00</HireDate>
  </Person>
  <Person>
    <PersonID>2</PersonID>
    <LastName>Barzdukas</LastName>
    <FirstName>Gytis</FirstName>
    <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
  </Person>
  <Person>
    ...

You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

Executing some basic tests

Here are some of the possible tests that can be written with the above preparations in place:

private const string People = @"..\..\TestData\School.People.xml";
...

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.List);

    Assert.Count(34, names);
    Assert.AreEqual("Abercrombie, Kim", names.First());
    Assert.AreEqual("Zheng, Roger", names.Last());
}

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
[DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.Normal);

    Assert.Count(34, names);
    Assert.AreEqual("Alexandra Walker", names.First());
    Assert.AreEqual("Yan Li", names.Last());
}

[Test, TestsOn("PersonRepository.AddPerson")]
public void AddPerson_CalledOnce_IncreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });

    Assert.AreEqual(count + 1, _personRepository.Count);
}

[Test, TestsOn("PersonRepository.RemovePerson")]
public void RemovePerson_CalledOnce_DecreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.RemovePerson(new Person { PersonID = 33 });

    Assert.AreEqual(count - 1, _personRepository.Count);
}

Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases.

Ok, and what if things become somewhat more complex?

Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
The following test, for example, deals with the case that there is some sort of invalid input from the caller:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
{
    var exception = Assert.Throws<RepositoryException>(() =>
                            _personRepository.GetCourseMembers(courseTitle));

    Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}

Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files, namely, where you sometimes have to spend a lot of effort on carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
    InsertTestData(People, Course, Department, StudentGrade);

    List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

    Assert.Count(4, persons);
    Assert.ForAll(
        persons,
        @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
        "Person has none of the expected IDs.");
}

If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database.

Conclusion

The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

The sample solution

A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here. (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.) The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.
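As a closing, purely hypothetical illustration (not part of the original article): the NDbUnit plumbing shown above lends itself to being factored into a small reusable base fixture, so that each test class only has to declare its schema and data files. This minimal sketch uses only the NDbUnit calls already discussed (SqlDbUnitTest, ReadXmlSchema, ReadXml, PerformDbOperation); the class and member names, and the using directives, are my own assumptions:

// Sketch of a reusable NDbUnit base fixture, factored out of the setup code
// shown above. Names are illustrative; error handling is kept minimal.
using System.Configuration;
using NDbUnit.Core;            // assumed namespace for INDbUnitTest, DbOperationFlag
using NDbUnit.Core.SqlClient;  // assumed namespace for SqlDbUnitTest

public abstract class NDbUnitFixtureBase
{
    private INDbUnitTest _database;

    // Call once per fixture, e.g. from a [FixtureSetUp] method.
    protected void InitDatabase(string connectionStringName, string xsdPath)
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
        _database = new SqlDbUnitTest(connectionString);
        _database.ReadXmlSchema(xsdPath);
    }

    // Clears all tables, then loads the given xml files in order
    // (ordering matters because of referential constraints).
    protected void InsertTestData(params string[] dataFileNames)
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        if (dataFileNames == null) return;
        foreach (string fileName in dataFileNames)
        {
            _database.ReadXml(fileName);
            _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
        }
    }

    // Call from [FixtureTearDown] to leave the database empty.
    protected void ClearDatabase()
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
    }
}

A fixture like PersonRepositoryFixture would then derive from this class and keep only the EF context setup and the tests themselves.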

    Read the article

• Office 2010: It's not just DOC(X) and XLS(X)

    - by andrewbrust
Office 2010 has released to manufacturing. The bits have left the (product team's) building. Will you upgrade?

This version of Office is officially numbered 14, a designation that correlates with the various releases, through the years, of Microsoft Word. There were six major versions of Word for DOS, during whose release cycles came three 16-bit Windows versions. Then, starting with Word 95 and counting through Word 2007, there have been six more versions – all for the 32-bit Windows platform. Skip version 13 to ward off folksy bad luck (and, perhaps, the bugs that could come with it) and that brings us to version 14, which includes implementations for both 32- and 64-bit Windows platforms. We've come a long way, baby. Or have we?

As it does every three years or so, debate will now start to rage over whether we need a "14th" version of the PC platform's standard word processor, or a "13th" version of the spreadsheet. If you accept the premise of that question, then you may be on a slippery slope toward answering it in the negative. Thing is, that premise is valid for certain customers and not others.

The Microsoft Office product has morphed from one that offered core word processing, spreadsheet, presentation and email functionality to a suite of applications that provides unique, new value-added features, and even whole applications, in the context of those core services. The core apps thus grow in mission: Excel is a BI tool. Word is a collaborative editorial system for the production of publications. PowerPoint is a media production platform for live presentations and, increasingly, for delivering more effective presentations online. Outlook is a time and task management system. Access is a rich client front-end for data-driven self-service SharePoint applications. OneNote helps you capture ideas, corral random thoughts in a semi-structured way, and then tie them back to other, more rigidly structured, Office documents.

Google Docs and other cloud productivity platforms like Zoho don't really do these things. And there is a growing chorus of voices who say that they shouldn't, because those ancillary capabilities are over-engineered, over-produced and "under-necessary." They might say Microsoft is layering on superfluous capabilities to avoid admitting that Office's core capabilities, the ones people really need, have become commoditized.

It's hard to take sides in that argument, because different people, and the different companies that employ them, have different needs. For my own needs, it all comes down to three basic questions: will the new version of Office save me time, will it make the mundane parts of my job easier, and will it augment my services to customers? I need my time back. I need to spend more of it with my family, and more of it focusing on my own core capabilities rather than the administrative tasks around them. And I also need my customers to be able to get more value out of the services I provide.

Help me triage my inbox, help me get proposals done more quickly and make them easier to read. Let me get my presentations done faster, make them more effective and make it easier for me to reuse materials from other presentations. And, since I'm in the BI and data business, help me and my customers manage data and analytics more easily, both on the desktop and online.

Those are my criteria. And, with those in mind, Office 2010 is looking like a worthwhile upgrade.
Perhaps it's not earth-shattering, but it offers a combination of incremental improvements and a few major new capabilities that I think are quite compelling. I provide a brief roundup of them here. It's admittedly arbitrary and not comprehensive, but I think it tells the Office 2010 story effectively.

Across the Suite

More than any other, this release of Office aims to give collaboration a real workout. In certain apps, for the first time, documents can be opened simultaneously by multiple users, with colleagues' changes appearing in near real-time. Web-browser-based versions of Word, Excel, PowerPoint and OneNote will be available to extend collaboration to contributors who are off the corporate network.

The ribbon user interface is now more pervasive (for example, it appears in OneNote and in Outlook's main window). It's also customizable, allowing users to easily add buttons and options of their choosing, into new tabs, or into new groups within existing tabs.

Microsoft has also taken the File menu (which was the "Office Button" menu in the 2007 release) and made it into a full-screen "Backstage" view where document-wide operations, like saving, printing and online publishing, are performed.

And because, more and more, heavily formatted content is cut and pasted between documents and applications, Office 2010 makes it easier to manage the retention or jettisoning of that formatting right as the paste operation is performed. That's much nicer than stripping it off, or adding it back, afterwards.

And, speaking of pasting, a number of Office apps now make it especially easy to insert screenshots within their documents. I know that's useful to me, because I often document or critique applications and need to show them in action. For the vast majority of users, I expect this feature will be more useful for capturing snapshots of Web pages, but we'll have to see whether it becomes popular.

Excel

At first glance, Excel 2010 looks and acts nearly identically to the 2007 version. But additional glances are necessary. It's important to understand that lots of people in the working world use Excel as more of a database, analytics and mathematical modeling tool than merely as a spreadsheet. And it's also important to understand that Excel wasn't designed to handle such workloads past a certain scale. That all changes with this release.

The first reason things change is that Excel has been tuned for performance. It's been optimized for multi-threaded operation; previously lengthy processes have been shortened, especially for large data sets; more rows and columns are allowed; and, for the first time, Excel (and the rest of Office) is available in a 64-bit version. For Excel, this means users can take advantage of more than the 2GB of memory that the 32-bit version is limited to.

On the analysis side, Excel 2010 adds Sparklines (tiny charts that fit into a single cell and can therefore be presented down an entire column or across a row) and Slicers (a more user-friendly filter mechanism for PivotTables and charts, which visually indicates what the filtered state of a given data member is). But most important, Excel 2010 supports the new PowerPivot add-in, which brings true self-service BI to Office. PowerPivot allows users to import data from almost anywhere, model it, and then analyze it.
Rather than forcing users to build "spreadmarts" or use corporate-built data warehouses, PowerPivot models function as true columnar, in-memory OLAP cubes that can accommodate millions of rows of data and deliver fast drill-down performance.

And speaking of OLAP, Excel 2010 now supports an important Analysis Services OLAP feature called write-back. Write-back is especially useful in financial forecasting scenarios, for which Excel is the natural home. Support for write-back is long overdue, but I'm still glad it's there, because I had almost given up on it.

PowerPoint

This version of PowerPoint marks its progression from a presentation tool to a video and photo editing and production tool. Whether or not it's successful in this pursuit, and whether offering this is even a sensible goal, is another question.

Regardless, the new capabilities are kind of interesting. A greatly enhanced set of slide transitions with 3D effects; in-product photo and video editing; accommodation of embedded videos from services such as YouTube; and the ability to save a presentation as a video each lay testimony to PowerPoint's transformation into a media tool and away from a pure presentation tool. These capabilities also recognize the importance of the Web as both a source for materials and a channel for disseminating PowerPoint output.

Congruent with that is PowerPoint's new ability to broadcast a slide presentation, using a quickly generated public URL, without involving the hassle or expense of a Web meeting service like GoToMeeting or Microsoft's own LiveMeeting. Slides presented through this broadcast feature retain full color fidelity, and transitions and animations are preserved as well.

Outlook

Microsoft's ubiquitous email/calendar/contact/task management tool gains long-overdue speed improvements, especially against POP3 email accounts. Outlook 2010 also supports multiple Exchange accounts, rather than just one; tighter integration with OneNote; and a new Social Connector providing integration with, and presence information from, online social network services like LinkedIn and Facebook (not to mention Windows Live). A revamped conversation view now includes messages that are part of a given thread regardless of which folder they may be stored in.

I don't know yet how well the Social Connector will work or whether it will keep Outlook relevant to those who live on Facebook and LinkedIn. But among the other features, there's very little not to like.

OneNote

To me, OneNote is the part of Office that just keeps getting better. There is one major caveat to this, which I'll cover in a moment, but let's first catalog what new stuff OneNote 2010 brings.

The best part of OneNote is the way each of its versions has managed hierarchy: notebooks have sections, sections have pages, pages have sub-pages, multiple notes can be contained in either, and each note supports infinite levels of indentation. None of that is new to 2010, but the new version does make creation of pages and subpages easier, and also makes simple work of promoting and demoting pages from sub-page to full-page status. And relationships between pages are quite easy to create now: much like a Wiki, simply typing a page's name in double square brackets ("[[…]]") creates a link to it.

OneNote is also great at integrating content from outside its notebooks.
With a new Dock to Desktop feature, OneNote becomes aware of what window is displayed in the rest of the screen and, if it's an Office document or a Web page, links the notes you're typing at the time to it. A single click from your notes later on will bring that same document or Web page back on-screen. Embedding content from Web pages and elsewhere is also easier. Using OneNote's Windows Key+S combination to grab part of the screen now allows you to specify the destination of that bitmap instead of automatically creating a new note in the Unfiled Notes area. Using the Send to OneNote buttons in Internet Explorer and Outlook results in the same choice.

Collaboration gets better too. Real-time multi-author editing is better accommodated, and determining the author lineage of particular changes is easily carried out.

My one pet peeve with OneNote is the difficulty of using it when I'm not on a Windows PC. OneNote's main competitor, Evernote, while I believe inferior in terms of features, has client versions for PC, Mac, Windows Mobile, Android, iPhone, iPad and Web browsers. Since I have an Android phone and an iPad, I am practically forced to use it. However, the OneNote Web app should help here, as should a forthcoming version of OneNote for Windows Phone 7. In the meantime, it turns out that using OneNote's Email Page ribbon button lets you move a OneNote page easily into Evernote (since every Evernote account gets a unique email address for adding notes), and that Evernote's Email function combined with Outlook's Send to OneNote button (in the Move group of the ribbon's Home tab) can achieve the reverse.

Access

To me, the big change in Access 2007 was its tight integration with SharePoint lists. Access 2010 and SharePoint 2010 continue this integration with the introduction of SharePoint's Access Services. Much as Excel Services provides a SharePoint-hosted experience for viewing (and now editing) Excel spreadsheet, PivotTable and chart content, Access Services allows for SharePoint browser-hosted editing of Access data within the forms that are built in the Access client itself.

To me this makes all kinds of sense, although it does beg the question of where to draw the line between Access, InfoPath, SharePoint list maintenance and SharePoint 2010's new Business Connectivity Services. Each of these tools provides overlapping data entry and data maintenance functionality.

But if you do prefer Access, then you'll like things like templates and application parts that make it easier to get off the blank page. These features help you quickly get tables, forms and reports built out. To make things look nice, Access even gets its own version of Excel's Conditional Formatting feature, letting you add data bars and data-driven text formatting.

Word

As I said at the beginning of this post, upgrades to Office are about much more than enhancing the suite's flagship word processing application. So are there any enhancements in Word worth mentioning? I think so. The most important one has to be the collaboration features. Essentially, when a user opens a Word document that is in a SharePoint document library (or Windows Live SkyDrive folder), rather than the whole document being locked, Word has the ability to observe more granular locks on the individual paragraphs being edited. Word also shows you who's editing what, and its Save function morphs into a sync feature that both saves your changes and loads those made by anyone editing the document concurrently.
There’s also a new navigation pane that lets you manage sections in your document in much the same way as you manage slides in a PowerPoint deck. Using the navigation pane, you can reorder sections, insert new ones, or promote and demote sections in the outline hierarchy. Not earth-shattering, but nice.

Other Apps and Summarized Findings

What about InfoPath, Publisher, Visio and Project? I haven’t looked at them yet. And for this post, I think that’s fine. While those apps (and, arguably, Access) cater to specific tasks, I think the apps we’ve looked at in this post serve the general-purpose needs of most users. And the theme in those 2010 apps is clear: collaboration is key, the Web and productivity are indivisible, and making data and analytics into a self-service amenity is the way to go. But perhaps most of all, features are still important, as long as they get you through your day faster, rather than adding complexity for its own sake. I would argue that this is true for just about every product Microsoft makes: users want utility, not complexity.

    Read the article

  • FragmentStatePagerAdapter IllegalStateException: <MyFragment> is not currently in the FragmentManager

    - by Ixx
    I'm getting this in some cases when going back, via the device's back button, to an activity that uses a FragmentStatePagerAdapter. Not always, and not reproducibly. I'm using support package v4, latest revision (8). Already searched with Google, with no success finding a useful answer. Looking in the source, it's thrown here in FragmentManager.java:

        @Override
        public void putFragment(Bundle bundle, String key, Fragment fragment) {
            if (fragment.mIndex < 0) {
                throw new IllegalStateException("Fragment " + fragment
                        + " is not currently in the FragmentManager");
            }
            bundle.putInt(key, fragment.mIndex);
        }

    But why is the fragment's index < 0 there? The code instantiating the fragments:

        @Override
        public Fragment getItem(int position) {
            Fragment fragment = null;
            switch (position) {
                case 0:
                    fragment = Fragment1.newInstance(param1);
                    break;
                case 1:
                    fragment = MyFragment2.newInstance(param2, param3);
                    break;
            }
            return fragment;
        }

        @Override
        public int getCount() {
            return 2;
        }

    I also can't debug through the source since the support package doesn't let me attach the source, for some reason... but that's a different question...
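    A defensive workaround that is sometimes suggested for this crash (a sketch, not an official fix: it skips saving the pager's fragment state entirely, so only consider it if your fragments are cheap to rebuild; Fragment1, MyFragment2 and the params are the question's own placeholders):

        import android.os.Parcelable;
        import android.support.v4.app.Fragment;
        import android.support.v4.app.FragmentManager;
        import android.support.v4.app.FragmentStatePagerAdapter;

        public class SafePagerAdapter extends FragmentStatePagerAdapter {

            public SafePagerAdapter(FragmentManager fm) {
                super(fm);
            }

            @Override
            public Parcelable saveState() {
                // Returning null means restoreState() is never fed stale fragment
                // references, so putFragment() can't be reached with mIndex < 0.
                return null;
            }

            @Override
            public Fragment getItem(int position) {
                return position == 0 ? Fragment1.newInstance(param1)
                                     : MyFragment2.newInstance(param2, param3);
            }

            @Override
            public int getCount() {
                return 2;
            }
        }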

    Read the article

  • WPF TreeView databinding to hide/show expand/collapse icon

    - by Julian Lettner
    I implemented a WPF load-on-demand treeview as described in this (very good) article. In the mentioned solution, a dummy element is used to preserve the expand/collapse icon and treeview item behavior; the dummy item is replaced with real data when the user clicks on the expander. I want to refine the model by adding a property public bool HasChildren { get { ... } } to my backing TreeNodeViewModel. Question: How can I bind this property to hide/show the expand icon (in XAML)? I am not able to find a suitable trigger/setter combination. (INotifyPropertyChanged is properly implemented.) Thanks for your time. Update 1: I want to use my public bool HasChildren property instead of the dummy element. Determining whether or not an item has children is somewhat costly, but still much cheaper than fetching the children.
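    One possible direction, sketched under assumptions rather than verified against the article's code: in the default Aero TreeViewItem template, the expand icon is a ToggleButton named "Expander" whose visibility is normally driven by HasItems, so a re-templated TreeViewItem can let the view model's HasChildren override it:

        <!-- Assumes a copy of the default TreeViewItem ControlTemplate containing
             a ToggleButton named "Expander"; HasChildren comes from the bound
             TreeNodeViewModel. -->
        <ControlTemplate.Triggers>
            <DataTrigger Binding="{Binding HasChildren}" Value="True">
                <Setter TargetName="Expander" Property="Visibility" Value="Visible"/>
            </DataTrigger>
            <DataTrigger Binding="{Binding HasChildren}" Value="False">
                <Setter TargetName="Expander" Property="Visibility" Value="Hidden"/>
            </DataTrigger>
        </ControlTemplate.Triggers>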

    Read the article

  • Activation Error while testing Exception Handling Application Block

    - by CletusLoomis
    I'm getting the following error while testing my EHAB implementation: {"Activation error occured while trying to get instance of type ExceptionPolicyImpl, key "LogPolicy""} System.Exception Stack Trace: StackTrace " at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 53 at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance[TService](String key) in c:\Home\Chris\Projects\CommonServiceLocator\main\Microsoft.Practices.ServiceLocation\ServiceLocatorImplBase.cs:line 103 at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.GetExceptionPolicy(Exception exception, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 131 at Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException(Exception exceptionToHandle, String policyName) in e:\Builds\EntLib\Latest\Source\Blocks\ExceptionHandling\Src\ExceptionHandling\ExceptionPolicy.cs:line 55 at Blackbox.Exception.ExceptionMain.LogException(Exception pException) in C:_Work_Black Box\Blackbox.Exception\ExceptionMain.vb:line 14 at BlackBox.Business.BusinessMain.TestExceptionHandling() in C:_Work_Black Box\BlackBox.Business\BusinessMain.vb:line 16 at Blackbox.Service.Service1.TestExceptionHandling() in C:_Work_Black Box\Blackbox.Service\Service.svc.vb:line 43" String Inner Exception: InnerException {"Resolution of the dependency failed, type = "Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl", name = "LogPolicy". Exception occurred while: Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter). 
Exception is: ArgumentException - Event log names must consist of printable characters and cannot contain \, *, ?, or spaces At the time of the exception, the container was: Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl,LogPolicy Resolving parameter "policyEntries" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyImpl(System.String policyName, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] policyEntries) Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry,LogPolicy.All Exceptions Resolving parameter "handlers" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicyEntry(System.Type exceptionType, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.PostHandlingAction postHandlingAction, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] handlers, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Instrumentation.IExceptionHandlingInstrumentationProvider instrumentationProvider) Resolving Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler,LogPolicy.All Exceptions.Logging Exception Handler (mapped from Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.IExceptionHandler, LogPolicy.All Exceptions.Logging Exception Handler) Resolving parameter "writer" of constructor Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler(System.String logCategory, System.Int32 eventId, System.Diagnostics.TraceEventType severity, System.String title, System.Int32 priority, System.Type formatterType, Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter writer) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl,LogWriter.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriter, (none)) Resolving parameter "structureHolder" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterImpl(Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder structureHolder, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator updateCoordinator) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder,LogWriterStructureHolder.default (mapped from Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder, (none)) Resolving parameter "traceSources" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogWriterStructureHolder(System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.Logging.Filters.ILogFilter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]] filters, System.Collections.Generic.IEnumerable1[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceSourceNames, System.Collections.Generic.IEnumerable1[[Microsoft.Practices.EnterpriseLibrary.Logging.LogSource, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, 
PublicKeyToken=31bf3856ad364e35]] traceSources, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource allEventsTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource notProcessedTraceSource, Microsoft.Practices.EnterpriseLibrary.Logging.LogSource errorsTraceSource, System.String defaultCategory, System.Boolean tracingEnabled, System.Boolean logWarningsWhenNoCategoriesMatch, System.Boolean revertImpersonation) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.LogSource,General Resolving parameter "traceListeners" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.LogSource(System.String name, System.Collections.Generic.IEnumerable1[[System.Diagnostics.TraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] traceListeners, System.Diagnostics.SourceLevels level, System.Boolean autoFlush, Microsoft.Practices.EnterpriseLibrary.Logging.Instrumentation.ILoggingInstrumentationProvider instrumentationProvider) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper,Event Log Listener (mapped from System.Diagnostics.TraceListener, Event Log Listener) Resolving parameter "wrappedTraceListener" of constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.ReconfigurableTraceListenerWrapper(System.Diagnostics.TraceListener wrappedTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging.ILoggingUpdateCoordinator coordinator) Resolving Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener,Event Log Listener?implementation (mapped from System.Diagnostics.TraceListener, Event Log Listener?implementation) Calling constructor Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener(System.String source, System.String log, System.String machineName, Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.ILogFormatter formatter) "} System.Exception My web.config is as follows: <?xml version="1.0"?> <configuration> <configSections> <section name="loggingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.LoggingSettings, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> <section name="exceptionHandling" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Configuration.ExceptionHandlingSettings, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="true" /> </configSections> <loggingConfiguration name="" tracingEnabled="true" defaultCategory="General"> <listeners> <add name="Event Log Listener" type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.FormattedEventLogTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.FormattedEventLogTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" source="Enterprise Library Logging" formatter="Text Formatter" log="C:\Blackbox.log" machineName="." 
traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" /> </listeners> <formatters> <add type="Microsoft.Practices.EnterpriseLibrary.Logging.Formatters.TextFormatter, Microsoft.Practices.EnterpriseLibrary.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" template="Timestamp: {timestamp}{newline}&#xA;Message: {message}{newline}&#xA;Category: {category}{newline}&#xA;Priority: {priority}{newline}&#xA;EventId: {eventid}{newline}&#xA;Severity: {severity}{newline}&#xA;Title:{title}{newline}&#xA;Machine: {localMachine}{newline}&#xA;App Domain: {localAppDomain}{newline}&#xA;ProcessId: {localProcessId}{newline}&#xA;Process Name: {localProcessName}{newline}&#xA;Thread Name: {threadName}{newline}&#xA;Win32 ThreadId:{win32ThreadId}{newline}&#xA;Extended Properties: {dictionary({key} - {value}{newline})}" name="Text Formatter" /> </formatters> <categorySources> <add switchValue="All" name="General"> <listeners> <add name="Event Log Listener" /> </listeners> </add> </categorySources> <specialSources> <allEvents switchValue="All" name="All Events" /> <notProcessed switchValue="All" name="Unprocessed Category" /> <errors switchValue="All" name="Logging Errors &amp; Warnings"> <listeners> <add name="Event Log Listener" /> </listeners> </errors> </specialSources> </loggingConfiguration> <exceptionHandling> <exceptionPolicies> <add name="LogPolicy"> <exceptionTypes> <add name="All Exceptions" type="System.Exception, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="NotifyRethrow"> <exceptionHandlers> <add name="Logging Exception Handler" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging.LoggingExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" logCategory="General" eventId="100" severity="Error" title="Enterprise Library Exception Handling" formatterType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.TextExceptionFormatter, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling" priority="0" /> </exceptionHandlers> </add> </exceptionTypes> </add> <add name="WcfExceptionShielding"> <exceptionTypes> <add name="InvalidOperationException" type="System.InvalidOperationException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="ThrowNewException"> <exceptionHandlers> <add type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF.FaultContractExceptionHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.WCF, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" exceptionMessageResourceType="" exceptionMessageResourceName="This is the message" exceptionMessage="This is the exception" faultContractType="Blackbox.Service.WCFFault, Blackbox.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" name="Fault Contract Exception Handler"> <mappings> <add source="{Guid}" name="Id" /> <add source="{Message}" name="MessageText" /> </mappings> </add> </exceptionHandlers> </add> </exceptionTypes> </add> </exceptionPolicies> </exceptionHandling> <connectionStrings> <add name="CompassEntities" connectionString="metadata=~\bin\CompassModel.csdl|~\bin\CompassModel.ssdl|~\bin\CompassModel.msl;provider=Devart.Data.Oracle;provider connection string=&quot;User Id=foo;Password=foo;Server=foo64mo;Home=OraClient11g_home1;Persist Security Info=True&quot;" providerName="System.Data.EntityClient" /> <add name="BlackboxEntities" 
connectionString="metadata=~\bin\BlackboxModel.csdl|~\bin\BlackboxModel.ssdl|~\bin\BlackboxModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=sqldev1\cps;Initial Catalog=FundServ;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> <system.web> <compilation debug="true" strict="false" explicit="true" targetFramework="4.0" /> </system.web> <system.serviceModel> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="false"/> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> <system.webServer> <modules runAllManagedModulesForAllRequests="true"/> </system.webServer> </configuration> My code is as follows: Public Shared Function LogException(ByVal pException As System.Exception) As Boolean Return ExceptionPolicy.HandleException(pException, "LogPolicy") End Function Any assistance is appreciated.

    Read the article

  • Active and Passive Federation in WIF

    - by user102533
    I am trying to understand the difference between active and passive federation in WIF. It appears that one would use active federation if the relying party (RP) is a WCF service, and passive federation if the RP is an ASP.NET application. Is this accurate? So, in a scenario in which an ASP.NET application uses a WCF service in the backend, the MS articles suggest using a 'bootstrap' security token that the ASP.NET app presents to an ActAs STS, with the resulting token used to authenticate with the WCF service. In this scenario, it appears that we are doing a combination of passive (user to STS to ASP.NET RP) and active (ASP.NET to ActAs STS to WCF) federation?
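    For context, a minimal sketch of the bootstrap-token hop as it looked in WIF 1.0 (Microsoft.IdentityModel); IMovieService and the "FederationEndpoint" configuration name are hypothetical placeholders:

        using System.IdentityModel.Tokens;
        using System.ServiceModel;
        using System.Threading;
        using Microsoft.IdentityModel.Claims;
        using Microsoft.IdentityModel.Protocols.WSTrust;

        // The bootstrap token was issued during the user's passive sign-in to the
        // ASP.NET app (requires saveBootstrapTokens="true" in the WIF config).
        var identity = (IClaimsIdentity)Thread.CurrentPrincipal.Identity;
        SecurityToken bootstrapToken = identity.BootstrapToken;

        // Active (WS-Trust) leg: call the WCF service with a token acquired
        // ActAs the original user.
        var factory = new ChannelFactory<IMovieService>("FederationEndpoint");
        factory.ConfigureChannelFactory();  // attaches the WIF channel behaviors
        IMovieService channel = factory.CreateChannelActingAs(bootstrapToken);
        channel.DoWork();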

    Read the article

  • SSH / SFTP connection issue using Tamir.SharpSsh

    - by jinsungy
    This is my code to connect and send a file to a remote SFTP server:

        public static void SendDocument(string fileName, string host, string remoteFile, string user, string password)
        {
            Scp scp = new Scp();
            scp.OnConnecting += new FileTansferEvent(scp_OnConnecting);
            scp.OnStart += new FileTansferEvent(scp_OnProgress);
            scp.OnEnd += new FileTansferEvent(scp_OnEnd);
            scp.OnProgress += new FileTansferEvent(scp_OnProgress);

            try
            {
                scp.To(fileName, host, remoteFile, user, password);
            }
            catch (Exception e)
            {
                throw e;
            }
        }

    I can successfully connect, send and receive files using CoreFTP, so the issue is not with the server. When I run the above code, the process seems to stop at the scp.To method; it just hangs indefinitely. Anyone know what my problem might be? Maybe it has something to do with adding the key to an SSH cache? If so, how would I go about this? EDIT: I inspected the packets using Wireshark and discovered that my computer is not sending the Diffie-Hellman Key Exchange Init. This must be the issue. EDIT: I ended up using the following code. Note that StrictHostKeyChecking was turned off to make things easier.

        JSch jsch = new JSch();
        jsch.setKnownHosts(host);
        Session session = jsch.getSession(user, host, 22);
        session.setPassword(password);

        System.Collections.Hashtable hashConfig = new System.Collections.Hashtable();
        hashConfig.Add("StrictHostKeyChecking", "no");
        session.setConfig(hashConfig);

        try
        {
            session.connect();
            Channel channel = session.openChannel("sftp");
            channel.connect();
            ChannelSftp c = (ChannelSftp)channel;
            c.put(fileName, remoteFile);
            c.exit();
        }
        catch (Exception e)
        {
            throw e;
        }

    Thanks.

    Read the article

  • C# Ordered dictionary index

    - by Martijn
    I am considering using the OrderedDictionary. As a key I want to use a long value (id), and the value will be a custom object. I use the OrderedDictionary because I want to get an object by its Id and also by its 'collection' index. I want to use the OrderedDictionary like this:

        // _dict is declared as: OrderedDictionary _dict = new OrderedDictionary();
        public void AddObject(MyObject obj)
        {
            _dict.Add(obj.Id, obj);
        }

    Somewhere else in my code I have something similar to this:

        public MyObject GetNextObject()
        {
            // In my code I keep track of the current index and check that
            // _currentIndex doesn't exceed the collection's bounds.
            _currentIndex++;
            return _dict[_currentIndex] as MyObject;
        }

    Now my question is: in the last method I've used an index. Imagine _currentIndex is set to 10, but I also have an object with an Id of 10. I've set the Id as a key, and the Id of MyObject is of type long?. Does this go wrong?
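    For what it's worth, a short sketch of how OrderedDictionary resolves its two indexers (assuming keys stored as long, as in the question): an int argument binds to the positional this[int index] overload, while a long argument cannot implicitly convert to int, so it boxes and binds to the key-based this[object key] overload. The key lookup also compares types, so a boxed int never matches a boxed long key:

        using System;
        using System.Collections.Specialized;

        class Program
        {
            static void Main()
            {
                var dict = new OrderedDictionary();
                dict.Add(10L, "stored under long key 10");

                Console.WriteLine(dict[0]);                   // int -> positional index 0
                Console.WriteLine(dict[10L]);                 // long -> boxed -> key lookup
                Console.WriteLine(dict[(object)10] == null);  // boxed int != long key -> True
            }
        }

    So _dict[_currentIndex] with an int counter is always a positional lookup; the risk is only in passing the long id where the index is meant, or vice versa.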

    Read the article

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the below question: all assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know which specific video clip is next until we actually need to start it. Specifically: when one video clip is playing, we will know what the next set of (roughly) 10 video clips is, but we don't know which one exactly until it comes time to 'immediately' play the next clip. What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond, to see when the video actually started to play, and I take the difference between that timestamp and the first place in the code that indicates which asset to start playing. From what I've seen thus far, the combination of loading an AVAsset, then creating an AVPlayerItem from it once it's ready, and then waiting for AVPlayerStatusReadyToPlay before calling play tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay before playing. Roughly the same performance. One thing I'm observing is that the first AVPlayer item load is slower than the rest, so it seems that pre-flighting the AVPlayer with a short/empty asset before playing the first real video might be good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help. Update: idea 7, below, as implemented yields switching times of around 500 ms. This is an improvement, but it'd be nice to get this even faster.

    Idea 1: Use N AVPlayers (won't work). Use ~10 AVPlayer objects, start and pause all ~10 clips, and once we know which one we really need, switch to and un-pause the correct AVPlayer, then start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayers in iOS. There was someone asking about this on StackOverflow who found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation

    Idea 2: Use AVQueuePlayer (won't work). I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for a seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we want to play back until it's time to start that one. ios-avplayer-video-preloading

    Idea 3: Load, play, and retain AVPlayerItems in the background (not 100% sure yet -- but not looking good). I'm looking at whether there is any benefit to loading and playing the first second of each video clip in the background (suppressing video and audio output), keeping a reference to each AVPlayerItem, and, when we know which item needs to be played for real, swapping that one in and swapping the background AVPlayer with the active one. Rinse and repeat. The theory is that recently played AVPlayer/AVPlayerItems may still hold some prepared resources, which would make subsequent playback faster.
    So far, I have not seen benefits from this, but I might not have the AVPlayerLayer set up correctly for the background. I doubt this will really improve things, from what I've seen.

    Idea 4: Use a different file format -- maybe one that is faster to load? I'm currently using .m4v (MPEG-4 video) in H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possibly still MPEG-4 video but with a different codec, or maybe QuickTime? Maybe a lossless video format where decoding / setup is faster?

    Idea 5: Combination of a lossless video format + AVQueuePlayer. If there is a video format that is fast to load but where the file size might be insane, one idea would be to pre-prepare the first 10 seconds of each video clip in a version that is bloated but faster to load, backed by an asset encoded in H.264. Use an AVQueuePlayer, add the first 10 seconds in the uncompressed file format, and follow that with the H.264 version, which then gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also the benefits of a more compact format.

    Idea 6: Use a non-standard AVPlayer / write my own / use someone else's. Given my needs, maybe I can't use AVPlayer but have to resort to AVAssetReader, decode the first few seconds (possibly writing the raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way, it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking, if we go with ~30 fps for the video, I'd end up with a ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native OpenGL ES texture compression formats, via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use?

    Idea 7: Combine everything into a single movie asset, and seekToTime. One idea that might be easier than some of the above is to combine everything into a single movie and use seekToTime. The thing is that we'd be jumping all around the place: essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5

    Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.
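    Since idea 7 is the one that got switching down to ~500 ms, here is a minimal sketch of its precise-seek variant, assuming the clip start offsets within the combined movie are known ahead of time (clipStartSeconds is a hypothetical precomputed value; the completion-handler form of seekToTime requires iOS 5):

        // Seek with zero tolerance so playback starts exactly at the clip
        // boundary rather than at the nearest keyframe, then play once the
        // seek lands.
        CMTime target = CMTimeMakeWithSeconds(clipStartSeconds, 600);
        [player seekToTime:target
           toleranceBefore:kCMTimeZero
            toleranceAfter:kCMTimeZero
         completionHandler:^(BOOL finished) {
             if (finished) {
                 [player play];
             }
         }];

    Note that zero-tolerance seeks can themselves be slower than keyframe-aligned ones; encoding the combined movie with keyframes at each clip boundary is one way to keep both fast (an assumption worth testing, not a guarantee).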

    Read the article

  • bind a WPF datagrid to a datatable

    - by Jim Thomas
    I have used the marvelous example posted at http://www.codeproject.com/KB/WPF/WPFDataGridExamples.aspx to bind a WPF datagrid to a datatable. The source code below compiles fine; it even runs and displays the contents of the InfoWork datatable in the WPF datagrid. Hooray! But the WPF page with the datagrid will not display in the designer; I get an incomprehensible error instead on my design page, which is shown at the end of this posting. I assume the designer is having some difficulty instantiating the dataview for display in the grid. How can I fix that? XAML code:

        xmlns:local="clr-namespace:InfoSeeker"

        <Window.Resources>
            <ObjectDataProvider x:Key="InfoWorkData" ObjectType="{x:Type local:InfoWorkData}" />
            <ObjectDataProvider x:Key="InfoWork" ObjectInstance="{StaticResource InfoWorkData}"
                                MethodName="GetInfoWork" />
        </Window.Resources>

        <my:DataGrid DataContext="{Binding Source={StaticResource InfoWork}}"
                     AutoGenerateColumns="True" ItemsSource="{Binding}" Name="dataGrid1"
                     xmlns:my="http://schemas.microsoft.com/wpf/2008/toolkit" />

    C# code:

        namespace InfoSeeker
        {
            public class InfoWorkData
            {
                private InfoTableAdapters.InfoWorkTableAdapter infoAdapter;
                private Info infoDS;

                public InfoWorkData()
                {
                    infoDS = new Info();
                    infoAdapter = new InfoTableAdapters.InfoWorkTableAdapter();
                    infoAdapter.Fill(infoDS.InfoWork);
                }

                public DataView GetInfoWork()
                {
                    return infoDS.InfoWork.DefaultView;
                }
            }
        }

    Error shown in place of the designer page which has the grid on it: An unhandled exception has occurred: Type 'MS.Internal.Permissions.UserInitiatedNavigationPermission' in Assembly 'PresentationFramework, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' is not marked as serializable. at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(RuntimeType type) at System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context) at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo() at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, ISurrogateSelector surrogateSelector, StreamingContext context, SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, ObjectWriter objectWriter) ...At: Ms.Internal.Designer.DesignerPane.LoadDesignerView()
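    A common way out of designer crashes like this (a hedged sketch rather than a verified fix for this exact error): the ObjectDataProvider makes the designer construct InfoWorkData, whose constructor immediately hits the database, so guard the constructor against design mode. DesignerProperties lives in System.ComponentModel:

        using System.ComponentModel;
        using System.Windows;

        public InfoWorkData()
        {
            // Skip the database work when Visual Studio's designer instantiates
            // us; the design surface then shows an empty grid instead of crashing.
            if (DesignerProperties.GetIsInDesignMode(new DependencyObject()))
                return;

            infoDS = new Info();
            infoAdapter = new InfoTableAdapters.InfoWorkTableAdapter();
            infoAdapter.Fill(infoDS.InfoWork);
        }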

    Read the article

  • Adaptive Threshold in OpenCV (Version 1 - the swig version)

    - by Neil Benn
    Hello, I'm trying to get adaptive thresholding working in the Python binding to OpenCV (the SWIG one; I cannot get OpenCV 2.0 working, as I am using a BeagleBoard and the cross-compiling is not working yet). I have a greyscale image (ccg.jpg) and the following code:

        import opencv
        from opencv import highgui

        img = highgui.cvLoadImage("ccg.png")
        img_bw = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1)
        opencv.cvAdaptiveThreshold(img, img_bw, 125,
                                   opencv.CV_ADAPTIVE_THRESH_MEAN_C,
                                   opencv.CV_THRESH_BINARY, 7, 10)

    When I run this I get the error: RuntimeError: openCV Error: Status=Formats of input arguments do not match function name=cvAdaptiveThreshold error messgae= file_name=cvadapthresh.cpp line=122 I've also tried making the source and destination arguments the same (greyscale), and I get the error 'Unsupported format or combination of formats'. Does anyone have any clues as to where I could be going wrong? Cheers, Neil
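    A hedged guess at the cause, with a sketch of the fix: cvLoadImage returns a 3-channel BGR image by default, even when the file on disk is greyscale, while cvAdaptiveThreshold requires a single-channel 8-bit source and destination. Converting first (or loading with the iscolor flag set to 0) should align the formats; exact module and constant names varied a bit across the old SWIG builds, so treat these as assumptions:

        import opencv
        from opencv import highgui

        img = highgui.cvLoadImage("ccg.png")  # 3-channel BGR by default

        # Make a single-channel copy to threshold.
        img_gray = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1)
        opencv.cvCvtColor(img, img_gray, opencv.CV_BGR2GRAY)

        img_bw = opencv.cvCreateImage(opencv.cvGetSize(img), opencv.IPL_DEPTH_8U, 1)
        opencv.cvAdaptiveThreshold(img_gray, img_bw, 255,
                                   opencv.CV_ADAPTIVE_THRESH_MEAN_C,
                                   opencv.CV_THRESH_BINARY, 7, 10)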

    Read the article

  • adding UIImageView to UIScrollView throws exception cocoa touch for iPad

    - by Brodie4598
    I am a noob at Obj-C :) I am trying to add a UIImageView to a UIScrollView to display a large image in my iPad app. I have followed the tutorial here exactly: http://howtomakeiphoneapps.com/2009/12/how-to-use-uiscrollview-in-your-iphone-app/ The only difference is that in my app the view is in a separate tab and I am using a different image. Here is my code:

        - (void)viewDidLoad {
            [super viewDidLoad];
            UIImageView *tempImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"Cheyenne81"]];
            self.imageView = tempImageView;
            [tempImageView release];

            scrollView.contentSize = CGSizeMake(imageView.frame.size.width, imageView.frame.size.height);
            scrollView.maximumZoomScale = 4.0;
            scrollView.minimumZoomScale = 0.75;
            scrollView.clipsToBounds = YES;
            scrollView.delegate = self;
            [scrollView addSubview:imageView];
        }

        - (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView {
            return imageView;
        }

    and:

        @interface UseScrollViewViewController : UIViewController <UIScrollViewDelegate> {
            IBOutlet UIScrollView *scrollView;
            UIImageView *imageView;
        }

        @property (nonatomic, retain) UIScrollView *scrollView;
        @property (nonatomic, retain) UIImageView *imageView;

        @end

    I then create a UIScrollView in Interface Builder and link it to the scrollView outlet. That's when I get the problem: when I run the program, it crashes instantly. If I run it without linking the scrollView to the outlet, it will run (albeit with a blank screen). The following is the error I get in the console: 2010-03-27 20:18:13.467 UseScrollViewViewController[7421:207] * Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[ setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key scrollView.'
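    For what it's worth, an NSUnknownKeyException on an outlet usually means the object the connection lands on doesn't actually expose that property: typically the class of File's Owner (or the view controller object) in the XIB isn't set to UseScrollViewViewController, so the nib tries to set scrollView on a plain UIViewController. Also note that on the pre-auto-synthesize toolchain this question dates from, declared properties need explicit @synthesize; a hedged sketch of the implementation side:

        @implementation UseScrollViewViewController

        @synthesize scrollView;
        @synthesize imageView;

        // viewDidLoad and viewForZoomingInScrollView: as shown in the question...

        - (void)dealloc {
            [scrollView release];
            [imageView release];
            [super dealloc];
        }

        @end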

    Read the article

  • InvalidOperationException when calling SaveChanges in .NET Entity framework

    - by Pär Björklund
    Hi, I'm trying to learn how to use the Entity Framework but I've hit an issue I can't solve. I'm walking through a list of movies that I have and inserting each one into a simple database. This is the code I'm using:

        private void AddMovies(DirectoryInfo dir)
        {
            MovieEntities db = new MovieEntities();
            foreach (DirectoryInfo d in dir.GetDirectories())
            {
                Movie m = new Movie { Name = d.Name, Path = dir.FullName };
                db.AddToMovies(m);
            }
            db.SaveChanges();
        }

    When I do this I get an exception at db.SaveChanges() that reads: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. I haven't been able to find out what's causing this issue. My database table contains three columns:

        Id    int       autoincrement
        Name  nchar(255)
        Path  nchar(255)

    Update: I checked my edmx file and the SSDL section has StoreGeneratedPattern="Identity" as suggested. I also followed the blog post and tried to add ClientAutoGenerated="true" and StoreGenerated="true" in the CSDL, as suggested there. This resulted in compile errors (Error 5: The 'ClientAutoGenerated' attribute is not allowed.). Since the blog post is from 2006 and it has a link to a follow-up post, I assume it's been changed. However, I cannot read the follow-up post since it seems to require an MSDN account.
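    For reference, this particular AcceptChanges conflict usually traces back to the store model not treating Id as database-generated: without that, every newly inserted Movie keeps the default key value (0), and the second row collides in the ObjectStateManager. The attribute belongs on the key property in the SSDL, as a sketch (verify it survives an "Update Model from Database", which was known to drop hand edits in early designer versions):

        <!-- In the SSDL section of the .edmx -->
        <Property Name="Id" Type="int" Nullable="false" StoreGeneratedPattern="Identity" />

    No CSDL-side attribute should be needed in the shipped EF v1; the ClientAutoGenerated/StoreGenerated attributes come from pre-release blog posts, which is consistent with the compile error above.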

    Read the article

< Previous Page | 356 357 358 359 360 361 362 363 364 365 366 367  | Next Page >