Search Results

Search found 37400 results on 1496 pages for 'image control'.

  • UTF-8 GET using Indy 10.5.8.0 and Delphi XE2

    - by Bogdan Botezatu
    I'm writing my first Unicode application with Delphi XE2 and I've stumbled upon an issue with GET requests to a Unicode URL. In short, it's a routine in an MP3 tagging application that takes a track title and an artist and queries Last.fm for the corresponding album, track number and genre. I have the following code:

        function GetMP3Info(artist, track: string) : TMP3Data  // <-- (this is a record)
        var
          TrackTitle, ArtistTitle : WideString;
          webquery : WideString;
        [....]
          WebQuery := UTF8Encode('http://ws.audioscrobbler.com/2.0/?method=track.getcorrection&api_key=' + apikey + '&artist=' + artist + '&track=' + track);
          // [processing the result in the web query, getting the correction for the artist and title]
          // e.g. for artist := Bucovina and track := Mestecanis, the corrected values are
          // ArtistTitle := Bucovina;
          // TrackTitle := Mestecani?;
          // Now here is the tricky part:
          webquery := UTF8Encode('http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=' + apikey + '&artist=' + unescape(ArtistTitle) + '&track=' + unescape(TrackTitle));
          // the unescape function replaces spaces (' ') with '+' to comply with Last.fm requests
        [some more processing]
        end;

    The webquery looks just right in a TMemo (http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestecani?). Yet when I send a GET() for the webquery using TIdHTTP (with the ContentEncoding property set to 'UTF-8'), I see in Wireshark that the component is requesting the ANSI value '/2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestec?ni?'. Here are the full headers for the GET request and response:

        GET /2.0/?method=track.getInfo&api_key=e5565002840xxxxxxxxxxxxxx23b98ad&artist=Bucovina&track=Mestec?ni? HTTP/1.1
        Content-Encoding: UTF-8
        Host: ws.audioscrobbler.com
        Accept: text/html, */*
        Accept-Encoding: identity
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.23) Gecko/20110920 Firefox/3.6.23 SearchToolbar/1.22011-10-16 20:20:07

        HTTP/1.0 400 Bad Request
        Date: Tue, 09 Oct 2012 20:46:31 GMT
        Server: Apache/2.2.22 (Unix)
        X-Web-Node: www204
        Access-Control-Allow-Origin: *
        Access-Control-Allow-Methods: POST, GET, OPTIONS
        Access-Control-Max-Age: 86400
        Cache-Control: max-age=10
        Expires: Tue, 09 Oct 2012 20:46:42 GMT
        Content-Length: 114
        Connection: close
        Content-Type: text/xml; charset=utf-8;

        <?xml version="1.0" encoding="utf-8"?>
        <lfm status="failed">
          <error code="6">Track not found</error>
        </lfm>

    The question that puzzles me: am I overlooking anything related to the properties of the TIdHTTP control? How can I stop the well-formatted URL I'm composing in the application from being wrongly sent to the server? Thanks.
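
    One approach worth trying (a minimal sketch, not from the original post): build the URL from the raw, unencoded values and let Indy's TIdURI percent-encode the non-ASCII characters, rather than pre-encoding with UTF8Encode. Note that the Content-Encoding header applies to a message body, not to the request line, so it cannot fix the URL itself. FetchTrackInfo is a hypothetical helper:

        uses
          IdHTTP, IdURI;

        function FetchTrackInfo(const apikey, artist, track: string): string;
        var
          http: TIdHTTP;
          url: string;
        begin
          http := TIdHTTP.Create(nil);
          try
            // Build the URL from raw values, then let TIdURI percent-encode
            // the non-ASCII characters before the request goes on the wire.
            url := 'http://ws.audioscrobbler.com/2.0/?method=track.getInfo&api_key='
                   + apikey + '&artist=' + artist + '&track=' + track;
            Result := http.Get(TIdURI.URLEncode(url));
          finally
            http.Free;
          end;
        end;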

  • plupload not working in wordpress theme files

    - by Kedar B
    This is my code for the image upload:

        <a id="aaiu-uploader" class="aaiu_button" href="#"><?php _e('*Select Images (mandatory)','wpestate');?></a>
        <input type="hidden" name="attachid" id="attachid" value="<?php echo $attachid;?>">
        <input type="hidden" name="attachthumb" id="attachthumb" value="<?php echo $thumbid;?>">

    I want the upload functionality to work more than once on a single page in WordPress. I added JS code for a second upload block, the same as the first, but it's not working. This is the JS code for the image upload:

        jQuery(document).ready(function($) {
            "use strict";
            if (typeof(plupload) !== 'undefined') {
                var uploader = new plupload.Uploader(ajax_vars.plupload);
                uploader.init();
                uploader.bind('FilesAdded', function (up, files) {
                    $.each(files, function (i, file) {
                        // console.log('append' + file.id);
                        $('#aaiu-upload-imagelist').append(
                            '<div id="' + file.id + '">' + file.name +
                            ' (' + plupload.formatSize(file.size) + ') <b></b></div>');
                    });
                    up.refresh(); // reposition Flash/Silverlight
                    uploader.start();
                });
                uploader.bind('UploadProgress', function (up, file) {
                    $('#' + file.id + " b").html(file.percent + "%");
                });
                // on error
                uploader.bind('Error', function (up, err) {
                    $('#aaiu-upload-imagelist').append("<div>Error: " + err.code +
                        ", Message: " + err.message +
                        (err.file ? ", File: " + err.file.name : "") + "</div>");
                    up.refresh(); // reposition Flash/Silverlight
                });
                uploader.bind('FileUploaded', function (up, file, response) {
                    var result = $.parseJSON(response.response);
                    // console.log(result);
                    $('#' + file.id).remove();
                    if (result.success) {
                        $('#profile-image').css('background-image', 'url("' + result.html + '")');
                        $('#profile-image').attr('data-profileurl', result.html);
                        $('#profile-image_id').val(result.attach);
                        var all_id = $('#attachid').val();
                        all_id = all_id + "," + result.attach;
                        $('#attachid').val(all_id);
                        $('#imagelist').append('<div class="uploaded_images" data-imageid="' + result.attach + '"><img src="' + result.html + '" alt="thumb" /><i class="fa deleter fa-trash-o"></i></div>');
                        delete_binder();
                        thumb_setter();
                    }
                });
                $('#aaiu-uploader').click(function (e) {
                    uploader.start();
                    e.preventDefault();
                });
                $('#aaiu-uploader2').click(function (e) {
                    uploader.start();
                    e.preventDefault();
                });
            }
        });
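
    One approach that usually works (a sketch, not tested against this theme): give each upload block its own plupload.Uploader instance instead of binding two buttons to a single uploader. browse_button is a standard plupload setting; the second block's element ids (aaiu-uploader2, aaiu-upload-imagelist2) are assumptions here:

        jQuery(document).ready(function ($) {
            "use strict";
            if (typeof plupload === 'undefined') { return; }

            // Build one independent uploader per block, so each button and
            // file list has its own instance and state.
            function makeUploader(buttonId, listId) {
                // Clone the localized settings and point browse_button at this block.
                var settings = $.extend({}, ajax_vars.plupload, { browse_button: buttonId });
                var up = new plupload.Uploader(settings);
                up.init();
                up.bind('FilesAdded', function (up, files) {
                    $.each(files, function (i, file) {
                        $('#' + listId).append('<div id="' + file.id + '">' +
                            file.name + ' <b></b></div>');
                    });
                    up.start();
                });
                up.bind('UploadProgress', function (up, file) {
                    $('#' + file.id + ' b').html(file.percent + '%');
                });
                return up;
            }

            makeUploader('aaiu-uploader', 'aaiu-upload-imagelist');
            makeUploader('aaiu-uploader2', 'aaiu-upload-imagelist2');
        });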

  • Scrolling RelativeLayout- white border over part of the content

    - by Tanis.7x
    I have a fairly simple Fragment that adds a handful of colored ImageViews to a RelativeLayout. There are more images than can fit on screen, so I implemented some custom scrolling. However, when I scroll around, there is an approximately 90dp white border overlapping part of the content, right where the edges of the screen were before I scrolled. The ImageViews are clearly still being created and drawn properly, but they are being covered up. How do I get rid of this?

    I have tried:

    - Changing both the RelativeLayout and FrameLayout to WRAP_CONTENT, FILL_PARENT, MATCH_PARENT, and a few combinations of those.
    - Setting the padding and margins of both layouts to 0dp.

    Example fragment:

        public class MyFrag extends Fragment implements OnTouchListener {
            int currentX;
            int currentY;
            RelativeLayout container;
            final int[] colors = {Color.BLACK, Color.RED, Color.BLUE};

            @Override
            public View onCreateView(LayoutInflater inflater, ViewGroup fragContainer, Bundle savedInstanceState) {
                return inflater.inflate(R.layout.fragment_myfrag, null);
            }

            @Override
            public void onActivityCreated(Bundle savedInstanceState) {
                super.onActivityCreated(savedInstanceState);
                container = (RelativeLayout) getView().findViewById(R.id.container);
                container.setOnTouchListener(this);

                // Temp - add a bunch of images to test scrolling
                for (int i = 0; i < 1500; i += 100) {
                    for (int j = 0; j < 1500; j += 100) {
                        int color = colors[(i + j) % 3];
                        ImageView image = new ImageView(getActivity());
                        image.setScaleType(ImageView.ScaleType.CENTER);
                        image.setBackgroundColor(color);
                        LayoutParams lp = new RelativeLayout.LayoutParams(100, 100);
                        lp.setMargins(i, j, 0, 0);
                        image.setLayoutParams(lp);
                        container.addView(image);
                    }
                }
            }

            @Override
            public boolean onTouch(View v, MotionEvent event) {
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN: {
                        currentX = (int) event.getRawX();
                        currentY = (int) event.getRawY();
                        break;
                    }
                    case MotionEvent.ACTION_MOVE: {
                        int x2 = (int) event.getRawX();
                        int y2 = (int) event.getRawY();
                        container.scrollBy(currentX - x2, currentY - y2);
                        currentX = x2;
                        currentY = y2;
                        break;
                    }
                    case MotionEvent.ACTION_UP: {
                        break;
                    }
                }
                return true;
            }
        }

    XML:

        <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            tools:context=".FloorPlanFrag">

            <RelativeLayout
                android:id="@+id/container"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" />
        </FrameLayout>
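
    A possibility worth testing (a guess, not a confirmed fix from the thread): children are normally clipped to their parent's bounds, and the RelativeLayout is measured at screen size, so views laid out past the original edges can end up cut off. A sketch to drop into onActivityCreated (requires android.widget.FrameLayout and android.view.ViewGroup imports); the FrameLayout.LayoutParams type matches the parent declared in the XML:

        // Option 1: size the container to cover the whole content area
        // (1500 matches the loop bounds used above).
        container.setLayoutParams(new FrameLayout.LayoutParams(1500, 1500));

        // Option 2: ask the hierarchy not to clip children to parent bounds.
        container.setClipChildren(false);
        ((ViewGroup) container.getParent()).setClipChildren(false);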

  • Editing Neo Slider to move in one direction

    - by user2106416
    I have a website with the BrandSpace theme from Pixelentity. Currently, the slider moves from right to left (auto transition), but when it reaches the last image it goes back to the first by sliding all the previous images back from left to right. I want the slider to make a normal transition, i.e. return to the first image while still sliding from right to left. Below is the slider.php code:

        <?php $t =& peTheme(); ?>
        <?php list($pid,$conf,$loop) = $t->template->data(); ?>
        <?php $w = $t->media->width(940); ?>
        <?php $h = $t->media->height(300); ?>
        <div class="peSlider peVolo" data-autopause="disabled" data-plugin="<?php echo apply_filters("pe_theme_slider_plugin","peVolo"); ?>" data-controls-arrows="edges-full" data-controls-bullets="disabled" data-icon-font="enabled">
            <?php while ($slide =& $loop->next()): ?>
                <?php $link = empty($slide->link) ? false : $slide->link; ?>
                <?php $img = $t->image->resizedImg($slide->img,$w,$h); ?>
                <div data-delay="4"<?php echo $slide->idx == 0 ? ' class="visible"' : ''; ?>>
                    <?php if (isset($slide->caption)) $t->slider->caption($slide->caption); ?>
                    <?php if ($link): ?>
                        <a href="<?php echo $link ?>" data-flare-gallery="fsGallery<?php echo $pid ?>">
                            <?php echo $img; ?>
                        </a>
                    <?php else: ?>
                        <?php echo $img; ?>
                    <?php endif; ?>
                </div>
            <?php endwhile; ?>
        </div>

  • How To Load Images into Custom UITableViewCell?

    - by Clifton Burt
    This problem is simple, but crucial and urgent. Here's what needs to be done: load 66px x 66px images into the table cells in the MainViewController table. Each table cell has a unique image. But how? Would we use cell.image?

        cell.image = [UIImage imageNamed:@"image.png"];

    If so, where? Is an if/else statement required? Help? Here's the project code, hosted on Google Code, for easy and quick reference: http://www.google.com/codesearch/p?hl=en#Fcn2OtVUXnY/trunk/apple-sample-code/NavBar/NavBar/MyCustomCell.m&q=MyCustomCell%20lang:objectivec

    To load each cell's labels, MainViewController uses an NSDictionary and NSLocalizedString like so:

        // cell one
        [menuList addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            NSLocalizedString(@"PageOneTitle", @""), kTitleKey,
            NSLocalizedString(@"PageOneExplain", @""), kExplainKey, nil]];
        // cell two
        [menuList addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            NSLocalizedString(@"PageOneTitle", @""), kTitleKey,
            NSLocalizedString(@"PageOneExplain", @""), kExplainKey, nil]];
        ...

        // this is where MainViewController loads the cell content
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            MyCustomCell *cell = (MyCustomCell *)[tableView dequeueReusableCellWithIdentifier:kCellIdentifier];
            if (cell == nil)
            {
                cell = [[[MyCustomCell alloc] initWithFrame:CGRectZero reuseIdentifier:kCellIdentifier] autorelease];
            }
        ...

        // MyCustomCell.m adds the subviews
        - (id)initWithFrame:(CGRect)aRect reuseIdentifier:(NSString *)identifier
        {
            self = [super initWithFrame:aRect reuseIdentifier:identifier];
            if (self)
            {
                // you can do this here specifically or at the table level for all cells
                self.accessoryType = UITableViewCellAccessoryDisclosureIndicator;

                // Create label views to contain the various pieces of text that make up the cell.
                // Add these as subviews.
                nameLabel = [[UILabel alloc] initWithFrame:CGRectZero]; // layoutSubViews will decide the final frame
                nameLabel.backgroundColor = [UIColor clearColor];
                nameLabel.opaque = NO;
                nameLabel.textColor = [UIColor blackColor];
                nameLabel.highlightedTextColor = [UIColor whiteColor];
                nameLabel.font = [UIFont boldSystemFontOfSize:18];
                [self.contentView addSubview:nameLabel];

                explainLabel = [[UILabel alloc] initWithFrame:CGRectZero]; // layoutSubViews will decide the final frame
                explainLabel.backgroundColor = [UIColor clearColor];
                explainLabel.opaque = NO;
                explainLabel.textColor = [UIColor grayColor];
                explainLabel.highlightedTextColor = [UIColor whiteColor];
                explainLabel.font = [UIFont systemFontOfSize:14];
                [self.contentView addSubview:explainLabel];

                // added to mark where the thumbnail image should go
                imageView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 66, 66)];
                [self.contentView addSubview:imageView];
            }
            return self;
        }

    HELP?
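
    One possible answer (a sketch, not the project's code): no if/else chain is needed if each row's image name is stored in the same dictionary as its labels under a hypothetical kImageKey. This assumes the labels are exposed as properties and that the cell's image placeholder is a UIImageView (the sample declares it as a plain UIView, so a thumbView property is assumed here):

        // Same pre-ARC style as the sample project.
        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            MyCustomCell *cell = (MyCustomCell *)[tableView dequeueReusableCellWithIdentifier:kCellIdentifier];
            if (cell == nil)
            {
                cell = [[[MyCustomCell alloc] initWithFrame:CGRectZero reuseIdentifier:kCellIdentifier] autorelease];
            }

            NSDictionary *item = [menuList objectAtIndex:indexPath.row];
            cell.nameLabel.text    = [item objectForKey:kTitleKey];
            cell.explainLabel.text = [item objectForKey:kExplainKey];

            // kImageKey is hypothetical: store a per-row image name (e.g.
            // @"page_one_thumb.png") next to kTitleKey/kExplainKey.
            cell.thumbView.image = [UIImage imageNamed:[item objectForKey:kImageKey]];
            return cell;
        }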

  • How do I stack images to simulate depth using Core Animation?

    - by Jeffrey Berthiaume
    I have a series of UIImages with which I need to simulate depth. I can't use scaling because I need to be able to rotate the parent view, and the images should look like they're stacked visibly in front of each other, not on the same plane. I made a new ViewController-based project and put this in the viewDidLoad (as well as attached three 120x120 pixel images named 1.png, 2.png, and 3.png):

        - (void)viewDidLoad {
            // display image 3
            UIImageView *three = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"3.png"]];
            three.center = CGPointMake(160 + 60, 240 - 60);
            [self.view addSubview:three];

            // rotate image 3 around the z axis
            // THIS IS INCORRECT
            CATransform3D theTransform = three.layer.transform;
            theTransform.m34 = 1.0 / -1000;
            three.layer.transform = theTransform;

            // display image 2
            UIImageView *two = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"2.png"]];
            two.center = CGPointMake(160, 240);
            [self.view addSubview:two];

            // display image 1
            UIImageView *one = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"1.png"]];
            one.center = CGPointMake(160 - 60, 240 + 60);
            [self.view addSubview:one];

            // rotate image 1 around the z axis
            // THIS IS INCORRECT
            theTransform = one.layer.transform;
            theTransform.m34 = 1.0 / 1000;
            one.layer.transform = theTransform;

            // release the images
            [one release];
            [two release];
            [three release];

            // rotate the parent view around the y axis
            theTransform = self.view.layer.transform;
            theTransform.m14 = 1.0 / -500;
            self.view.layer.transform = theTransform;

            [super viewDidLoad];
        }

    I have very specific reasons why I'm not using an EAGLView and why I'm not loading the images as CALayers (i.e. why I'm using UIImageViews for each one). This is just a quick demo that I can use to work out exactly what I need in my parent application. Is there some matrix way to translate these 2D images along the z axis so they will look like what I'm trying to represent? I've gone through the other StackOverflow articles as well as the Wikipedia references, and have not found what I'm looking for, although I might not necessarily be using the right terms for what I'm trying to do.
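
    A sketch of the translation approach (not from the original post; requires QuartzCore): set a perspective on the parent layer's sublayerTransform once, then move each image along z with CATransform3DMakeTranslation instead of setting m34/m14 on the children. The z values and rotation angle below are arbitrary:

        // #import <QuartzCore/QuartzCore.h>
        CATransform3D perspective = CATransform3DIdentity;
        perspective.m34 = -1.0 / 500.0;                    // perspective strength
        self.view.layer.sublayerTransform = perspective;

        three.layer.transform = CATransform3DMakeTranslation(0, 0, -100); // farthest
        two.layer.transform   = CATransform3DMakeTranslation(0, 0, 0);
        one.layer.transform   = CATransform3DMakeTranslation(0, 0, 100);  // nearest

        // Rotating the parent around the y axis now reveals the depth stacking.
        self.view.layer.transform = CATransform3DMakeRotation(M_PI / 8, 0, 1, 0);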

  • Hierarchy inheritance

    - by reito
    I have faced a problem. In my C++ hierarchy tree I have two branches for entities of different nature but with the same behavior, i.e. the same interface. I created hierarchy trees like the first in the image below. Now I want to work with the Item or Base classes independently of their nature (first or second), so I created one abstract branch for this use. My mind built the second scheme in the image below, but it doesn't work. The scheme that does work seems to be the third in the image below, which is bad logic, I think. Does anybody have ideas about this kind of hierarchy inheritance? How can I make it more logical and simpler to understand?

    Image

    Sorry for my English - the Russian internet didn't help :)

    Update: You asked me to be more explicit, and I will be. In my project (plugins for Adobe FrameMaker) I need to work with dialogs and GUI controls. In some places I work with WinAPI controls and in other places with FDK (internal FrameMaker) controls, but I want to work through the same interface. I can't use one base class and inherit the others from it, because the needed controls form a hierarchy tree (not one class). So I have one hierarchy tree for WinAPI controls, one for FDK, and one abstract tree for using any control. For example, there is an Edit control (WinEdit and FdkEdit realizations), a Button control (WinButton and FdkButton realizations) and a base entity, Control (WinControl and FdkControl realizations). For now I can link my classes within the realization trees (Win and Fdk) with inheritance between each of them (WinControl is the base class for WinButton and WinEdit; FdkControl is the base class for FdkButton and FdkEdit). And I can link them to the abstract classes (Control is the base class for WinControl and FdkControl; Edit is the base class for WinEdit and FdkEdit; Button is the base class for WinButton and FdkButton). But I can't link up my abstract tree - the compiler swears. In fact I have two hierarchy trees that I want to inherit from another one.

    Update: I have done this quest! :) I used virtual inheritance and got the scheme at http://img12.imageshack.us/img12/7782/99614779.png. The abstract tree has only purely abstract methods. All inheritance in the abstract tree is virtual. The links from the realization trees to the abstract tree are virtual. The image shows only one realization tree for simplicity. Thanks for the help!
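
    The update describes the working pattern; below is a minimal compilable sketch of it (my reconstruction, using names from the post): the abstract tree holds only pure virtual methods, and every link into the abstract tree is virtual, so the realization tree shares a single Control base. Only the Win tree is shown; the Fdk tree mirrors it.

        #include <iostream>

        // Abstract tree: pure interfaces, all inheritance from Control virtual.
        struct Control {
            virtual void Show() = 0;
            virtual ~Control() {}
        };
        struct Button : virtual Control {
            virtual void Click() = 0;
        };

        // Realization tree: also derives virtually from Control, so WinButton
        // ends up with exactly one Control subobject.
        struct WinControl : virtual Control {
            void Show() { std::cout << "WinControl::Show\n"; }
        };
        struct WinButton : WinControl, virtual Button {
            void Click() { std::cout << "WinButton::Click\n"; }
        };

        int main() {
            WinButton wb;
            Button& b = wb;   // use any button through the abstract interface
            b.Show();         // dispatches to WinControl::Show
            b.Click();        // dispatches to WinButton::Click
        }

    FdkButton would be declared the same way (deriving from FdkControl and virtual Button), and client code can hold a Control& or Button& without caring which realization it gets.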

  • How to replicate this button in CSS

    - by jasondavis
    I am trying to create a CSS theme-switcher button like the one below. The top image shows what I have so far and the bottom image shows what I am trying to create. I am not the best at this stuff; I am more of a back-end coder, so I could really use some help. I have a live demo of the code here: http://dabblet.com/gist/2230656

    Comparing what I have with the goal image, some differences:

    - I need to add a gradient
    - The border is not right on mine
    - The radius is a little off
    - Possibly some other stuff?

    Also, here is the code. It can be changed in any way to improve it; the naming and structure could surely be improved, but I can use any help I can get.

    HTML:

        <div class="switch-wrapper">
            <div class="switcher left selected">
                <span id="left">....</span>
            </div>
            <div class="switcher right">
                <span id="right">....</span>
            </div>
        </div>

    CSS:

        /* begin button styles */
        .switch-wrapper {
            width: 400px;
            margin: 220px;
        }
        .switcher {
            background: #507190;
            display: inline-block;
            max-width: 100%;
            box-shadow: 1px 1px 1px rgba(0,0,0,.3);
            position: relative;
        }
        #left, #right {
            width: 17px;
            height: 11px;
            overflow: hidden;
            position: absolute;
            top: 50%;
            left: 50%;
            margin-top: -5px;
            margin-left: -8px;
            font: 0/0 a;
        }
        #left {
            background-image: url(http://www.codedevelopr.com/assets/images/switcher.png);
            background-position: 0px 0px;
        }
        #right {
            background-image: url(http://www.codedevelopr.com/assets/images/switcher.png);
            background-position: -0px -19px;
        }
        .left, .right {
            width: 30px;
            height: 25px;
            border: 1px solid #3C5D7E;
        }
        .left {
            border-radius: 6px 0px 0px 6px;
        }
        .right {
            border-radius: 0 6px 6px 0;
            margin: 0 0 0 -6px;
        }
        .switcher:hover, .selected {
            background: #27394b;
            box-shadow: -1px 1px 0px rgba(255,255,255,.4), inset 0 4px 5px rgba(0,0,0,.6), inset 0 1px 2px rgba(0,0,0,.6);
        }
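
    For the gradient and border differences, a hedged starting point (the gradient colors are guesses picked around the existing #507190/#27394b palette, so tune them against the goal image):

        .switcher {
            background: #507190; /* fallback for browsers without gradients */
            background: linear-gradient(to bottom, #6487a8, #507190);
            border: 1px solid #3C5D7E;
        }
        .switcher:hover, .selected {
            background: #27394b;
            background: linear-gradient(to bottom, #1f2e3d, #27394b);
        }
        .left  { border-radius: 4px 0 0 4px; }
        .right {
            border-radius: 0 4px 4px 0;
            border-left: none; /* avoids the doubled border in the middle */
        }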

  • Draw rectangle-like objects on a bitmap

    - by _simon_
    I am performing OCR (optical character recognition) on a bunch of images. Images are grouped into different projects (tickets, credit cards, insurance cards, etc.). Each image represents an actual product (for instance, if we have images of credit cards, picture1.jpg is an image of my credit card, picture2.jpg is an image of your credit card... you get it). I have a settings.xml file, which contains the regions of the image where OCR should be performed. Example:

        <Project Name="Ticket1" TemplateImage="...somePath/templateTicket1.jpg">
            <Region Name="Prefix" NumericOnly="false" Rotate="0">
                <x>470</x>
                <y>395</y>
                <width>31</width>
                <height>36</height>
            </Region>
            <Region Name="Num1" NumericOnly="true" Rotate="0">
                <x>555</x>
                <y>402</y>
                <width>123</width>
                <height>35</height>
            </Region>
        </Project>
        <Project Name="CreditCard" TemplateImage="...somePath/templateCreditCard1.jpg">
            <Region Name="SerialNumber" NumericOnly="false" Rotate="90">
                <x>332</x>
                <y>12</y>
                <width>20</width>
                <height>98</height>
            </Region>
        </Project>

    I would like to set these parameters through a GUI (right now I just write them into the XML file). So first I load a template image for a project (an empty credit card). Then I would like to draw a rectangle around the text where OCR should be performed. I guess this isn't hard, but it would be great if I could also move and resize this rectangle object on the picture. All regions (rectangles) have to be displayed on the picture as well. Also, there will probably be a list of regions in a listview, so when you click a region in this listview, it should be marked on the picture in green, for example. Do you know of a library I could use, or a link with some tips on how to create such objects?
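
    If this is a WinForms project (an assumption; the question doesn't say), plain GDI+ may be enough, so no extra library is strictly needed. A minimal sketch of a PictureBox subclass that draws the regions over the template image, highlights the selected one in green, and lets the user drag it; resize handles would follow the same mouse-event pattern:

        using System.Collections.Generic;
        using System.Drawing;
        using System.Windows.Forms;

        public class RegionEditor : PictureBox
        {
            public List<Rectangle> Regions = new List<Rectangle>();
            public int Selected = -1;    // index into Regions, -1 = none
            private Point lastMouse;

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);         // draws the template image first
                for (int i = 0; i < Regions.Count; i++)
                    using (var pen = new Pen(i == Selected ? Color.Green : Color.Red, 2))
                        e.Graphics.DrawRectangle(pen, Regions[i]);
            }

            protected override void OnMouseDown(MouseEventArgs e)
            {
                base.OnMouseDown(e);
                Selected = Regions.FindIndex(r => r.Contains(e.Location));
                lastMouse = e.Location;
                Invalidate();            // repaint with the new selection color
            }

            protected override void OnMouseMove(MouseEventArgs e)
            {
                base.OnMouseMove(e);
                if (Selected < 0 || e.Button != MouseButtons.Left) return;
                Rectangle r = Regions[Selected];
                r.Offset(e.X - lastMouse.X, e.Y - lastMouse.Y);  // drag the region
                Regions[Selected] = r;
                lastMouse = e.Location;
                Invalidate();
            }
        }

    Hooking up the listview is then just setting Selected from the clicked item's index and calling Invalidate().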

  • driver.findElement() with iframe and elements without ID

    - by user1864657
    Java code:

        driver.switchTo().frame(0);
        WebElement elemText = driver.findElement(By.xpath("/html/body[contains(@class='forum')]"));
        //WebElement elemText = driver.findElement(By.xpath("//td[@id='cke_contents_vB_Editor_001_editor']/textarea"));
        elemText.sendKeys(message);
        elemText.submit();
        forumLink = driver.getCurrentUrl();

    HTML code:

        <td id="cke_contents_vB_Editor_001_editor" class="cke_contents" style="height:1726px" role="presentation">
            <iframe style="width:100%;height:100%" frameborder="0" title="Rich text editor, vB_Editor_001_editor, press ALT 0 for help." src="" tabindex="-1" allowtransparency="true">
                <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
                <html dir="ltr" lang="en" contenteditable="true">
                <head>
                    <title data-cke-title="Rich text editor, vB_Editor_001_editor, press ALT 0 for help.">Rich text editor, vB_Editor_001_editor, press ALT 0 for help.</title>
                    <base href="http://fairplay.garena.com/" data-cke-temp="1">
                    <link type="text/css" rel="stylesheet" href="http://fairplay.garena.com/clientscript/vbulletin_css/style00008l/editor_contents.css">
                    <style type="text/css" data-cke-temp="1">
                        form{border: 1px dotted #FF0000;padding: 2px;}
                        img.cke_hidden{background-image: url(http://fairplay.garena.com/clientscript/ckeditor/plugins/forms/images/hiddenfield.gif?t=B37D54V);background-position: center center;background-repeat: no-repeat;border: 1px solid #a9a9a9;width: 16px !important;height: 16px !important;}
                        img.cke_iframe{background-image: url(http://fairplay.garena.com/clientscript/ckeditor/plugins/iframe/images/placeholder.png?t=B37D54V);background-position: center center;background-repeat: no-repeat;border: 1px solid #a9a9a9;width: 80px;height: 80px;}
                        img.cke_anchor{background-image: url(http://fairplay.garena.com/clientscript/ckeplugins/vblink/images/anchor.gif?t=B37D54V);background-position: center center;background-repeat: no-repeat;border: 1px solid #a9a9a9;width: 18px !important;height: 18px !important;}
                        a.cke_anchor{background-image: url(http://fairplay.garena.com/clientscript/ckeplugins/vblink/images/anchor.gif?t=B37D54V);background-position: left center;background-repeat: no-repeat;border: 1px solid #a9a9a9;padding-left: 18px;}
                    </style>
                </head>
                <body class="forum" spellcheck="true">
                </body>
                </html>
            </iframe>
        </td>

    Image: http://s9.postimage.org/nwyvq3san/Screen_Shot038.jpg

    I can't find a way to get at elements that are inside an iframe and have no id. Can you help me?
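
    One way to reach the editor (a sketch, not a verified fix for this site): locate the iframe through its parent td (which does have an id), switch into it, and type into the editable body. Note that the XPath /html/body[contains(@class='forum')] is invalid anyway, since contains() takes two arguments:

        // Switch into the CKEditor iframe via its parent <td>.
        WebElement iframe = driver.findElement(
                By.cssSelector("td#cke_contents_vB_Editor_001_editor iframe"));
        driver.switchTo().frame(iframe);

        // The rich-text content area is the <body class="forum"> inside the iframe.
        WebElement editorBody = driver.findElement(By.cssSelector("body.forum"));
        editorBody.sendKeys(message);

        // Switch back before submitting or reading the outer page.
        driver.switchTo().defaultContent();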

  • jQuery: Incrementation for each set of elements in more than one div

    - by Jack
    I'm making a jQuery slideshow. It lists thumbnails that, when clicked, reveal the large image as an overlay. To match the thumbs up with the large images, I'm adding attributes to each thumbnail and large image. The attributes contain a number which matches each thumb to its corresponding large image. It works when one slideshow is present on a page, but I want it to work when more than one slideshow is present. Here's the code for adding attributes to thumbs and large images:

        thumbNo = 0;
        largeNo = 0;
        $(this + '.slideshow_content .thumbs img').each(function() {
            thumbNo++;
            $(this).attr('thumbimage', thumbNo);
            $(this).attr("title", "Enter image gallery");
        });
        $(this + '.slideshow_content .image_container .holder img').each(function() {
            largeNo++;
            $(this).addClass('largeImage' + largeNo);
        });

    This works. To make the incrementation work when there are two slideshows on a page, I thought I could stick this code in an each function:

        $('.slideshow').each(function() {
            thumbNo = 0;
            largeNo = 0;
            $(this + '.slideshow_content .thumbs img').each(function() {
                thumbNo++;
                $(this).attr('thumbimage', thumbNo);
                $(this).attr("title", "Enter image gallery");
            });
            $(this + '.slideshow_content .image_container .holder img').each(function() {
                largeNo++;
                $(this).addClass('largeImage' + largeNo);
            });
        });

    The problem with this is that the incrementing counter does not reset for the second slideshow div (.slideshow), so I end up with thumbs in the first .slideshow div being numbered 1, 2, 3 etc. and thumbs in the second .slideshow div being numbered 4, 5, 6 etc. How do I make the numbers in the second .slideshow div reset and start from 1 again? This is really hard to explain concisely; I hope someone gets the gist.
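
    For what it's worth, a sketch of a likely fix: $(this + '...') concatenates a DOM element into a selector string, so the lookups are never actually scoped to the current slideshow. Using $(this).find() scopes them properly, and declaring the counters with var gives each slideshow a fresh pair:

        $('.slideshow').each(function () {
            // Local counters: each slideshow starts numbering from 1 again.
            var thumbNo = 0;
            var largeNo = 0;
            $(this).find('.slideshow_content .thumbs img').each(function () {
                thumbNo++;
                $(this).attr('thumbimage', thumbNo);
                $(this).attr('title', 'Enter image gallery');
            });
            $(this).find('.slideshow_content .image_container .holder img').each(function () {
                largeNo++;
                $(this).addClass('largeImage' + largeNo);
            });
        });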

  • Custom CSS menu, "Active" tab remains on 'Home' not the actual page

    - by user1690254
    I created this custom CSS menu, but when switching tabs the "active" tab design remains on the 'Home' link in the menu rather than on the actual page I'm on. Any idea how I can fix this? Here's the code:

        .menu { margin:0 auto; padding:0; height:30px; width:100%; display:block; background:url('http://media.datahc.com/Affiliates/43817/Brands/Image/topmenuimages.png') repeat-x; }

        .menu li { padding:0; margin:0; list-style:none; display:inline; }

        .menu li a { float:left; padding-left:15px; display:block; color:rgb(255,255,255); text-decoration:none; font:12px Verdana, Arial, Helvetica, sans-serif; cursor:pointer; background:url('http://media.datahc.com/Affiliates/43817/Brands/Image/topmenuimages.png') 0px -30px no-repeat; }

        .menu li a span { line-height:30px; float:left; display:block; padding-right:15px; background:url('http://media.datahc.com/Affiliates/43817/Brands/Image/topmenuimages.png') 100% -30px no-repeat; }

        .menu li a:hover { background-position:0px -60px; color:rgb(255,255,255); }

        .menu li a:hover span { background-position:100% -60px; }

        .menu li a.active, .menu li a.active:hover { line-height:30px; font:12px Verdana, Arial, Helvetica, sans-serif; background:url('http://media.datahc.com/Affiliates/43817/Brands/Image/topmenuimages.png') 0px -90px no-repeat; color:rgb(255,255,255); }

        .menu li a.active span, .menu li a.active:hover span { background:url('http://media.datahc.com/Affiliates/43817/Brands/Image/topmenuimages.png') 100% -90px no-repeat; }

        <ul class="menu">
            <li><a href="http://caribbeantl.com/" class="active"><span>Home</span></a></li>
            <li><a href="http://caribbeantl.com/hotels/"><span>Testing post</span></a></li>
        </ul>
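
    Since class="active" is hard-coded on the Home link in the markup, it can never move by itself; the class has to be assigned per page, either server-side or at load time. A sketch of the client-side route (assumes jQuery is available on the page):

        $(function () {
            $('.menu li a').removeClass('active').each(function () {
                // Mark the link whose target matches the page we are on.
                if (this.href === window.location.href) {
                    $(this).addClass('active');
                }
            });
        });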

  • Is it thread-safe to read a form control's value (but not change it) from another thread without using Invoke/BeginInvoke

    - by goku_da_master
    I know you can read a GUI control from a worker thread without using Invoke/BeginInvoke, because my app is doing it now. The cross-thread exception is not being thrown, and my System.Timers.Timer thread is able to read GUI control values just fine (unlike this guy: can a worker thread read a control in the GUI?).

    Question 1: Given the cardinal rule of threads, should I be using Invoke/BeginInvoke to read form control values? And does this make it more thread-safe? The background to this question stems from a problem my app is having: it seems to randomly corrupt form controls that another thread is referencing (see question 2).

    Question 2: I have a second thread that needs to update form control values, so I Invoke/BeginInvoke to update those values. This same thread needs a reference to those controls so it can update them, and it holds a list of these controls (say, DataGridViewRow objects). Sometimes (not always) the DataGridViewRow reference gets "corrupt". By corrupt I mean the reference is still valid, but some of the DataGridViewRow properties are null (e.g. row.Cells). Is this caused by question 1, or can you give me any tips on why this might be happening? Here's some code (the last line has the problem):

        public partial class MyForm : Form
        {
            void Timer_Elapsed(object sender)
            {
                // we're on a new thread (this function gets called every few seconds)
                UpdateUiHelper updateUiHelper = new UpdateUiHelper(this);
                foreach (DataGridViewRow row in dataGridView1.Rows)
                {
                    object[] values = GetValuesFromDb();
                    updateUiHelper.UpdateRowValues(row, values[0]);
                }
                // .. do other work here
                updateUiHelper.UpdateUi();
            }
        }

        public class UpdateUiHelper
        {
            private readonly Form _form;
            private Dictionary<DataGridViewRow, object> _rows;
            private delegate void RowDelegate(DataGridViewRow row);
            private readonly object _lockObject = new object();

            public UpdateUiHelper(Form form)
            {
                _form = form;
                _rows = new Dictionary<DataGridViewRow, object>();
            }

            public void UpdateRowValues(DataGridViewRow row, object value)
            {
                if (_rows.ContainsKey(row))
                    _rows[row] = value;
                else
                {
                    lock (_lockObject)
                    {
                        _rows.Add(row, value);
                    }
                }
            }

            public void UpdateUi()
            {
                foreach (DataGridViewRow row in _rows.Keys)
                {
                    SetRowValueThreadSafe(row);
                }
            }

            private void SetRowValueThreadSafe(DataGridViewRow row)
            {
                if (_form.InvokeRequired)
                {
                    _form.Invoke(new RowDelegate(SetRowValueThreadSafe), new object[] { row });
                    return;
                }
                // now we're on the UI thread
                object newValue = _rows[row];
                row.Cells[0].Value = newValue; // randomly errors here with NullReferenceException, but row is never null!
            }
        }
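
    On question 1, a sketch of how a read could be marshaled to the UI thread in the same style as SetRowValueThreadSafe; GetCellValueThreadSafe is a hypothetical addition to UpdateUiHelper, not code from the post. Control.Invoke returns the delegate's return value, so reads can be marshaled just like writes (requires using System; for Func):

        private string GetCellValueThreadSafe(DataGridViewRow row)
        {
            if (_form.InvokeRequired)
            {
                // Re-enter this method on the UI thread and hand the result back.
                return (string)_form.Invoke(
                    new Func<DataGridViewRow, string>(GetCellValueThreadSafe),
                    new object[] { row });
            }
            return Convert.ToString(row.Cells[0].Value);
        }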

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11, and Autoinstall is one of them. If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch. Well, almost, as the concepts are similar, and it's not all that difficult. Just new.

    I wanted to have an AI server that I could use for demo purposes, on the train if need be. That answers the question of hardware requirements: portable. But let's start at the beginning.

    First, you need an OS image, of course. In the new world of Solaris 11, it is now called a repository. The original can be downloaded from the Solaris 11 page at Oracle. What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat (see the sketch below). MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page.
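
    A quick sketch of that combine-and-verify step; the exact part file names are an assumption here (check your actual downloads), and digest(1) computes the MD5 sum to compare against the published checksum:

        # cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b \
            > sol-11-1111-repo-full.iso
        # digest -a md5 sol-11-1111-repo-full.iso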
    With that, building the repository is quick and simple:

        # zfs create -o mountpoint=/export/repo rpool/ai/repo
        # zfs create rpool/ai/repo/sol11
        # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
        # rsync -aP /mnt/repo /export/repo/sol11
        # umount /mnt
        # pkgrepo rebuild -s /export/repo/sol11/repo
        # zfs snapshot rpool/ai/repo/sol11@fcs
        # pkgrepo info -s /export/repo/sol11/repo
        PUBLISHER PACKAGES STATUS UPDATED
        solaris   4292     online 2012-03-12T20:47:15.378639Z

    That's all there's to it. Let's make a snapshot, just to be on the safe side. You never know when one will come in handy. To use this repository, you could just add it as a file-based publisher:

        # pkg set-publisher -g file:///export/repo/sol11/repo solaris

    In case I'd want to access this repository through a (virtual) network, I'll now quickly activate the repository service:

        # svccfg -s application/pkg/server \
            setprop pkg/inst_root=/export/repo/sol11/repo
        # svccfg -s application/pkg/server setprop pkg/readonly=true
        # svcadm refresh application/pkg/server
        # svcadm enable application/pkg/server

    That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done. All of this, by the way, is nicely documented in the README file that's contained in the repository image.

    Of course, we already have updates to the original release. You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index. You can simply add these to your existing repository or create separate repositories for each SRU. The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3. With ZFS, you can also get both: a full repository with all updates and, at the same time, incremental ones up to each of the updates:

        # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
        # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
        # umount /mnt
        # pkgrepo rebuild -s /export/repo/sol11/repo
        # zfs snapshot rpool/ai/repo/sol11@sru4
        # zfs set snapdir=visible rpool/ai/repo/sol11
        # svcadm restart svc:/application/pkg/server:default

    The normal repository is now updated to SRU4. Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs. If you like, you can also create another repository service for each update, running on a separate port.

    But now let's continue with the AI server. Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this. Since I already have one active (for my SunRay installation), and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a zone. So, let's create one first:

        # zfs create -o mountpoint=/export/install rpool/ai/install
        # zfs create -o mountpoint=/zones rpool/zones
        # zonecfg -z ai-server
        zonecfg:ai-server> create
        create: Using system default template 'SYSdefault'
        zonecfg:ai-server> set zonepath=/zones/ai-server
        zonecfg:ai-server> add dataset
        zonecfg:ai-server:dataset> set name=rpool/ai/install
        zonecfg:ai-server:dataset> set alias=install
        zonecfg:ai-server:dataset> end
        zonecfg:ai-server> commit
        zonecfg:ai-server> exit
        # zoneadm -z ai-server install
        # zoneadm -z ai-server boot ; zlogin -C ai-server

    Give it a hostname and IP address at first boot, and there's the zone. As publisher for Solaris packages, it will be bound to the "system publisher" from the global zone. The /export/install filesystem, of course, is intended to be used by the AI server. Let's configure it now:

        # zlogin ai-server
        root@ai-server:~# pkg install install/installadm
        root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
            -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \
            -d /export/install/fcs -i 192.168.2.20 -c 3

    With that, the core AI server is already done. What happened here? First, I installed the AI server software. IPS makes that nice and easy. If necessary, it'll also pull in the required DHCP server and anything else that might be missing. Watch out for that DHCP server software: in Solaris 11, there are two different versions. There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC. The latter is the one we need for AI. The SMF service names of both are very similar. The "old" one is svc:/network/dhcp-server:default. The ISC server comes with several SMF services; we at least need svc:/network/dhcp/server:ipv4. The command "installadm create-service" creates the installation service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11. (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.) The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20. This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf. An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.

    Now we would be ready to register and install the first client. It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot. This makes it very clear that an AI server is really only a boot server; the true source of packages to install can be different. Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

    The configuration of a client is controlled by manifests and profiles. The manifest controls which packages are installed and how the filesystems are laid out. In that, it's very much like the old "rules.ok" file in Jumpstart. Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.
    Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one. Then modify that to our liking and give it back to the install server to use:

        root@ai-server:~# mkdir -p /export/install/configs/manifests
        root@ai-server:~# cd /export/install/configs/manifests
        root@ai-server:~# installadm export -n x86-fcs -m orig_default \
            -o orig_default.xml
        root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
        root@ai-server:~# vi s11-fcs.small.local.xml
        root@ai-server:~# more s11-fcs.small.local.xml
        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance name="S11 Small fcs local">
            <target>
              <logical>
                <zpool name="rpool" is_root="true">
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris"/>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <destination>
                <image>
                  <!-- Specify locales to install -->
                  <facet set="false">facet.locale.*</facet>
                  <facet set="true">facet.locale.de</facet>
                  <facet set="true">facet.locale.de_DE</facet>
                  <facet set="true">facet.locale.en</facet>
                  <facet set="true">facet.locale.en_US</facet>
                </image>
              </destination>
              <source>
                <publisher name="solaris">
                  <origin name="http://192.168.2.12/"/>
                </publisher>
              </source>
              <!-- By default the latest build available, in the specified IPS
                   repository, is installed. If another build is required, the
                   build number has to be appended to the 'entire' package in
                   the following form: <name>pkg:/[email protected]#</name> -->
              <software_data action="install">
                <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name>
                <name>pkg:/group/system/solaris-small-server</name>
              </software_data>
            </software>
          </ai_instance>
        </auto_install>
        root@ai-server:~# installadm create-manifest -n x86-fcs -d \
            -f ./s11-fcs.small.local.xml
        root@ai-server:~# installadm list -m -n x86-fcs
        Manifest            Status   Criteria
        --------            ------   --------
        S11 Small fcs local Default  None
        orig_default        Inactive None

    The major points in this new manifest are:

    - Install "solaris-small-server"
    - Install a few locales less than the default. I'm not that fluent in French or Japanese...
    - Use my own package service as publisher, running on IP address 192.168.2.12
    - Install the initial release of Solaris 11: pkg:/[email protected],5.11-0.175.0.0.0.2.0

    Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.
    The modular approach makes it easy to configure numerous clients later on:

        root@ai-server:~# mkdir -p /export/install/configs/profiles
        root@ai-server:~# cd /export/install/configs/profiles
        root@ai-server:~# sysconfig create-profile -o default.xml
        root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
        root@ai-server:~# cp default.xml user.xml
        root@ai-server:~# vi general.xml mars.xml user.xml
        root@ai-server:~# more general.xml mars.xml user.xml
        ::::::::::::::
        general.xml
        ::::::::::::::
        <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
        <service_bundle type="profile" name="sysconfig">
          <service version="1" type="service" name="system/timezone">
            <instance enabled="true" name="default">
              <property_group type="application" name="timezone">
                <propval type="astring" name="localtime" value="Europe/Berlin"/>
              </property_group>
            </instance>
          </service>
          <service version="1" type="service" name="system/environment">
            <instance enabled="true" name="init">
              <property_group type="application" name="environment">
                <propval type="astring" name="LANG" value="C"/>
              </property_group>
            </instance>
          </service>
          <service version="1" type="service" name="system/keymap">
            <instance enabled="true" name="default">
              <property_group type="system" name="keymap">
                <propval type="astring" name="layout" value="US-English"/>
              </property_group>
            </instance>
          </service>
          <service version="1" type="service" name="system/console-login">
            <instance enabled="true" name="default">
              <property_group type="application" name="ttymon">
                <propval type="astring" name="terminal_type" value="vt100"/>
              </property_group>
            </instance>
          </service>
          <service version="1" type="service" name="network/physical">
            <instance enabled="true" name="default">
              <property_group type="application" name="netcfg">
                <propval type="astring" name="active_ncp" value="DefaultFixed"/>
              </property_group>
            </instance>
          </service>
          <service version="1" type="service" name="system/name-service/switch">
            <property_group type="application" name="config">
              <propval type="astring" name="default" value="files"/>
              <propval type="astring" name="host" value="files dns"/>
              <propval type="astring" name="printer" value="user files"/>
            </property_group>
            <instance enabled="true" name="default"/>
          </service>
          <service version="1" type="service" name="system/name-service/cache">
            <instance enabled="true" name="default"/>
          </service>
          <service version="1" type="service" name="network/dns/client">
            <property_group type="application" name="config">
              <property type="net_address" name="nameserver">
                <net_address_list>
                  <value_node value="192.168.2.1"/>
                </net_address_list>
              </property>
            </property_group>
            <instance enabled="true" name="default"/>
          </service>
        </service_bundle>
        ::::::::::::::
        mars.xml
        ::::::::::::::
        <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
        <service_bundle type="profile" name="sysconfig">
          <service version="1" type="service" name="network/install">
            <instance enabled="true" name="default">
              <property_group type="application" name="install_ipv4_interface">
                <propval type="astring" name="address_type" value="static"/>
                <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/>
                <propval type="astring" name="name" value="net0/v4"/>
                <propval type="net_address_v4" name="default_route" value="192.168.2.1"/>
              </property_group>
              <property_group type="application" name="install_ipv6_interface">
                <propval type="astring" name="stateful" value="yes"/>
                <propval type="astring" name="stateless" value="yes"/>
                <propval type="astring" name="address_type" value="addrconf"/>
                <propval type="astring" name="name" value="net0/v6"/>
              </property_group>
            </instance>
          </service>
          <service version="1" type="service" name="system/identity">
            <instance enabled="true" name="node">
              <property_group type="application" name="config">
                <propval type="astring" name="nodename" value="mars"/>
              </property_group>
            </instance>
          </service>
        </service_bundle>
        ::::::::::::::
        user.xml
        ::::::::::::::
        <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
        <service_bundle type="profile" name="sysconfig">
          <service version="1" type="service" name="system/config-user">
            <instance enabled="true" name="default">
              <property_group type="application" name="root_account">
                <propval type="astring" name="login" value="root"/>
                <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
                <propval type="astring" name="type" value="role"/>
              </property_group>
              <property_group type="application" name="user_account">
                <propval type="astring" name="login" value="stefan"/>
                <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
                <propval type="astring" name="type" value="normal"/>
                <propval type="astring" name="description" value="Stefan Hinker"/>
                <propval type="count" name="uid" value="12345"/>
                <propval type="count" name="gid" value="10"/>
                <propval type="astring" name="shell" value="/usr/bin/bash"/>
                <propval type="astring" name="roles" value="root"/>
                <propval type="astring" name="profiles" value="System Administrator"/>
                <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
              </property_group>
            </instance>
          </service>
        </service_bundle>
        root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
        root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
        root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
            -c ipv4=192.168.2.100
        root@ai-server:~# installadm list -p
        Service Name Profile
        ------------ -------
        x86-fcs      general.xml
                     mars.xml
                     user.xml
        root@ai-server:~# installadm list -n x86-fcs -p
        Profile     Criteria
        -------     --------
        general.xml None
        mars.xml    ipv4 = 192.168.2.100
        user.xml    None

    Here's the idea behind these files:

    - "general.xml" contains settings valid for all my clients, stuff like DNS servers, for example, which in my case will always be the same.
    - "user.xml" only contains user definitions: a root password and a primary user. Both of these profiles will be valid for all clients (for now).
    - "mars.xml" defines network settings for an individual client. This profile is associated with an IP address. For this to work, I'll have to tweak the DHCP settings in the next step:

        root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
        root@ai-server:~# vi /etc/inet/dhcpd4.conf
        root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
        host 080027AA3DB1 {
          hardware ethernet 08:00:27:AA:3D:B1;
          fixed-address 192.168.2.100;
          filename "01080027AA3DB1";
        }

    This completes the client preparations. I manually added the IP address for mars to /etc/inet/dhcpd4.conf. This is needed for the "mars.xml" profile. Disabling arbitrary DHCP replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)

    Now, I of course want this installation to be completely hands-off. For this to work, I'll need to modify the grub boot menu for this client slightly. You can find it in /etc/netboot. "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.
    The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case. If you don't want to change this manually for every client, modify that template to your liking instead.

        root@ai-server:~# cd /etc/netboot
        root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
        root@ai-server:~# vi menu.lst.01080027AA3DB1
        root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
        1,2c1,2
        < default=1
        < timeout=10
        ---
        > default=0
        > timeout=30
        root@ai-server:~# more menu.lst.01080027AA3DB1
        default=1
        timeout=10
        min_mem64=0

        title Oracle Solaris 11 11/11 Text Installer and command line
            kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
            module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

        title Oracle Solaris 11 11/11 Automated Install
            kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
            module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

    Now just boot the client off the network using PXE boot. For my demo purposes, that's a client from VirtualBox, of course. That's all there's to it. And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) server (both version 9.2, the state of the art at the time of its creation).

    At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities.

    Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle product development team also provides these insights on the drivers for developing the WSRP Producer:

    *****

    - Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
    - Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
    - Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
    - The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.

    *****

    The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application as root.

    Differences between .NET Accelerator and WSRP Producer

    Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same whether applied to the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases, the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application.

    The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended

    Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

    Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding:

        log4net.Config.XmlConfigurator.Configure();

    IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :)

    Merge Web.Config

    Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
        <HTML>
        <HEAD>

    with

        <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    and

        </HEAD>
        <body>
        <form id="theForm" method="post" runat="server">

    with

        </asp:Content>
        <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    and finally

        </form>
        </body>
        </HTML>

    with

        </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.
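
    Putting those replacements together, a converted portlet page skeleton might look like the sketch below; the page, class and label names are hypothetical, and only the Master Page path and the wsrphead/Main placeholder IDs come from the steps above:

        <%@ Page Language="C#" MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master"
            AutoEventWireup="true" CodeBehind="Sample.aspx.cs" Inherits="MyApp.Sample" %>

        <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">
            <!-- former <HEAD> content: styles, scripts, meta tags -->
        </asp:Content>

        <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">
            <!-- former <body>/<form> content: the portlet markup and controls -->
            <asp:Label ID="lblGreeting" runat="server" Text="Hello from WSRP" />
        </asp:Content>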
Case Sensitivity

.NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. If making the markup lower case is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

<RewriterRule> <LookFor>(href=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule>

May need to be duplicated as:

<RewriterRule> <LookFor>(HREF=\"([^\"]+)</LookFor> <ChangeToAbsolute>true</ChangeToAbsolute> <ApplyTo>.axd,.css</ApplyTo> <MakeResource>true</MakeResource> </RewriterRule>

While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?

Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You

This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

Error 155: The name 'form1' does not exist in the current context (C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs, line 51, column 52, project legacywebsite.module)

Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced: controls become nested within other controls, and finding them requires a recursion that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

Becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method to recurse through the control collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

To this:

Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

The Master That Won't Step Aside

In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient route I took was to update it to take the place of the WSRP Master Page. The changes are highlighted below:

… <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

Moving On

In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

Migration Planning

Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator so that only the WSRP end point configured in the consumer changes. In some enterprises it may even be necessary to maintain the same WSDL end point, in which case the IIS configuration is where the updates occur. The downside of such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location

Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

Manually Creating a WSRP Producer Site

Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev

Tests:

1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port used by WSRP Producer

The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?

Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article

  • VPN in Ubuntu fails every time

    - by fazpas
I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the network manager to add the VPN connection and the settings are: Gateway: pptp.relakks.com Username: user Password: pwd IPv4 Settings: Automatic (VPN) Advanced: MSCHAP & MSCHAPv2 checked Use point-to-point encryption (security:default) Allow BSD data compression checked Allow deflate data compression checked Use TCP header compression checked The connection always fails; here is the syslog: Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'... Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064 Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3 Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received. Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded. Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0 Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1) Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found. Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1 Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0 Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request' Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established. Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request' Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply. Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
Jun 27 20:11:59 desktop kernel: [ 56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47 Jun 27 20:11:59 desktop kernel: [ 56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47 Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown) Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request' Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state) Jun 27 20:11:59 desktop pppd[2067]: Modem hangup Jun 27 20:11:59 desktop pppd[2067]: Connection terminated. Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1 Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1) Jun 27 20:11:59 desktop pppd[2067]: Exit. Can someone identify something in the syslog? I've been googling and reading about pptp but couldn't find anything about the error "read returned zero, peer has closed".

    Read the article


  • SBS2003 to SBS2011 Migration - Installation Error

    - by Shawn Gradwell
    Microsoft Small Business Server 2003 to 2011 Migration. I followed the Migration Guide from Microsoft and the source server had no errors when running the various tests prior to the migration. I have completed the destination server setup using the Answer File and the server is up and running. It all looks good, I can access Exchange and AD and the only problem is the error message when you log in stating that the setup did not complete and to check the logs. Because all looks good I am continuing the migration to the destination server. I also have to state that this client does not use Sharepoint at all. Do I have to redo everything? Herewith the logs: [4992] 121016.225454.5905: Task: Starting Add User or Group access VSS registry. [4992] 121016.225454.7645: TaskManagement: In TaskScheduler.RunTasks(): The "ConfigureSharePointVSSRegistryTask" Task threw an Exception during the Run() call:System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) [4992] 121016.225454.7655: Setup: An error was encountered on the TME thread: System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e) [4956] 121016.225455.0685: Setup: _UnhandledExceptionHandler: Setup encountered an error: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Reflection.TargetInvocationException: The TME thread failed (see the inner exception). ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. 
at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at System.Security.AccessControl.CommonObjectSecurity.ModifyAccess(AccessControlModification modification, AccessRule rule, Boolean& modified) at System.Security.AccessControl.CommonObjectSecurity.AddAccessRule(AccessRule rule) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.AddUsersToAccessRegistry(List`1 names) at Microsoft.WindowsServerSolutions.IWorker.Tasks.ConfigureSharePointVSSRegistryTask.Run(ITaskDataLink dl) at Microsoft.WindowsServerSolutions.TaskManagement.Data.Task.Run(ITaskDataLink dataLink) at Microsoft.WindowsServerSolutions.TaskManagement.TaskScheduler.RunTasks(String taskListId, String stateFileName) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter._RunTasks(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) --- End of inner exception stack trace --- at Microsoft.WindowsServerSolutions.Setup.SBSSetup.ProgressPagePresenter.TasksCompleted(Object sender, RunWorkerCompletedEventArgs e) --- End of inner exception stack trace --- at System.RuntimeMethodHandle._InvokeMethodFast(IRuntimeMethodInfo method, Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeType typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Delegate.DynamicInvokeImpl(Object[] args) at System.Windows.Forms.Control.InvokeMarshaledCallbackDo(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbackHelper(Object obj) at System.Threading.ExecutionContext.runTryCode(Object userData) at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Forms.Control.InvokeMarshaledCallback(ThreadMethodEntry tme) at System.Windows.Forms.Control.InvokeMarshaledCallbacks() at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at Microsoft.WindowsServerSolutions.Common.Wizards.Framework.WizardChainEngine.Launch() at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass._LaunchWizard() at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.RealMain(String[] args) at Microsoft.WindowsServerSolutions.Setup.SBSSetup.MainClass.Main(String[] args) [4956] 121016.225455.0865: Setup: Removed the password. 
[4956] 121016.225455.0905: Setup: Deleting scheduled task at path Microsoft\Windows\Windows Small Business Server 2011 Standard with name Setup [4956] 121016.225455.8055: Setup: Removed SBSSetup from the RunOnce.
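For context on the exception above: System.Security.Principal.NTAccount.Translate throws IdentityNotMappedException when a Windows account name cannot be resolved to a SID on the machine, which is what the ConfigureSharePointVSSRegistryTask appears to hit here. A minimal stand-alone illustration (the account name is a made-up placeholder, not one from the log):

using System;
using System.Security.Principal;

class TranslateDemo
{
    static void Main()
    {
        // A name that does not exist on the machine or domain (placeholder).
        var account = new NTAccount("NO-SUCH-DOMAIN", "no-such-user");
        try
        {
            // Throws IdentityNotMappedException when the name cannot be mapped to a SID.
            var sid = account.Translate(typeof(SecurityIdentifier));
            Console.WriteLine(sid);
        }
        catch (IdentityNotMappedException ex)
        {
            Console.WriteLine("Could not resolve the account: " + ex.Message);
        }
    }
}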

    Read the article

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time: almost all of the major service providers are using anycast. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that:

1. are not using anycast
2. are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side)
3. will respond to ICMP requests
4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
5. won't consider an automated request every 10 minutes to be abuse
6. are accessible from anywhere in the world
7. are not local to the ISP (consider this an investigation of a hostile party)

Thanks in advance. Edit: added #6 and #7 above.

More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these), and refuse to escalate the problem (even though 2 of these businesses have ISP-provided modems, and all of us have completely different routers/services running). I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and are only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, systems one hop away from the affected node, and systems completely outside the network. Unfortunately, all of the systems I have currently are either administered directly by myself, or by people who are biased enough to assist me. I need to have several systems included in the trace/log/graphs that are 100% not in the control of either myself or the ISP, so that the graphs have a stable/unbiased control group. These requirements are straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered.

In summary: I need to be able to show TCP/UDP/ICMP as 3 separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those 2 nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/OpenDNS/Yahoo/MSN/Facebook/etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA maybe?) set up for this type of testing. Thanks again.

    Read the article

  • Rosewill RSV-S5 and its transfer speeds

    - by DoomStone
I have just bought a Rosewill RSV-S5 and installed 5x 1.5 TB Western Digital Green disks in it. After that I created a RAID 5 across them all with the software that came with the hardware. Now the RAID itself works fine, but it is SLOW: I can only obtain a maximum of 25 MB/s, and if SABnzbd+ is downloading at 5 MB/s it has a hard time streaming a normal DivX (700 MB) movie. Is this normal or is there something wrong?

Edit: it should be able to handle 3 Gbps = 384 megabytes/second.

Edit 2: As you can see, I am only downloading at 3.76 MB/s and I'm trying to watch V s02e08 (720p), but it is completely unwatchable: I can watch 30 sec, and then it buffers for 20 sec.

Edit: Other information that might be required: I'm running Windows Server 2008 R2, optimized for program performance. Windows is installed on a 60 GB SSD. I have a 50 Mb/s internet connection and a 1 Gb/s LAN, all connected with Cat6 Ethernet cables. The MCE is using a Gigabyte EP35C-DS3R motherboard with 2 GB DDR2 RAM.

Edit 3: I have used chunk sizes of 128 KB.

Edit 4: I found this review on Newegg:

Pros: Enclosure for 5x2TB hard drive is fine. This is basically a rebranded San Digital TR5M-B product. For support Rosewill tells you to contact San Digital. No direct support from Silicon Image for the computer raid card.

Cons: Includes computer Silicon Image 3132 raid card, extremely slow raid 5 write (our tests ~10MB/s). Compare to regular internal local drive write 30-60MB/s. We basically dumped the Sil3132 card and replaced with High Point RocketRaid 622 card for extra $69.99. Note for RR622, turn off ECRC (end to end CRC check) for card to work on IBM xserver. What took 12hrs to copy now took 2-3hrs. San Digital realized the problem and has the newer model TR5M-BP TowerRaid Plus that comes with High Point RocketRaid 622 card. Rosewill should discontinue this product and go with TR5M-BP. Could not get Silicon Image raid management software to work with complicated 2008R2 server with 10 NICs, application doesn't know how to talk to localhost port with all those NICs. No updates from Silicon Image and support from San Digital ignored. Gave up on Sil3132 card. Save yourself from a lot of headaches, get the RR622 card too if you are going to buy this product.

Other Thoughts: The newer model is TR5M-BP TowerRaid Plus, comes with High Point RocketRaid 622 raid card for the PC instead of Silicon Image Sil3132. According to San Digital, raid 5 performance for Sil3132 is read 80MB/s, write 19MB/s, and for RR622 read 154MB/s, write 149MB/s. Our RR622 tests gave (8TB raid 5) write ~80-110MB/s; copying a 40GB file took 8mins.

So I have now ordered a HighPoint RocketRAID 622 2P ext SATA III and hope that it will solve my problems.

    Read the article

  • AngularJS databinding

    - by user3652865
    How can I add multiple values to one object in an Array. I am having Environment and Cluster, I am able to assign multiple clusters to one environment. Now I want to add application name to this environment and cluster pair. I am having page called "Add Application". Here I am using select menu to for environment and Cluster. My first question is, when I select environment then want to show only those clusters which are assigned to that environment name. And assign application name to that pair. Also should be able to edit the Application field. I am using environmentServices and clusterServices to store updated data. link of JSFiddle: http://jsfiddle.net/avinashMaddy/J2KLK/5/ Please if someone can help me in this. Below is my code: <div class="maincontent" ng-controller="manageApplicationController"> <div class="article"> <form> <section> <!-- Environment --> <div class="col-md-4"> <label>Environment:</label> <select ng-model="newApp.selectedEnvironment" class="form-control" ng-options="environment.name for environment in environments"> <option value='' disabled style='display:none;'> Select Environment </option> </select> <span> <select ng-switch-when="true" disabled ng-model="newApp.selectedEnvironment" class="form-control" ng-options="environment.name for environment in environments"> <option value='' disabled style='display:none;'> Select Environment </option> </select> </span> </div> <!-- Cluster --> <div class="col-md-4"> <label>Cluster:</label> <span ng-switch on="newApp.showCancel"> <select ng-switch-default ng-model="newApp.selectedCluster" class="form-control" ng-options="cluster for cluster in clusters"> <option value='' disabled style='display:none;'> Select Environment </option> </select> <select ng-switch-when="true" disabled ng-model="newApp.selectedCluster" class="form-control" ng-options="cluster for cluster in clusters"> <option value='' disabled style='display:none;'> Select Environment </option> </select> </span> </div> <!-- Application Name --> <div class="col-md-4"> <label>Application Name:</label> <input type="text" class="form-control" name="applicationName" placeholder="Application" ng-model="app.name" required> <br/> <input type="hidden" ng-model="app.id" /> </div> </section> <!-- submit button --> <section class="col-md-12"> <button type="button" class="btn btn-default pull-right" ng-click="saveNewApplicatons()">Save</button> </section> </form> </div> <!-- table --> <div class="article"> <table class="table table-bordered table-striped"> <tr> <th colspan="6"> <div class="pull-left">Cluster Info</div> </th> </tr> <tr> <th>Environment</th> <th>Cluster</th> <th>Application</th> <th>Edit</th> <th>Header Ifo</th> </tr> <tr ng-repeat="app in applications"> <td>{{app.environment}}</td> <td>{{app.cluster}}</td> <td>{{app.name}}</td> <td> <a href="" ng-click="edit(app.id)" title="Edit">edit</span></a> | <a href="" ng-click="remove(app.id)" title="Delete">delete</a> </td> <td> <!-- Add template --> <script type="text/ng-template" id="addHederInfo.html"> <div class="modal-header"> <h3>Add Header Info</h3> </div> <div class="modal-body"> <input type="text" class="form-control" name="eName" placeholder="Header Key" ng-model="$parent.header.key"> <br/> <input type="text" class="form-control" name="eName" placeholder="Header Value" ng-model="$parent.header.value"> <br /> <input type="hidden" ng-model="header.id" /> <section> <div class="pull-right"> <button class="btn btn-primary" ng-click="saveHeader()">Add</button> <button class="btn btn-warning" 
ng-click="cancel()">Close</button> </div> </section> </div> <div class="modal-footer"> <h3>Existing Header Info for </h3> <table class="table table-bordered table-striped"> <tr> <th>Header Key</th> <th>Header Vlaue</th> </tr> <tr ng-repeat="header in headers"> <td>{{header.key}}</td> <td>{{header.value}}</td> </tr> </table> </div> </script> <!-- /Add template --> <script type="text/ng-template" id="editHederInfo.html"> <div class="modal-header"> <h3>Edit Header Info</h3> </div> <div class="modal-body"> <input type="text" class="form-control" name="eName" placeholder="Header Key" ng-model="$parent.header.key"> <br/> <input type="text" class="form-control" name="eName" placeholder="Header Value" ng-model="$parent.header.value"> <br /> <input type="hidden" ng-model="header.id" /> <section> <div class="pull-right"> <button class="btn btn-primary" ng-click="saveHeader()">Update</button> <button class="btn btn-warning" ng-click="cancel()">Close</button> </div> </section> </div> <div class="modal-footer"> <h3>Existing Header Info for</h3> <table class="table table-bordered table-striped"> <tr> <th>Header Key</th> <th>Header Vlaue</th> <th>Edit</th> </tr> <tr ng-repeat="header in headers"> <td>{{header.key}}</td> <td>{{header.value}}</td> <td> <a href="" ng-click="editHeader(header.id)" title="Edit"><span class="glyphicon glyphicon-edit" ></span></a> | <a href="" ng-click="removeHeader(header.id)" title="Edit"><span class="glyphicon glyphicon-trash"></span></a> </td> </tr> </table> </div> </script> <!-- Add template --> <!-- /Add template --> <a href="" ng-click="addInfo()">Add</a> | <a href="" ng-click="editInfo()">Edit</a> </td> </tr> </table> </div> </div> Controller.js: var apsApp = angular.module('apsApp', []); apsApp.service('clusterService', function(){ var clusters=[]; //simply returns the environment list this.list = function () { return clusters; }; }); apsApp.service('environmentService', function(){ var environments=[ {name :'DEV',}, {name:'PROD',}, {name:'QA',}, {name:'Linux_Dev',} ]; //simply returns the environment list this.list = function () { return environments; }; apsApp.controller('manageApplicationController', function ($scope, environmentService, clusterService) { var uid = 0; $scope.environments= environmentService.list(); $scope.clusters= clusterService.list(); $scope.newApp = {}; $scope.newApp.selectedEnvironment = $scope.environments[0]; $scope.newApp.selectedCluster = $scope.clusters[0]; $scope.newApp.buttonLabel = 'Save'; $scope.newApp.showCancel = false; /*$scope.applications=[ {'name': 'Enterprice App Store' }, {'name': 'UsageGateway'}, {'name': 'Click 2 Fill'}, {'name': 'ATT SmartWiFi'} ];*/ //add new application $scope.saveNewApplicatons = function() { if ($scope.select.id == undefined) { //if this is new application, add it in applications array $scope.clusters.push({ id: uid++, cluster: $scope.newApp.cluster, environment: $scope.newApp.selectedEnvironment }); } else { $scope.clusters[$scope.select.id].cluster = $scope.select.cluster; $scope.newApp.id = undefined; $scope.newApp.buttonLabel = 'Save Cluster'; $scope.newApp.showCancel = false; }; //clear the add appplicaitons form $scope.newApp.selectedEnvironment = $scope.environments[0]; }; //delete application $scope.remove = function (id) { //search app with given id and delete it for (i in $scope.clusters) { if ($scope.clusters[i].id == id) { confirm("This Cluster will get deleted permanently"); $scope.clusters.splice(i, 1); $scope.clust = {}; } } }; $scope.cancelUpdate = function () { $scope.newApp.buttonLabel 
= 'Save Cluster'; $scope.newApp.showCancel = false; $scope.newApp.id = undefined; $scope.newApp.cluster = ""; $scope.newApp.selectedEnvironment = $scope.environments[0]; }; });

    Read the article

  • IIS7 Failure after installing Advanced Logging

    - by Guy Harwood
I came across a nasty issue when I installed the Advanced Logging feature for IIS7 via the Web Platform Installer on my Windows 2008 Server. Basically, after installation and reboot none of my sites were working and all returned 503 – Service Unavailable. Snooping around in the Event Viewer I found the following error reported by the W3SVC…

The Module DLL C:\Program Files\IIS\Advanced Logging\AdvancedLoggingModule.dll failed to load. The data is the error.

Even though the DLLs are there, IIS is not picking them up. I managed to find a fix via Google that involves editing the applicationHost.config file in the C:\Windows\System32\inetsrv\config\ directory.

1. Copy AdvancedLoggingModule.dll and ClientLoggingHandler.dll to %windir%\system32 (C:\windows\system32 on a default setup).
2. Locate the file C:\Windows\System32\inetsrv\config\applicationHost.config and make a backup, then open it in a text editor (I recommend Notepad++).
3. Search for the following 2 lines (mine are located at line 570)…

<add name="ClientLoggingHandler" image="%ProgramFiles%\IIS\Advanced Logging\ClientLoggingHandler.dll" />
<add name="AdvancedLoggingModule" image="%ProgramFiles%\IIS\Advanced Logging\AdvancedLoggingModule.dll" />

…and alter them to…

<add name="ClientLoggingHandler" image="%windir%\system32\ClientLoggingHandler.dll" />
<add name="AdvancedLoggingModule" image="%windir%\system32\AdvancedLoggingModule.dll" />

4. Open a command prompt and run iisreset.
5. All sites should now be working.

    Read the article

  • June 26th Links: ASP.NET, ASP.NET MVC, .NET and NuGet

    - by ScottGu
Here is the latest in my link-listing series. Also check out my Best of 2010 Summary for links to 100+ other posts I've done in the last year. [I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

ASP.NET

Introducing new ASP.NET Universal Providers: Great post from Scott Hanselman on the new System.Web.Providers we are working on. This release delivers new ASP.NET Membership, Role Management, Session, and Profile providers that work with SQL Server, SQL CE and SQL Azure.

CSS Sprites and the ASP.NET Sprite and Image Optimization Library: Great post from Scott Mitchell that talks about a free library for ASP.NET that you can use to optimize your CSS and images to reduce HTTP requests and speed up your site.

Better HTML5 Support for the VS 2010 Editor: Another great post from Scott Hanselman on an update several people on my team did that enables richer HTML5 editing support within Visual Studio 2010.

Install the Ajax Control Toolkit from NuGet: Nice post by Stephen Walther on how you can now use NuGet to install the Ajax Control Toolkit within your applications. This makes it much easier to reference and use.

May 2011 Release of the Ajax Control Toolkit: Another great post from Stephen Walther that talks about the May release of the Ajax Control Toolkit. It includes a bunch of nice enhancements and fixes.

SassAndCoffee 0.9 Released: Paul Betts blogs about the latest release of his SassAndCoffee extension (available via NuGet). It enables you to easily use Sass and CoffeeScript within your ASP.NET applications (both MVC and WebForms).

ASP.NET MVC

ASP.NET MVC Mini-Profiler: The folks at StackOverflow.com (a great site built with ASP.NET MVC) have released a nice (free) profiler that enables you to easily profile your ASP.NET MVC 3 sites and tune them for performance.

Globalization, Internationalization and Localization in ASP.NET MVC 3: Great post from Scott Hanselman on how to enable internationalization, globalization and localization support within your ASP.NET MVC 3 and jQuery solutions.

Precompile your MVC Razor Views: Great post from David Ebbo that discusses a new Razor Generator tool that enables you to pre-compile your Razor view templates as assemblies – which enables a bunch of cool scenarios.

Unit Testing Razor Views: Nice post from David Ebbo that shows how to use his new Razor Generator to enable unit testing of Razor view templates with ASP.NET MVC.

Bin Deploying ASP.NET MVC 3: Nice post by Phil Haack that covers a cool feature added to VS 2010 SP1 that makes it really easy to \bin deploy ASP.NET MVC and Razor within your application. This enables you to easily deploy the app to servers that don't have ASP.NET MVC 3 installed.

.NET

Table Splitting with EF 4.1 Code First: Great post from Morteza Manavi that discusses how to split up a single database table across multiple EF entity classes. This shows off some of the power behind EF 4.1 and is very useful when working with legacy database schemas.

Choosing the Right Collection Class: Nice post from James Michael Hare that talks about the different collection class options available within .NET. A nice overview for people who haven't looked at all of the support now built into the framework.

Little Wonders: Empty(), DefaultIfEmpty() and Count() helper methods: Another in James Michael Hare's excellent series on .NET/C# "Little Wonders". This post covers some of the great helper methods now built into .NET that make coding even easier.
NuGet

NuGet 1.4 Released: Learn all about the latest release of NuGet – which includes a bunch of cool new capabilities. It takes only seconds to update to it – go for it!

NuGet in Depth: Nice presentation from Scott Hanselman all about NuGet and some of the investments we are making to enable a better open source ecosystem within .NET.

NuGet for the Enterprise – NuGet in a Continuous Integration Automated Build System: Great post from Scott Hanselman on how to integrate NuGet within enterprise build environments and enable it with CI solutions.

Hope this helps,

Scott

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

1. Analyzing the requirements to find the Entry Points and their signatures
2. Designing the functionality to be executed when those Entry Points get triggered
3. Implementing the functionality according to the design, aka coding

I presume you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it´s “design”, not “coding”.

And here is what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it?

What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development.

To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design?

An example of functional design

Let´s assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

C:\>deduplicate "a, b, a, c, d, b, e, c, a"
a, b, c, d, e

…or by clicking on a GUI button. This leads to the Entry Point function getting called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process:

1. Transform the input data from the user into a request.
2. Call the request handler.
3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally:

1. Accept input.
2. Transform input into output.
3. Present output.

This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though:

1. Accept string list from command line
2. De-duplicate
3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var output = Deduplicate(input);
    Present_deduplicated_string_list(output);
}

Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.

private static string Accept_string_list(string[] args)
{
    return args[0];
}

private static void Present_deduplicated_string_list(string[] output)
{
    var line = string.Join(", ", output);
    Console.WriteLine(line);
}

Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion:

De-duplicate
1. Parse the input string into a true list of strings.
2. Register each string in a dictionary/map/set. That way duplicates get cast away.
3. Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain.
But now after this refinement the implementation of each step is easy again:

private static string[] Parse_string_list(string input)
{
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
}

private static Dictionary<string, object> Compile_unique_strings(string[] strings)
{
    return strings.Aggregate(
        new Dictionary<string, object>(),
        (agg, s) => { agg[s] = null; return agg; });
}

private static string[] Serialize_unique_strings(Dictionary<string, object> dict)
{
    return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var strings = Parse_string_list(input);
    var dict = Compile_unique_strings(strings);
    var output = Serialize_unique_strings(dict);
    Present_deduplicated_string_list(output);
}

I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design:

1. Accept string list from command line
2. Parse the input string into a true list of strings.
3. Register each string in a dictionary/map/set. That way duplicates get cast away.
4. Transform the data structure into a list of unique strings.
5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.

- Get data from user
- Output result to user
- Transform data
- Parse data
- Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

1. Get data from user
2. Parse data
3. Transform data
4. Map result for output
5. Output result to user

That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement, first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away.
For more information on this approach, which I call “Informed TDD”, read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

So far so good. A simple ordered list of processing steps will do to start with functional design. However, such a list does not scale. Processing is not always so simple that it can be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition, functional design using numbered lists lacks data. It´s not visible what the input, output, and state of the processing steps are. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is about detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that´s also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks, functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer", on the other hand, signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input.

If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's going to arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel.

Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream and producing output for some downstream. That makes functional units very easy to test - at least as long as they don't depend on state or resources.

I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole, focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] [diagram in the original post]

The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
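Before looking at the contract between the two cells, here is how mutual obliviousness might look in code. This is a sketch of my own - the names Tokenizer and WordCounter are invented for illustration, not taken from the article - anticipating the event-based translation shown further below:

using System;

// Two functional units that know nothing of each other.
class Tokenizer
{
    // Output port as an event: this unit has no idea who consumes its words.
    public event Action<string> OnWord = delegate { };

    public void Process(string text)
    {
        foreach (var word in text.Split(' '))
            OnWord(word);
    }
}

class WordCounter
{
    public int Count { get; private set; }

    // Input port as a plain method: this unit has no idea where words come from.
    public void CountWord(string word) => Count++;
}

class Wiring
{
    static void Main()
    {
        var tokenize = new Tokenizer();
        var count = new WordCounter();

        // "Control" exists only here, in the whole - not in the parts.
        tokenize.OnWord += count.CountWord;
        tokenize.Process("the quick brown fox");
        Console.WriteLine(count.Count); // 4
    }
}

Neither class references the other; only the wiring code knows the overall flow.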
Nerve cell and muscle cell just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from - neither cell cares. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment.

My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"?

So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology: [diagram in the original post]

Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO.

Let's be clear about one thing: There is no dependency injection in nature. For all of an organism's complexity, no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input: [diagram in the original post]

Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words".

In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none, one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return - because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord) {
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection, because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature:

class Filter {
    public void filter_stop_words(
        string word) {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: [diagram in the original post]

The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines, and here they are. But note the asterisk! It's not outside the brackets but inside.
That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not… I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls. [The code figure from the original post is not reproduced here; a sketch follows at the end of this section.] It's easy to understand: it clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To get feedback quickly I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step: [diagram in the original post]

Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on (again, see the sketch below). Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.

Compare this to Robert C. Martin's solution:[4] [code figure in the original post] How does his solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… is a difference in LOC that important as long as it's in the same ballpark? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
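Since the code figures from the original post don't reproduce here, the following is a reconstruction of what the two increments might look like. The function names mirror the design above; the bodies are my assumptions, not the article's original code.

using System;
using System.Collections.Generic;

static class WordWrapper
{
    public static string WordWrap(string text, int maxLineLength)
    {
        var words = Extract_words(text);
        var fitting = Split_long_words(words, maxLineLength); // increment 2, "phased in"
        var lines = Assemble_lines(fitting, maxLineLength);
        return Reformat(lines);
    }

    static string[] Extract_words(string text) =>
        text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

    // Increment 2, trivial first version: just pass the words on unchanged.
    static string[] Split_long_words(string[] words, int maxLineLength) => words;

    static string[] Assemble_lines(string[] words, int maxLineLength)
    {
        var lines = new List<string>();
        var line = "";
        foreach (var word in words)
        {
            if (line.Length == 0)
                line = word;
            else if (line.Length + 1 + word.Length <= maxLineLength)
                line += " " + word;
            else
            {
                lines.Add(line);
                line = word;
            }
        }
        if (line.Length > 0) lines.Add(line);
        return lines.ToArray();
    }

    static string Reformat(string[] lines) => string.Join("\n", lines);
}

Whether the original used exactly these signatures I can't tell from the text. The point is the shape: WordWrap() stays a plain sequence of calls, and the long-word step slots in between its neighbors without either of them being touched.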
What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks a large problem down into smaller problems. And by following the PoMO, the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you'll find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no-brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".


  • Updated Security Baseline (7u45) impacts Java 7u40 and before with High Security settings

    - by costlow
The Java Security Baseline has been increased from 7u25 to 7u45. For versions of Java below 7u45, this means unsigned Java applets or Java applets that depend on JavaScript LiveConnect calls will be blocked when using the High Security setting in the Java Control Panel. This issue only affects Applets and Web Start applications. It does not affect other types of Java applications.

The Short Answer

Upgrading to Java 7 update 45 automatically fixes this and is strongly recommended.

The More Detailed Answer

There are two items involved, as described on the deployment flowchart:

- The Security Baseline – a dynamically updated attribute that checks to see which Java version contains the most recent security patches.
- The Security Slider – the user-controlled setting of when to prompt/run/block applets.

The Security Baseline

Java clients periodically check in to understand what version contains the most recent security patches. Versions are released in between that contain bug fixes. For example:

- 7u25 (July 2013) was the previous secure baseline.
- 7u40 contained bug fixes. Because it did not contain security patches, users were not required to upgrade and were welcome to remain on 7u25.
- When 7u45 was released (October 2013), this critical patch update contained security patches and raised the secure baseline. Users are required to upgrade from earlier versions.

For users that are not regularly connected to the internet, there is a built-in Expiration Date. Because of the pre-established quarterly critical patch updates, we are able to determine an approximate date of the next version. A critical patch released in July will have its successor released, at latest, in July + 3 months: October.

The Security Slider

The security slider is located within the Java Control Panel and determines which Applets & Web Start applications will prompt, which will run, and which will be blocked. One of the questions used to determine prompt/run/block is, "At or Above the Security Baseline."

The Combination

JavaScript calls made from LiveConnect do not reside within signed JAR files, so they are considered to be unsigned code. This is correct within networked systems even if the domain uses HTTPS, because signed JAR files represent signed "data at rest", whereas TLS (often called SSL) stands for "Transport Layer Security" and secures the communication channel, not the contents/code within the channel.

The resulting flow for users who click "update later" is:

1. Is the browser plug-in registered and allowed to run? Yes.
2. Does a rule exist for this RIA? No rules apply.
3. Does the RIA have a valid signature? Yes, and not revoked.
4. Which security prompt is needed? JRE is below the baseline. This is because 7u45 is the baseline and the user clicked "update later."

Under the default High setting, unsigned code is set to "Don't Run", so users see: [screenshot in the original post]

Additional Notes

End users can control their own security slider within the Control Panel. System administrators can customize the security slider during automated installations. As a reminder, in the future, Java 7u51 (January 2014) will block unsigned and self-signed Applets & Web Start applications by default.
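To summarize that decision flow in code form - purely illustrative pseudo-logic, written in C# for consistency with the rest of this page, and not Oracle's actual plug-in implementation:

static string DecideAppletAction(
    bool pluginAllowed,      // browser plug-in registered and allowed to run?
    bool ruleExists,         // deployment rule exists for this RIA?
    bool validSignature,     // signed with a valid, unrevoked certificate?
    bool atOrAboveBaseline)  // installed JRE at or above the security baseline?
{
    if (!pluginAllowed) return "Blocked: plug-in not allowed to run";
    if (ruleExists) return "Follow the deployment rule";
    if (!validSignature) return "Unsigned code: \"Don't Run\" under the High setting";
    if (!atOrAboveBaseline) return "Prompt: JRE is below the security baseline";
    return "Run";
}

A 7u40 user who clicked "update later" falls through to the baseline check, which is why the prompt appears even for validly signed applets.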

