Search Results

Search found 25885 results on 1036 pages for 'claims based identity'.


  • How can I structure my MustacheJS template to add dynamic classes based on the values from a JSON file?

    - by JGallardo
    OBJECTIVE To build an app that allows the user to search for locations. CURRENT STATE At the moment the locations listed are few, so they are just all presented when landing on the "dealers" page. BACKGROUND Previously there were only about 50 showrooms carrying a product we sell, so a static HTML page was fine, but the page size grew to about 1500 lines of code after doing this. We have gotten more dealers and need a scalable solution so that we can add many more fast. In other projects, I have previously used MustacheJS to load in values from a JSON file. I know that ideally this will be an AJAX application. Perhaps I might be better off with a database here? Below is what I have in mind so far, and it "works" up to a certain point, but seems not to be anywhere near the most sustainable solution that can be efficiently scaled. HTML <a id="{{state}}"></a> <div> <h4>{{dealer}} : {{city}}, {{state}} {{l_type}}</h4> <div class="{{icon_class}}"> <ul> <li><i class="icon-map-marker"></i></li> <li><i class="icon-phone"></i></li> <li><i class="icon-print"></i></li> </ul> </div> <div class="listingInfo"> <p>{{street}} <br>{{suite}}<br> {{city}}, {{state}} {{zip}}<br> Phone: {{phone}}<br> {{toll_free}}<br> {{fax}} </p> </div> </div> <hr> JSON { "dealers" : [ { "dealer":"Benco Dental", "City":"Denver", "state":"CO", "zip":"80112", "l_type":"Showroom", "icon_class":"listingIcons_3la", "phone":"(303) 790-1421", "toll_free":null, "fax":"(303) 790-1421" }, { "dealer":"Burkhardt Dental Supply", "City":"Portland", "state":"OR", "zip":"97220", "l_type":"Showroom", "icon_class":"listingIcons", "phone":" (503) 252-9777", "toll_free":"(800) 367-3030", "fax":"(866) 408-3488" } ]} CHALLENGES The CSS class wrapping the ul will vary based on how many fields there are. In this case there are 3, so the class is "listingIcons_3la". The "toll free" number section should only show up if, in fact, there is a toll free number. The fax number should only show up if there is a value for a fax number.
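
    A minimal sketch of how the optional fields and the varying class could be handled, assuming mustache.js is loaded, the JSON above is available as a variable named data, and a #dealers container exists (those names and the computed icon count are illustrative assumptions). Mustache only renders a {{#name}}...{{/name}} section when the value is truthy, so toll_free and fax drop out automatically when they are null; note that the JSON's "City" key would also have to match the template's {{city}}.

        <script id="dealer-template" type="text/template">
          <div>
            <h4>{{dealer}} : {{city}}, {{state}} {{l_type}}</h4>
            <div class="{{icon_class}}">
              <p>{{street}}<br>{{city}}, {{state}} {{zip}}<br>
                 Phone: {{phone}}
                 {{#toll_free}}<br>Toll free: {{toll_free}}{{/toll_free}}
                 {{#fax}}<br>Fax: {{fax}}{{/fax}}
              </p>
            </div>
          </div>
        </script>
        <script>
          var template = document.getElementById('dealer-template').innerHTML;
          var html = data.dealers.map(function (d) {
            // Count the populated contact fields to pick the icon class, e.g. "listingIcons_3la".
            var icons = [d.phone, d.toll_free, d.fax].filter(function (v) { return v; }).length;
            d.icon_class = 'listingIcons_' + icons + 'la';
            return Mustache.render(template, d);
          }).join('');
          document.getElementById('dealers').innerHTML = html;
        </script>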

    Read the article

  • How do you select form elements in jQuery based upon an HTML table?

    - by Swoop
    I am working on some ASP.NET web forms which involves some dynamic generation, and I need to add some onClick helpers on the client side. I have a basic outline of something working, except for one huge problem. There are multiple HTML tables, each generated by a different ASP.NET web control. Each table can contain overlapping field names, which is causing a problem with my JQuery click event handlers. The click event handler is linking to unintended form fields in addition to the intended form field. I have provided a simplified sample version of the code below. This code is trying to set the value of textbox box1 when a particular radiobutton is selected in the table with id=thing1. Obviously, the jquery code will be triggered for the form fields in both tables. The tables are dynamically added to the webpage based upon different conditions. It is possible that no tables will be loaded, only 1 table, or both tables might load. In the future, other tables could be added. Each table comes from a different .net web control. Other than renaming the form fields to make sure they are unique across all user controls, is there a way to have JQuery act only on the intended form fields? In other words, could the table ID be incorporated into the JQuery code in a manner that does not become a nightmare to maintain later? <script> $(document).ready(function() { $("[id$=radio1_0]").click(function() { $("[id$=box1]").attr("value", ""); }); $("[id$=radio1_1]").click(function() { $("[id$=box1]").attr("value", "N/A"); }); </script> <table id="thing1"> <tr><td> <radiobuttonlist id="radio1"/> <listitem>yes</listitem> <listitem>no</listitem> </td></tr> <tr><td> <textbox id="box1"/> </td></tr> </table> <table id="thing2"> <tr><td> <radiobuttonlist id="radio1"/> <listitem>yes</listitem> <listitem>no</listitem> </td></tr> <tr><td> <textbox id="box1"/> </tr></td> </table>
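
    One common way to keep handlers from leaking across tables is to scope every selector to the table's container instead of matching ids globally; a minimal sketch against the markup above (still relying on the [id$=...] ending-match for the ASP.NET-generated ids):

        $(document).ready(function () {
          var $thing1 = $("#thing1");                      // only this table's controls
          $thing1.find("[id$=radio1_0]").click(function () {
            $thing1.find("[id$=box1]").val("");
          });
          $thing1.find("[id$=radio1_1]").click(function () {
            $thing1.find("[id$=box1]").val("N/A");
          });
        });

    If each web control emits its own script with its own container id baked into the selector, the pattern stays maintainable as more tables are added.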

    Read the article

  • PHP Based session variable not retaining value. Works on localhost, but not on server.

    - by Foo
    I've been trying to debug this problem for many hours, but to no avail. I've been using PHP for many years, and got back into it after long hiatus, so I'm still a bit rusty. Anyways, my $_SESSION vars are not retaining their value for some reason that I can't figure out. The site worked on localhost perfectly, but uploading it to the server seemed to break it. First thing I checked was the PHP.ini server settings. Everything seems fine. In fact, my login system is session based and it works perfectly. So now that I know $_SESSIONS are working properly and retaining their value for my login, I'm presuming the server is setup and the problem is in my script. Here's a stripped version of the code that's causing a problem. $type, $order and $style are not being retained after they are set via a GET variable. The user clicks a link, which sets a variable via GET, and this variable is retained for the remainder of their session. Is there some problem with my logic that I'm not seeing? <?php require_once('includes/top.php'); //first line includes a call to session_start(); require_once('includes/db.php'); $type = filter_input(INPUT_GET, 't', FILTER_VALIDATE_INT); $order = filter_input(INPUT_GET, 'o', FILTER_VALIDATE_INT); $style = filter_input(INPUT_GET, 's', FILTER_VALIDATE_INT); /* According to documentation, filter_input returns a NULL when variables are undefined. So, if t, o, or s are not set via URL, $type, $order and $style will be NULL. */ print_r($_SESSION); /* All other sessions, such as the login session, etc. are displayed here. After the sessions are set below, they are displayed up here to... simply with no value. This leads me to believe the problem is with the code below, perhaps? */ // If $type is not null (meaning it WAS set via the get method above) // or it's false because the validation failed for some reason, // then set the session to the $type. I removed the false check for simplicity. // This code is being successfully executed, and the data is being stored... if(!is_null($type)) { $_SESSION['type'] = $type; } if(!is_null($order)) { $_SESSION['order'] = $order; } if(!is_null($style)) { $_SESSION['style'] = $style; } $smarty->display($template); ?> If anyone can point me in the right direction, I'd greatly appreciate it. Thanks.
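
    When identical code works on localhost but not on the server, the usual suspects are the session cookie not coming back (domain/path/secure mismatch, e.g. behind a proxy) or an unwritable session save path; a small diagnostic sketch that could be dropped in after the includes (the functions are standard PHP, logging to error_log is just an assumption for illustration):

        <?php
        // If the id changes on every request, the session cookie is being lost;
        // if the save path is not writable, session data is silently discarded.
        error_log('session id: ' . session_id());
        error_log('save path: ' . session_save_path()
            . ' writable: ' . var_export(is_writable(session_save_path()), true));
        error_log('cookie params: ' . var_export(session_get_cookie_params(), true));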

    Read the article

  • How do you hide an image tag based on an ajax response?

    - by Chris
    What is the correct jquery statement to replace the "//Needed magic" comments below so that the image tags are hidden or unhidden based on the AJAX responses? <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>JQuery</title> <style type="text/css"> .isSolvedImage{ width: 68px; height: 47px; border: 1px solid red; cursor: pointer; } </style> <script src="_js/jquery-1.4.2.min.js" type="text/javascript"> </script> </head> <body> <div id='true1.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <div id='false1.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <div id='true2.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <div id='false2.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <script type="text/javascript"> $(function(){ var getDivs = 0; //iterate div with class isSolvedImage $("div.isSolvedImage").each(function() { alert('div id--'+this.id); // send ajax requrest $.get(this.id, function(data) { // check if ajax response is 1 alert('div id--'+this.url+'--ajax response--'+data); if(data == 1){ alert('div id--'+this.url+'--Unhiding image--'); //Needed magic //Show image if data==1 } else{ alert('div id--'+this.url+'--Hiding image--'); //Needed magic //Hide image if data!=1 } }); }); }); </script> </body> </html>
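
    The tricky part is that this inside the $.get callback is no longer the div, so keeping a reference before the request starts makes the show/hide straightforward; a minimal sketch for the "Needed magic" spots, using the markup above:

        $(function () {
          $("div.isSolvedImage").each(function () {
            var $div = $(this);                 // keep a reference for the async callback
            $.get(this.id, function (data) {
              if (data == 1) {
                $div.find("img").show();        // unhide the image on a "1" response
              } else {
                $div.find("img").hide();        // hide it otherwise
              }
            });
          });
        });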

    Read the article

  • How do I select all parent items based on a given node's attribute in PHP's XPath?

    - by bakkelun
    I have an XML feed that looks something like this (excerpt): <channel> <title>Channel Name</title> <link>Link to the channel</link> <item> <title>Heading 1</title> <link>http://www.somelink.com?id=100</link> <description><![CDATA[ Text here ]]></description> <publishDate>Fri, 03 Apr 2009 10:00:00</publishDate> <guid>http://www.somelink.com/read-story-100</guid> <category domain="http://www.somelink.com/?category=4">Category 1</category> </item> <item> <title>Heading 2</title> <link>http://www.somelink.com?id=110</link> <description><![CDATA[ Text here ]]></description> <publishDate>Fri, 03 Apr 2009 11:00:00</publishDate> <guid>http://www.somelink.com/read-story-110</guid> <category domain="http://www.somelink.com/?category=4">Category 1</category> </item> </channel> That's the rough shape of it. I'm using this piece of PHP (excerpt): $xml = simplexml_load_file($xmlFile); $xml->xpath($pattern); Now I want to get all ITEM-nodes (with their children) based on that pesky "domain" attribute in the category node, but no matter what I try it does not work. The closest I got was "//category[@domain= 'http://www.somelink.com/?category=4']" The expression I tried gave me this result: [0] => SimpleXMLElement Object ( [@attributes] => Array ( [domain] => http://www.somelink.com/?category=4 ) [0] => Category 1 [1] => SimpleXMLElement Object ( [@attributes] => Array ( [domain] => http://www.somelink.com/?category=4 ) [0] => Category 1 The result should contain all children of the two items in the example, but as you can see only the info in the category node is present; I want all the item nodes. Any help would be highly appreciated.
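
    Moving the predicate from category up to item returns the whole item (with all of its children) rather than just the matching category node; a short sketch with SimpleXML:

        <?php
        // Select every <item> whose <category> carries the given domain attribute.
        $xml   = simplexml_load_file($xmlFile);
        $items = $xml->xpath("//item[category/@domain='http://www.somelink.com/?category=4']");

        foreach ($items as $item) {
            // The full item is available: title, link, guid, publishDate, ...
            echo (string) $item->title, ' - ', (string) $item->link, "\n";
        }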

    Read the article

  • Play a sound based on the button pressed on the iPhone/Xcode.

    - by slickplaid
    I'm trying to play a sound based on which button is pressed using AVAudioPlayer. (This is not a soundboard or fart app.) I have linked all buttons using this code in the header file: @interface appViewController : UIViewController <AVAudioPlayerDelegate> { AVAudioPlayer *player; UIButton *C4; UIButton *Bb4; UIButton *B4; UIButton *A4; UIButton *Ab4; UIButton *As4; UIButton *G3; UIButton *Gb3; UIButton *Gs3; UIButton *F3; UIButton *Fs3; UIButton *E3; UIButton *Eb3; UIButton *D3; UIButton *Db3; UIButton *Ds3; UIButton *C3; UIButton *Cs3; } @property (nonatomic, retain) AVAudioPlayer *player; @property (nonatomic, retain) IBOutlet UIButton *C4; @property (nonatomic, retain) IBOutlet UIButton *B4; @property (nonatomic, retain) IBOutlet UIButton *Bb4; @property (nonatomic, retain) IBOutlet UIButton *A4; @property (nonatomic, retain) IBOutlet UIButton *Ab4; @property (nonatomic, retain) IBOutlet UIButton *As4; @property (nonatomic, retain) IBOutlet UIButton *G3; @property (nonatomic, retain) IBOutlet UIButton *Gb3; @property (nonatomic, retain) IBOutlet UIButton *Gs3; @property (nonatomic, retain) IBOutlet UIButton *F3; @property (nonatomic, retain) IBOutlet UIButton *Fs3; @property (nonatomic, retain) IBOutlet UIButton *E3; @property (nonatomic, retain) IBOutlet UIButton *Eb3; @property (nonatomic, retain) IBOutlet UIButton *D3; @property (nonatomic, retain) IBOutlet UIButton *Db3; @property (nonatomic, retain) IBOutlet UIButton *Ds3; @property (nonatomic, retain) IBOutlet UIButton *C3; @property (nonatomic, retain) IBOutlet UIButton *Cs3; - (IBAction) playNote; @end Buttons are all linked to the event "playNote" in interfaceBuilder and each note is linked to the proper referencing outlet according to note name. All *.mp3 sound files are named after the UIButton name (IE- C3 == C3.mp3). In my implementation file, I have this to play a only one note when the C3 button is pressed: #import "sonicfitViewController.h" @implementation appViewController @synthesize C3, Cs3, D3, Ds3, Db3, E3, Eb3, F3, Fs3, G3, Gs3, A4, Ab4, As4, B4, Bb4, C4; // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { NSString *path = [[NSBundle mainBundle] pathForResource:@"3C" ofType:@"mp3"]; NSLog(@"path: %@", path); NSURL *file = [[NSURL alloc] initFileURLWithPath:path]; AVAudioPlayer *p = [[AVAudioPlayer alloc] initWithContentsOfURL:file error:nil]; [file release]; self.player = p; [p release]; [player prepareToPlay]; [player setDelegate:self]; [super viewDidLoad]; } - (IBAction) playNote { [self.player play]; } Now, with the above I have two issues: First, the NSLog reports NULL and crashes when trying to play the file. I have added the mp3's to the resources folder and they have been copied and not just linked. They are not in an subfolder under the resources folder. Secondly, how can I set it up so that when say button C3 is pressed, it plays C3.mp3 and F3 plays F3.mp3 without writing duplicate lines of code for each different button? playNote should be like NSString *path = [[NSBundle mainBundle] pathForResource:nameOfButton ofType:@"mp3"]; instead of defining it specifically (@"C3"). Is there a better way of doing this and why does the *path report NULL and crash when I load the app? I'm pretty sure it's something as simple as adding additional variable inputs to - (IBAction) playNote:buttonName and putting all the code to call AVAudioPlayer in the playNote function but I'm unsure of the code to do this.
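
    Two things stand out in the snippet above: the resource is requested as "3C" while the files are named like "C3.mp3" (which by itself makes pathForResource: return nil), and playNote takes no sender, so it cannot know which button was tapped. A hedged sketch of a single shared action, assuming every button's title matches its mp3 name and that the buttons are re-wired to playNote: in Interface Builder (manual retain/release kept to match the question's code):

        // One action for all buttons; wire each button's Touch Up Inside to playNote:.
        - (IBAction)playNote:(UIButton *)sender {
            NSString *name = [sender currentTitle];          // e.g. @"C3" -> C3.mp3
            NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"mp3"];
            if (!path) { NSLog(@"Missing sound file: %@", name); return; }

            AVAudioPlayer *p = [[AVAudioPlayer alloc]
                                   initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                   error:nil];
            self.player = p;        // retained by the property
            [p release];
            [self.player play];
        }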

    Read the article

  • Policy-based template design: How to access certain policies of the class?

    - by dehmann
    I have a class that uses several policies that are templated. It is called Dish in the following example. I store many of these Dishes in a vector (using a pointer to simple base class), but then I'd like to extract and use them. But I don't know their exact types. Here is the code; it's a bit long, but really simple: #include <iostream> #include <vector> struct DishBase { int id; DishBase(int i) : id(i) {} }; std::ostream& operator<<(std::ostream& out, const DishBase& d) { out << d.id; return out; } // Policy-based class: template<class Appetizer, class Main, class Dessert> class Dish : public DishBase { Appetizer appetizer_; Main main_; Dessert dessert_; public: Dish(int id) : DishBase(id) {} const Appetizer& get_appetizer() { return appetizer_; } const Main& get_main() { return main_; } const Dessert& get_dessert() { return dessert_; } }; struct Storage { typedef DishBase* value_type; typedef std::vector<value_type> Container; typedef Container::const_iterator const_iterator; Container container; Storage() { container.push_back(new Dish<int,double,float>(0)); container.push_back(new Dish<double,int,double>(1)); container.push_back(new Dish<int,int,int>(2)); } ~Storage() { // delete objects } const_iterator begin() { return container.begin(); } const_iterator end() { return container.end(); } }; int main() { Storage s; for(Storage::const_iterator it = s.begin(); it != s.end(); ++it){ std::cout << **it << std::endl; std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? } return 0; } The tricky part is here, in the main() function: std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? How can I access the dessert? I don't even know the Dessert type (it is templated), let alone the complete type of the object that I'm getting from the storage. This is just a toy example, but I think my code reduces to this. I'd just like to pass those Dish classes around, and different parts of the code will access different parts of it (in the example: its appetizer, main dish, or dessert).
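
    If the calling code knows which concrete Dish it expects, dynamic_cast can recover it from the DishBase* (the base class would need at least one virtual function, e.g. a virtual destructor, for that to work, and note that *it->get_dessert() parses as *(it->get_dessert()), so parentheses are needed in any case); a small sketch against the loop above. The alternative is to push the operation itself (printing the dessert, say) behind a virtual function on DishBase so callers never need the concrete type.

        // Inside the loop over the storage; DishBase must be polymorphic for dynamic_cast.
        Dish<int, double, float>* d = dynamic_cast<Dish<int, double, float>*>(*it);
        if (d) {
            std::cout << "Dessert: " << d->get_dessert() << std::endl;
        }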

    Read the article

  • How to get WPF TreeViewItems updated automatically with values based on .NET class properties?

    - by ProgrammerManiac
    Good morning. I have a class that derives from INotifyPropertyChanged. The data comes from a background thread, which searches for files with a certain extension in certain locations. A public property of the class reacts to the OnPropertyChanged event by updating data in a separate thread. Besides that, there is a TreeView described in XAML, based on HierarchicalDataTemplates. Each TextBlock inside the templates is supplied with ItemsSource="{Binding FoundFilePaths, Mode=OneWay, NotifyOnTargetUpdated=True}". <HierarchicalDataTemplate DataType = "{x:Type lightvedo:FilesInfoStore}" ItemsSource="{Binding FoundFilePaths, Mode=OneWay, NotifyOnTargetUpdated=True}"> <StackPanel x:Name ="TreeNodeStackPanel" Orientation="Horizontal"> <TextBlock Margin="5,5,5,5" TargetUpdated="TextBlockExtensions_TargetUpdated"> <TextBlock.Text> <MultiBinding StringFormat="Files with Extension {0}"> <Binding Path="FileExtension"/> </MultiBinding> </TextBlock.Text> </TextBlock> <Button x:Name="OpenFolderForThisFiles" Click="OnOpenFolderForThisFiles_Click" Margin="5, 3, 5, 3" Width="22" Background="Transparent" BorderBrush="Transparent" BorderThickness="0.5"> <Image Source="images\Folder.png" Height="16" Width="20" > </Image> </Button> </StackPanel> </HierarchicalDataTemplate> <HierarchicalDataTemplate DataType="{x:Type lightvedo:FilePathsStore}"> <TextBlock Text="{Binding FilePaths, Mode=OneWay, NotifyOnTargetUpdated=True}" TargetUpdated="OnTreeViewNodeChildren_Update" /> </HierarchicalDataTemplate> </TreeView.Resources> <TreeView.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform AngleX="-0.083"/> <RotateTransform/> <TranslateTransform X="-0.249"/> </TransformGroup> </TreeView.RenderTransform> <TreeView.BorderBrush> <LinearGradientBrush EndPoint="1,0.5" StartPoint="0,0.5"> <GradientStop Color="#FF74591F" Offset="0" /> <GradientStop Color="#FF9F7721" Offset="1" /> <GradientStop Color="#FFD9B972" Offset="0.49" /> </LinearGradientBrush> </TreeView.BorderBrush> </TreeView> Q: Why does the data from the class derived from INotifyPropertyChanged not affect the display of the tree items? Do I understand correctly that the interface will make the TreeViewItems redraw automatically, or do I need to carry out this operation manually? Currently the TreeViewItems are not updated and PropertyChanged is always null. It feels like there are no subscribers to the OnPropertyChanged event.
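
    INotifyPropertyChanged does make bound values refresh automatically, but only when the event is actually raised on the instance the UI is bound to, and only for scalar properties; items added to a collection only appear if the collection is an ObservableCollection. A PropertyChanged delegate that stays null usually means the UI is bound to a different instance than the one being updated, and changes made on a background thread have to be marshalled to the dispatcher. A minimal sketch of the pattern, reusing the class names from the XAML (the property bodies are assumptions):

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        public class FilesInfoStore : INotifyPropertyChanged
        {
            private readonly ObservableCollection<FilePathsStore> foundFilePaths =
                new ObservableCollection<FilePathsStore>();

            // Bound by the HierarchicalDataTemplate's ItemsSource; adds/removes show up automatically.
            public ObservableCollection<FilePathsStore> FoundFilePaths
            {
                get { return foundFilePaths; }
            }

            private string fileExtension;
            public string FileExtension
            {
                get { return fileExtension; }
                set
                {
                    if (fileExtension == value) return;
                    fileExtension = value;
                    OnPropertyChanged("FileExtension");   // without this the TextBlock never updates
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;            // null here means nothing is bound to THIS instance
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }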

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate a source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. If a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3), for example, it can be successfully inserted into TARGET_1 and TARGET_3, but failed to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do, in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable.                                               Example Mapping In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirements. Below we will explore the combination of these two features and how they affect the results in the target tables Before going to the example, let’s review some of the terms we will be using (Details can be found in white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide11g Release 2): Operating Modes: Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations. Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL. Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately. Commit Strategies: Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant. Automatic correlated: It is a specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly. Manual: select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data. Combination of the commit strategy and operating mode To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. Firstly we insert 100 rows into the SOURCE table and make sure that the 99th row and 100th row have the same ID value. And then we create a unique key constraint on ID column for TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows to each of the targets. But the mapping should fail to load the 100th row to TARGET_2, because it will violate the unique key constraint of table TARGET_2. 
With different combinations of Commit Strategy and Operating Mode, here are the results ¦ Set-based/ Correlated Commit: Configuration of Example mapping:                                                     Result:                                                      What’s happening: A single error anywhere in the mapping triggers the rollback of all data. OWB encounters the error inserting into Target_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables. ¦ Row-based/ Correlated Commit: Configuration of Example mapping:                                                   Result:                                                  What’s happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100th to Target_2. OWB reports the error and does not load the row. It rolls back the row 100th previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target. ¦ Set-based/ Automatic Commit: Configuration of Example mapping: Result: What’s happening: When OWB encounters the error inserting into Target_2, it does not load any rows and reports an error for the table. It does, however, continue to insert rows into Target_3 and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into Target_1 and Target_3 separately. ¦ Row-based/Automatic Commit: Configuration of Example mapping: Result: What’s happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2 and reports the error. OWB does not roll back row 100th from Target_1, does insert it into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets. Note: Automatic Correlated commit is not applicable for row-based (target only). If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator. If you want to practice in your own environment, you can follow the steps: 1. Import the MDL file: commit_operating_mode.mdl 2. Fix the location for oracle module ORCL and deploy all tables under it. 3. Insert sample records into SOURCE table, using below plsql code: begin     for i in 1..99     loop         insert into source values(i, 'col_'||i);     end loop;     insert into source values(99, 'col_99'); end; 4. Configure MAPPING_1 to any combinations of operating mode and commit strategy you want to test. And make sure feature TLO of mapping is open. 5. 
Deploy Mapping “MAPPING_1”. 6. Run the mapping and check the result.

    Read the article

  • How do I make a simple image-based button with visual states in Silverlight 3?

    - by Jacob
    At my previous company, we created our RIAs using Flex with graphical assets created in Flash. In Flash, you could simply lay out your graphics for different states, i.e. rollover, disabled. Now, I'm working on a Silverlight 3 project. I've been given a bunch of images that need to serve as the graphics for buttons that have a rollover, pressed, and normal state. I cannot figure out how to simply create buttons with different images for different visual states in Visual Studio 2008 or Expression Blend 3. Here's where I am currently. My button is defined like this in the XAML: <Button Style="{StaticResource MyButton}"/> The MyButton style appears as follows: <Style x:Key="MyButton" TargetType="Button"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Image Source="/Assets/Graphics/mybtn_up.png" Width="54" Height="24"> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused"/> <VisualState x:Name="Unfocused"/> </VisualStateGroup> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"/> <VisualState x:Name="MouseOver"/> <VisualState x:Name="Pressed"/> <VisualState x:Name="Disabled"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups> </Image> </ControlTemplate> </Setter.Value> </Setter> </Style> I cannot figure out how to assign a different template to different states, nor how to change the image's source based on which state I'm in. How do I do this? Also, if you know of any good documentation that describes how styles work in Silverlight, that would be great. All of the search results I can come up with are frustratingly unhelpful. Edit: I found a way to change the image via storyboards like this: <Style x:Key="MyButton" TargetType="Button"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Image Source="/Assets/Graphics/mybtn_up.png" Width="54" Height="24" x:Name="Image"> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="FocusStates"> <VisualState x:Name="Focused"/> <VisualState x:Name="Unfocused"/> </VisualStateGroup> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"/> <VisualState x:Name="MouseOver"> <Storyboard Storyboard.TargetName="Image" Storyboard.TargetProperty="Source"> <ObjectAnimationUsingKeyFrames> <DiscreteObjectKeyFrame KeyTime="0" Value="/Assets/Graphics/mybtn_over.png"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="Pressed"> <Storyboard Storyboard.TargetName="Image" Storyboard.TargetProperty="Source"> <ObjectAnimationUsingKeyFrames> <DiscreteObjectKeyFrame KeyTime="0" Value="/Assets/Graphics/mybtn_active.png"/> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="Disabled"/> </VisualStateGroup> </VisualStateManager.VisualStateGroups> </Image> </ControlTemplate> </Setter.Value> </Setter> </Style> However, this seems like a strange way of doing things to me. Is there a more standard way of accomplishing this?

    Read the article

  • ActAs and OnBehalfOf support in WIF

    - by cibrax
    I discussed a time ago how WIF supported a new WS-Trust 1.4 element, “ActAs”, and how that element could be used for authentication delegation.  The thing is that there is another feature in WS-Trust 1.4 that also becomes handy for this kind of scenario, and I did not mention in that last post, “OnBehalfOf”. Shiung Yong wrote an excellent summary about the difference of these two new features in this forum thread. He basically commented the following, “An ActAs RST element indicates that the requestor wants a token that contains claims about two distinct entities: the requestor, and an external entity represented by the token in the ActAs element. An OnBehalfOf RST element indicates that the requestor wants a token that contains claims only about one entity: the external entity represented by the token in the OnBehalfOf element. In short, ActAs feature is typically used in scenarios that require composite delegation, where the final recipient of the issued token can inspect the entire delegation chain and see not just the client, but all intermediaries to perform access control, auditing and other related activities based on the whole identity delegation chain. The ActAs feature is commonly used in multi-tiered systems to authenticate and pass information about identities between the tiers without having to pass this information at the application/business logic layer. OnBehalfOf feature is used in scenarios where only the identity of the original client is important and is effectively the same as identity impersonation feature available in the Windows OS today. When the OnBehalfOf is used the final recipient of the issued token can only see claims about the original client, and the information about intermediaries is not preserved. One common pattern where OnBehalfOf feature is used is the proxy pattern where the client cannot access the STS directly but is instead communicating through a proxy gateway. The proxy gateway authenticates the caller and puts information about him into the OnBehalfOf element of the RST message that it then sends to the real STS for processing. The resulting token is going to contain only claims related to the client of the proxy, making the proxy completely transparent and not visible to the receiver of the issued token.” Going back to WIF, “ActAs” and “OnBehalfOf” are both supported as extensions methods in the WCF client channel. public static class ChannelFactoryOperations {   public static T CreateChannelActingAs<T>(this ChannelFactory<T> factory,     SecurityToken actAs);     public static T CreateChannelOnBehalfOf<T>(this ChannelFactory<T> factory,     SecurityToken onBehalfOf); } Both methods receive the security token with the identity of the original caller.
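
    A hypothetical usage sketch of the first extension method, where bootstrapToken is the caller's incoming token and the contract/endpoint names are placeholders:

        var factory = new ChannelFactory<IBackendService>("backendEndpoint");
        factory.ConfigureChannelFactory();                 // WIF channel configuration

        // The issued token carries claims for the caller *and* this intermediary (ActAs).
        var channel = factory.CreateChannelActingAs(bootstrapToken);
        channel.DoWork();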

    Read the article

  • SSH error: Permission denied, please try again

    - by Kamal
    I am new to ubuntu. Hence please forgive me if the question is too simple. I have a ubuntu server setup using amazon ec2 instance. I need to connect my desktop (which is also a ubuntu machine) to the ubuntu server using SSH. I have installed open-ssh in ubuntu server. I need all systems of my network to connect the ubuntu server using SSH (no need to connect through pem or pub keys). Hence opened SSH port 22 for my static IP in security groups (AWS). My SSHD-CONFIG file is: # Package generated configuration file # See the sshd_config(5) manpage for details # What ports, IPs and protocols we listen for Port 22 # Use these options to restrict which interfaces/protocols sshd will bind to #ListenAddress :: #ListenAddress 0.0.0.0 Protocol 2 # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key HostKey /etc/ssh/ssh_host_ecdsa_key #Privilege Separation is turned on for security UsePrivilegeSeparation yes # Lifetime and size of ephemeral version 1 server key KeyRegenerationInterval 3600 ServerKeyBits 768 # Logging SyslogFacility AUTH LogLevel INFO # Authentication: LoginGraceTime 120 PermitRootLogin yes StrictModes yes RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile %h/.ssh/authorized_keys # Don't read the user's ~/.rhosts and ~/.shosts files IgnoreRhosts yes # For this to work you will also need host keys in /etc/ssh_known_hosts RhostsRSAAuthentication no # similar for protocol version 2 HostbasedAuthentication no # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication #IgnoreUserKnownHosts yes # To enable empty passwords, change to yes (NOT RECOMMENDED) PermitEmptyPasswords no # Change to yes to enable challenge-response passwords (beware issues with # some PAM modules and threads) ChallengeResponseAuthentication no # Change to no to disable tunnelled clear text passwords #PasswordAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosGetAFSToken no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes X11Forwarding yes X11DisplayOffset 10 PrintMotd no PrintLastLog yes TCPKeepAlive yes #UseLogin no #MaxStartups 10:30:60 #Banner /etc/issue.net # Allow client to pass locale environment variables AcceptEnv LANG LC_* Subsystem sftp /usr/lib/openssh/sftp-server # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. UsePAM yes Through webmin (Command shell), I have created a new user named 'senthil' and added this new user to 'sudo' group. sudo adduser -y senthil sudo adduser senthil sudo I tried to login using this new user 'senthil' in 'webmin'. I was able to login successfully. When I tried to connect ubuntu server from my terminal through SSH, ssh senthil@SERVER_IP It asked me to enter password. After the password entry, it displayed: Permission denied, please try again. On some research I realized that, I need to monitor my server's auth log for this. 
I got the following error in my auth log (/var/log/auth.log) Jul 2 09:38:07 ip-192-xx-xx-xxx sshd[3037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=MY_CLIENT_IP user=senthil Jul 2 09:38:09 ip-192-xx-xx-xxx sshd[3037]: Failed password for senthil from MY_CLIENT_IP port 39116 ssh2 When I tried to debug using: ssh -v senthil@SERVER_IP OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: Applying options for * debug1: Connecting to SERVER_IP [SERVER_IP] port 22. debug1: Connection established. debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa type 1 debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048 debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048 debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa-cert type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa-cert type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa type -1 debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA {SERVER_HOST_KEY} debug1: Host 'SERVER_IP' is known and matches the ECDSA host key. debug1: Found key in {MY-WORKSPACE}/.ssh/known_hosts:1 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: password debug1: Next authentication method: password senthil@SERVER_IP's password: debug1: Authentications that can continue: password Permission denied, please try again. senthil@SERVER_IP's password: For password, I have entered the same value which I normally use for 'ubuntu' user. Can anyone please guide me where the issue is and suggest some solution for this issue?
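
    Since the debug output shows only the password method being offered and then rejected, the usual server-side checklist is to make sure the account's password really is what you think it is, that password authentication is not disabled, and to watch auth.log while retrying; a sketch (commands assume a stock Ubuntu openssh-server):

        # On the server:
        sudo passwd senthil                                        # (re)set the password explicitly
        sudo grep -i 'PasswordAuthentication' /etc/ssh/sshd_config # must not be set to 'no'
        sudo service ssh restart                                   # apply any sshd_config change
        sudo tail -f /var/log/auth.log                             # watch while retrying the login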

    Read the article

  • WIF, ADFS 2 and WCF – Part 6: Chaining multiple Token Services

    - by Your DisplayName here!
    See the previous posts first. So far we looked at the (simpler) scenario where a client acquires a token from an identity provider and uses that for authentication against a relying party WCF service. Another common scenario is, that the client first requests a token from an identity provider, and then uses this token to request a new token from a Resource STS or a partner’s federation gateway. This sounds complicated, but is actually very easy to achieve using WIF’s WS-Trust client support. The sequence is like this: Request a token from an identity provider. You use some “bootstrap” credential for that like Windows integrated, UserName or a client certificate. The realm used for this request is the identifier of the Resource STS/federation gateway. Use the resulting token to request a new token from the Resource STS/federation gateway. The realm for this request would be the ultimate service you want to talk to. Use this resulting token to authenticate against the ultimate service. Step 1 is very much the same as the code I have shown in the last post. In the following snippet, I use a client certificate to get a token from my STS: private static SecurityToken GetIdPToken() {     var factory = new WSTrustChannelFactory(         new CertificateWSTrustBinding(SecurityMode.TransportWithMessageCredential,         idpEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       factory.Credentials.ClientCertificate.SetCertificate(         StoreLocation.CurrentUser,         StoreName.My,         X509FindType.FindBySubjectDistinguishedName,         "CN=Client");       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(rstsRealm),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } To use a token to request another token is slightly different. First the IssuedTokenWSTrustBinding is used and second the channel factory extension methods are used to send the identity provider token to the Resource STS: private static SecurityToken GetRSTSToken(SecurityToken idpToken) {     var binding = new IssuedTokenWSTrustBinding();     binding.SecurityMode = SecurityMode.TransportWithMessageCredential;       var factory = new WSTrustChannelFactory(         binding,         rstsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;     factory.Credentials.SupportInteractive = false;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcRealm),         KeyType = KeyTypes.Symmetric     };       factory.ConfigureChannelFactory();     var channel = factory.CreateChannelWithIssuedToken(idpToken);     return channel.Issue(rst); } For this particular case I chose an ADFS endpoint for issued token authentication (see part 1 for more background). Calling the service now works exactly like I described in my last post. You may now wonder if the same thing can be also achieved using configuration only – absolutely. But there are some gotchas. First of all the configuration files becomes quite complex. As we discussed in part 4, the bindings must be nested for WCF to unwind the token call-stack. But in this case svcutil cannot resolve the first hop since it cannot use metadata to inspect the identity provider. This binding must be supplied manually. The other issue is around the value for the realm/appliesTo when requesting a token for the R-STS. 
Using the manual approach you have full control over that parameter and you can simply use the R-STS issuer URI. Using the configuration approach, the exact address of the R-STS endpoint will be used. This means that you may have to register multiple R-STS endpoints in the identity provider. Another issue you will run into is, that ADFS does only accepts its configured issuer URI as a known realm by default. You’d have to manually add more audience URIs for the specific endpoints using the ADFS Powershell commandlets. I prefer the “manual” approach. That’s it. Hope this is useful information.
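
    For completeness, the last hop could look like the sketch below: the token issued by the R-STS is attached to the channel for the actual service (the contract, binding and endpoint names here are placeholders, configured as in the earlier parts of the series):

        var idpToken  = GetIdPToken();
        var rstsToken = GetRSTSToken(idpToken);

        var factory = new ChannelFactory<IService>(serviceBinding, new EndpointAddress(svcRealm));
        factory.ConfigureChannelFactory();

        var client = factory.CreateChannelWithIssuedToken(rstsToken);
        client.DoWork();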

    Read the article

  • Oracle(R) Buys Pre-Paid Software Assets From eServGlobal

    - by Paulo Folgado
    Oracle to Deliver Scalable Carrier-Grade Pre-Paid Solution Based on Open, Flexible IT-Based Platform News Facts ·        Oracle has agreed to acquire certain pre-paid assets of eServGlobal, a provider of advanced IT-based, pre-paid charging solutions for the communications industry. ·        eServGlobal's Universal Service Platform (USP) includes a pre-paid charging application, a network-services platform and a messaging gateway. The ChargingMax, NumberMax, uVOMS, MessageMax, PromoMax Express and Social Relationship Management software currently supports more than 25 tier-one customers including the world's largest IT-based installation of pre-paid services. ·        The combination of Oracle Communications Billing and Revenue Management and the USP applications is expected to accelerate the shift from network- to IT-based pre-paid systems by providing the first convergent, open IT-based platform from a leading business software and hardware systems company. ·        Customers are expected to benefit from traditional carrier-grade, pre-paid service authorization with IT-grade flexibility that supports any service or network, is easier to deploy and maintain and delivers an overall lower total cost of ownership. ·        The transaction is expected to close in the second half of this year. Supporting Quote ·        "The majority of mobile phone users worldwide use pre-paid plans, and that number is growing exponentially. Oracle Communications applications combined with the pre-paid software assets from eServGlobal will provide our customers with highly available and scalable carrier-grade, pre-paid software on an open, convergent platform. This will enable our customers to deliver traditional pre-paid voice services and easily introduce hybrid pre-paid and post-paid plans with targeted pricing, promotions and service bundles that include voice, data and network services," said Liam Maxwell, vice president of products, Oracle Communications. Supporting Resources About Oracle and eServGlobal USP General Presentation FAQ

    Read the article

  • Grails project structure

    - by Martin Janicek
    Good news everyone! I've changed the structure of the Grails project as requested in the issue 160028 and it should be much more user friendly than before. There are actually two things I've fixed/implemented. First of all the source folders are finally represented in the same way as for the Java projects (which means instead of the folder based structure it uses package based structure). The difference can be seen on pictures bellow:    Folder based structure:                                                 Package based structure: Second, minor and quite related change could be seen on those pictures too. There are different icons for different structures. For example Views and Layouts items are folder based, Domain Classes are package based and so on.

    Read the article

  • New .Net Authentication in 4.5.1

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/11/05/new-.net-authentication-in-4.5.1.aspxThere has been a lot of traffic on my post about Simple Membership that came with the File new Project MVC 4 in 2012. I was reading the release notes for Visual Studio 2013 and .Net 4.5.1 and it mentioned a new/updated Authentication approach. “ASP.NET Identity is the new membership system for ASP.NET applications. ASP.NET Identity makes it easy to integrate user-specific profile data with application data. ASP.NET Identity also allows you to choose the persistence model for user profiles in your application. You can store the data in a SQL Server database or another data store, including NoSQL data stores such as Windows Azure Storage Tables” There’s a great page on the asp.net site that gives an introduction, overview, how to use it, and how to migrate to it. I won’t be doing a new project for awhile at work, but I’ll definitely be looking into this more when I get the time.

    Read the article

  • Oracle access management and federation components (article originally in Japanese)

    - by ???02
    This article (summarized from the Japanese original, which is only partially legible in this copy) surveys the access management components of Oracle's identity platform. Oracle Adaptive Access Manager provides risk-based, adaptive authentication and fraud prevention for web applications and online services. Oracle Identity Federation provides standards-based federated single sign-on across security domains, supporting SAML, ID-FF, WS-Federation and Windows CardSpace. Oracle Entitlements Server externalizes fine-grained authorization decisions from applications and is based on the OASIS XACML standard, so entitlement policies can be managed centrally instead of being hard-coded in each application. Oracle OpenSSO Security Token Service issues, renews and validates security tokens for web services in line with OASIS WS-Trust, propagating identity between web service consumers and providers. The article closes with a pointer to Oracle Direct for inquiries.

    Read the article

  • How to remove a node based on contents of a subnode and then convert to CSV with headers intact

    - by Morris Cox
    I'm downloading a 10MB zipped XML file with wget, unzipping it to 40MB, trying to weed out test and expired entries (if the text of a certain node is "This is a test opportunity. Please DO NOT apply!" or if is in the past), and then convert to CSV. However, I get this error: PHP Notice: Array to string conversion in /home/morris/projects/grantsgov/xml2csv.php on line 46 I get Array errors because some entries in the XML file have more than one occurrence of a node (different text contents). The test entries are still present. Contents of grantsgov.sh: #!/bin/bash wget --clobber "http://www.grants.gov/search/downloadXML.do;jsessionid=1n7GNpNF2tKZnRGLyQqf7Tl32hFJ1zndhfQpLrJJD11TTNzWMwDy!368676377?fname=GrantsDBExtract$(date +'%Y%m%d').zip" -O GrantsDBExtract$(date +"%Y%m%d").zip unzip GrantsDBExtract$(date +"%Y%m%d").zip php xml2csv.php zip GrantsDBExtracted$(date +"%Y%m%d").zip GrantsDBExtracted$(date +"%Y%m%d").csv Contents of xml2csv.php: <? $current_date = date("Ymd"); //$getfile=fopen("http://www.grants.gov/search/downloadXML.do;jsessionid=GqJNNmdLyJyMlsQqTzS2KdzgT5NMdhPp0QhG946JTmHzRltNTpMQ!368676377?fname=GrantsDBExtract$current_date.zip", "r"); $zip = zip_open("GrantsDBExtract$current_date.zip"); if(is_resource($zip)) { while ($zip_entry = zip_read($zip)) { $fp = fopen("./".zip_entry_name($zip_entry), "w"); if (zip_entry_open($zip, $zip_entry, "r")) { $buf = zip_entry_read($zip_entry, zip_entry_filesize($zip_entry)); fwrite($fp,"$buf"); zip_entry_close($zip_entry); fclose($fp); } } zip_close($zip); } // For future potential use $xml = new DOMDocument('1.0', 'ascii'); $xpath = new DOMXpath($xml); $xslFile = "feddata.xsl"; $filexml="GrantsDBExtract$current_date.xml"; if (file_exists($filexml)) { $xml = simplexml_load_file($filexml); echo "Loaded $filexml\n"; $xslt = new XSLTProcessor(); $xsl = new DOMDocument(); //$XSL->load('feddata.xsl', LIBXML_NOCDATA); $xsl->load($xslFile, LIBXML_NOCDATA); $xslt->importStylesheet($xsl); $xslt->transformToXML($xml); $f = fopen("GrantsDBExtracted$current_date.csv", 'w'); // create the CSV header row on the first time here $first = FALSE; $fields = array(); foreach($record as $key => $value) { $fields[] = $key; } fwrite($f,implode(";",$fields)."\n"); foreach ($xml->FundingOppSynopsis as $fos) { fputcsv($f, get_object_vars($fos),',','"'); } } fclose($f); } else { exit('Failed to open file.'); } ?> Contents of feddata.xsl: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml"/> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> <xsl:template match="FundingOppSynopsis[AgencyMailingAddress = 'This is a test opportunity. Please DO NOT apply!']"> </xsl:template> </xsl:stylesheet><xsl:strip-space elements="*"/> Part of the XML file: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE Grants SYSTEM "http://www.grants.gov/search/dtd/XMLExtract.dtd"> <Grants> <FundingOppSynopsis> <PostDate>08312007</PostDate> <UserID>None</UserID> <Password>None</Password> <FundingInstrumentType>CA</FundingInstrumentType> <FundingActivityCategory>DPR</FundingActivityCategory> <OtherCategoryExplanation>This is a test opportunity. Please DO NOT apply!</OtherCategoryExplanation> <NumberOfAwards>5</NumberOfAwards> <EstimatedFunding>4</EstimatedFunding> <AwardCeiling>2</AwardCeiling> <AwardFloor>1</AwardFloor> <AgencyMailingAddress>This is a test opportunity. Please DO NOT apply!</AgencyMailingAddress> <FundingOppTitle>This is a test opportunity. 
Please DO NOT apply!</FundingOppTitle> <FundingOppNumber>IVV-08312007-RG-OPP5</FundingOppNumber> <ApplicationsDueDate>09102007</ApplicationsDueDate> <ApplicationsDueDateExplanation>This is a test opportunity. Please DO NOT apply!</ApplicationsDueDateExplanation> <ArchiveDate>10102007</ArchiveDate> <Location>None</Location> <Office>None</Office> <Agency>None</Agency> <FundingOppDescription>This is a test opportunity. Please DO NOT apply!</FundingOppDescription> <CFDANumber>000000</CFDANumber> <EligibilityCategory>21</EligibilityCategory> <AdditionalEligibilityInfo>This is a test opportunity. Please DO NOT apply!</AdditionalEligibilityInfo> <CostSharing>N</CostSharing> <ObtainFundingOppText FundingOppURL="">Not Available</ObtainFundingOppText> <AgencyContact AgencyEmailDescriptor="This is a test opportunity. Please DO NOT apply!" AgencyEmailAddress="This is a test opportunity. Please DO NOT apply!">This is a test opportunity. Please DO NOT apply!</AgencyContact> </FundingOppSynopsis> How can I fix the error and remove expired entries (based on ArchiveDate)? I suspect the second to last line in feddata.xsl needs to be fixed.
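
    If the filtering is done in PHP rather than XSLT, both requirements (skip the test entries, skip expired ones) and the array-to-string notice can be handled in the write loop; a sketch against the code above, noting that ArchiveDate is MMDDYYYY in the feed and so has to be rearranged before comparing, and that repeated child nodes are flattened before fputcsv:

        <?php
        foreach ($xml->FundingOppSynopsis as $fos) {
            // Skip test entries and entries whose archive date is already in the past.
            $isTest   = trim((string) $fos->AgencyMailingAddress) === 'This is a test opportunity. Please DO NOT apply!';
            $archive  = (string) $fos->ArchiveDate;                        // e.g. "10102007" (MMDDYYYY)
            $sortable = substr($archive, 4, 4) . substr($archive, 0, 4);   // -> "20071010" (YYYYMMDD)
            if ($isTest || $sortable < date('Ymd')) {
                continue;
            }

            // Repeated child nodes come back as arrays from get_object_vars(), which is what
            // triggers the "Array to string conversion" notice; flatten them first.
            $row = array();
            foreach (get_object_vars($fos) as $value) {
                $row[] = is_array($value) ? implode('|', $value) : (string) $value;
            }
            fputcsv($f, $row, ',', '"');
        }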

    Read the article

  • Add UIView and UILabel to UICollectionViewCell. Then Segue based on clicked cell index

    - by JetSet
    I am new to collection views in Objective-C. Can anyone tell me why I can't see my UILabel embedded in the transparent UIView, and what the best way to resolve it is? I also want to segue from the cell to various UIViewControllers based on the selected cell index. I am using the GitHub project https://github.com/mayuur/MJParallaxCollectionView. Overall, in MJRootViewController.m I wanted to add a UIView with a transparency and a UILabel with details of the cell from an array.

    MJCollectionViewCell.h

//
//  MJCollectionViewCell.h
//  RCCPeakableImageSample
//
//  Created by Mayur on 4/1/14.
//  Copyright (c) 2014 RCCBox. All rights reserved.
//

#import <UIKit/UIKit.h>

#define IMAGE_HEIGHT 200
#define IMAGE_OFFSET_SPEED 25

@interface MJCollectionViewCell : UICollectionViewCell

/* image used in the cell which will be having the parallax effect */
@property (nonatomic, strong, readwrite) UIImage *image;

/* Image will always animate according to the imageOffset provided. Higher the value means higher offset for the image */
@property (nonatomic, assign, readwrite) CGPoint imageOffset;

//@property (nonatomic, readwrite) UILabel *textLabel;
@property (weak, nonatomic) IBOutlet UILabel *textLabel;
@property (nonatomic, readwrite) NSString *text;
@property (nonatomic, readwrite) CGFloat x, y, width, height;
@property (nonatomic, readwrite) NSInteger lineSpacing;
@property (nonatomic, strong) IBOutlet UIView *overlayView;

@end

    MJCollectionViewCell.m

//
//  MJCollectionViewCell.m
//  RCCPeakableImageSample
//
//  Created by Mayur on 4/1/14.
//  Copyright (c) 2014 RCCBox. All rights reserved.
//

#import "MJCollectionViewCell.h"

@interface MJCollectionViewCell()
@property (nonatomic, strong, readwrite) UIImageView *MJImageView;
@end

@implementation MJCollectionViewCell

- (instancetype)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) [self setupImageView];
    return self;
}

- (id)initWithCoder:(NSCoder *)aDecoder
{
    self = [super initWithCoder:aDecoder];
    if (self) [self setupImageView];
    return self;
}

/*
// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect
{
    // Drawing code
}
*/

#pragma mark - Setup Method
- (void)setupImageView
{
    // Clip subviews
    self.clipsToBounds = YES;

    // Add image subview
    self.MJImageView = [[UIImageView alloc] initWithFrame:CGRectMake(self.bounds.origin.x, self.bounds.origin.y, self.bounds.size.width, IMAGE_HEIGHT)];
    self.MJImageView.backgroundColor = [UIColor redColor];
    self.MJImageView.contentMode = UIViewContentModeScaleAspectFill;
    self.MJImageView.clipsToBounds = NO;
    [self addSubview:self.MJImageView];
}

#pragma mark - Setters
- (void)setImage:(UIImage *)image
{
    // Store image
    self.MJImageView.image = image;

    // Update padding
    [self setImageOffset:self.imageOffset];
}

- (void)setImageOffset:(CGPoint)imageOffset
{
    // Store padding value
    _imageOffset = imageOffset;

    // Grow image view
    CGRect frame = self.MJImageView.bounds;
    CGRect offsetFrame = CGRectOffset(frame, _imageOffset.x, _imageOffset.y);
    self.MJImageView.frame = offsetFrame;
}

- (void)setText:(NSString *)text
{
    _text = text;
    if (!self.textLabel) {
        CGFloat realH = self.height * 2 / 3 - self.lineSpacing;
        CGFloat latoA = realH / 3;
        // self.textLabel = [[UILabel alloc] initWithFrame:CGRectMake(10, latoA/2, self.width-20, realH)];
        self.textLabel.layer.anchorPoint = CGPointMake(.5, .5);
        self.textLabel.font = [UIFont fontWithName:@"HelveticaNeue-ultralight" size:38];
        self.textLabel.numberOfLines = 3;
        self.textLabel.textColor = [UIColor whiteColor];
        self.textLabel.shadowColor = [UIColor blackColor];
        self.textLabel.shadowOffset = CGSizeMake(1, 1);
        self.textLabel.transform = CGAffineTransformMakeRotation(-(asin(latoA / (sqrt(self.width * self.width + latoA * latoA)))));
        [self addSubview:self.textLabel];
    }
    self.textLabel.text = text;
}

@end

    MJViewController.h

//
//  MJViewController.h
//  ParallaxImages
//
//  Created by Mayur on 4/1/14.
//  Copyright (c) 2014 sky. All rights reserved.
//

#import <UIKit/UIKit.h>

@interface MJRootViewController : UIViewController
{
    NSInteger choosed;
}
@end

    MJViewController.m

//
//  MJViewController.m
//  ParallaxImages
//
//  Created by Mayur on 4/1/14.
//  Copyright (c) 2014 sky. All rights reserved.
//

#import "MJRootViewController.h"
#import "MJCollectionViewCell.h"

@interface MJRootViewController () <UICollectionViewDataSource, UICollectionViewDelegate, UIScrollViewDelegate>
@property (weak, nonatomic) IBOutlet UICollectionView *parallaxCollectionView;
@property (nonatomic, strong) NSMutableArray *images;
@end

@implementation MJRootViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    //self.navigationController.navigationBarHidden = YES;

    // Fill image array with images
    NSUInteger index;
    for (index = 0; index < 14; ++index) {
        // Setup image name
        NSString *name = [NSString stringWithFormat:@"image%03ld.jpg", (unsigned long)index];
        if (!self.images) self.images = [NSMutableArray arrayWithCapacity:0];
        [self.images addObject:name];
    }
    [self.parallaxCollectionView reloadData];
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

#pragma mark - UICollectionViewDatasource Methods
- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section
{
    return self.images.count;
}

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
    MJCollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"MJCell" forIndexPath:indexPath];

    // get image name and assign
    NSString *imageName = [self.images objectAtIndex:indexPath.item];
    cell.image = [UIImage imageNamed:imageName];

    // set offset accordingly
    CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - cell.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED;
    cell.imageOffset = CGPointMake(0.0f, yOffset);

    NSString *text;
    NSInteger index = choosed >= 0 ? choosed : indexPath.row % 5;
    switch (index) {
        case 0:
            text = @"I am the home cell...";
            break;
        case 1:
            text = @"I am next...";
            break;
        case 2:
            text = @"Cell 3...";
            break;
        case 3:
            text = @"Cell 4...";
            break;
        case 4:
            text = @"The last cell";
            break;
        default:
            break;
    }
    cell.text = text;

    cell.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f];
    //cell.textLabel.text = @"Label showing";
    cell.textLabel.font = [UIFont boldSystemFontOfSize:22.0f];
    cell.textLabel.textColor = [UIColor whiteColor];

    // This is another attempt to display the label by using tags.
    //UILabel *label = (UILabel *)[cell viewWithTag:1];
    //label.text = @"Label works";

    return cell;
}

#pragma mark - UIScrollViewDelegate methods
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (MJCollectionViewCell *view in self.parallaxCollectionView.visibleCells) {
        CGFloat yOffset = ((self.parallaxCollectionView.contentOffset.y - view.frame.origin.y) / IMAGE_HEIGHT) * IMAGE_OFFSET_SPEED;
        view.imageOffset = CGPointMake(0.0f, yOffset);
    }
}

@end
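    A likely reason the label stays invisible, judging from the code above: textLabel is declared as a weak IBOutlet, but the programmatic creation in setText: is commented out, so unless the prototype cell in the storyboard/xib actually has that outlet connected, self.textLabel remains nil and every assignment to it silently does nothing. Below is a minimal sketch of creating the overlay and label in code instead; the method name, frames, and fonts are illustrative assumptions, not part of the original project.

// Hypothetical helper for MJCollectionViewCell.m; call it from setupImageView.
- (void)setupOverlayAndLabel
{
    if (!self.overlayView) {
        // Semi-transparent overlay covering the whole cell
        self.overlayView = [[UIView alloc] initWithFrame:self.bounds];
        self.overlayView.backgroundColor = [UIColor colorWithWhite:0.0f alpha:0.4f];
        [self addSubview:self.overlayView];
    }
    if (!self.textLabel) {
        // Label inset inside the overlay; the overlay retains it, so the weak property stays valid
        UILabel *label = [[UILabel alloc] initWithFrame:CGRectInset(self.overlayView.bounds, 10.0f, 10.0f)];
        label.font = [UIFont boldSystemFontOfSize:22.0f];
        label.textColor = [UIColor whiteColor];
        label.numberOfLines = 3;
        [self.overlayView addSubview:label];
        self.textLabel = label;
    }
}

    For the segue part of the question, one common approach is to implement collectionView:didSelectItemAtIndexPath: in MJRootViewController and call performSegueWithIdentifier:sender: with an identifier chosen from indexPath.item; the exact identifiers depend on how the storyboard is set up.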

    Read the article

  • Using ext4 in VMware machine

    First of all, using a journaling filesystem like NTFS, ext4, XFS, or JFS (to name just a few) is a very good idea; nowadays it is practically unthinkable not to. Linux offers a good variety of journaling filesystems for your system. For years I have been using SGI's XFS, and I am pretty confident in the stability, performance and reliability of the system. In earlier years I had to struggle with incompatibilities between XFS and the boot loader; using an ext2-formatted /boot solved this issue. But, wow, that is ages ago!

    Lately, I had to set up a fresh Lucid Lynx (Ubuntu 10.04 LTS) system for a change of our internal groupware / messaging system. Therefore, I fired up a new virtual machine with an almost standard configuration in VMware Server and ran through our network-based PXE boot and installation procedure. At a certain step in this process, Ubuntu asks you about the partitioning of your hard drive(s). Honestly, I have to admit that only out of curiosity I stuck with the "default" suggestion and put my faith and trust in the Ubuntu installation routine... The result was an ext4-based root mount point ( / ). The rest of the installation went on without further concerns or worries.

    Note: I really can't remember why I chose to go away from my favourite... Well, it turned out to be the wrong decision after all.

    Ok, let's continue the story about ext4 in a VMware-based virtual machine. After some hours installing additional packages and configuring the new system to use LDAP for general authentication and login, I had an "out-of-the-box" usable enterprise messaging system based on Zarafa 6.40 Community Edition, including a proper SSL-based Webaccess interface and the Z-Push extension for ActiveSync with my Nokia mobile. Straightforward and pretty nice for the time spent on the setup. Having priority on other tasks, I let the system just run and didn't pay any further attention to it. Until I ran into an upgrade of "Mail for Exchange" on Symbian OS. My mobile did not bother me at all with the upgrade and everything went smoothly, but trying to re-establish the ActiveSync connection to the Zarafa messaging system resulted in a frustrating situation.

    So, I shifted my focus back to the Linux system and was amazed to find that the root filesystem had been remounted read-only due to hard drive failures, or at least errors reported by ext4. Firing up Google only confirmed my concerns, and it seems to me that ext4 is not a stable and reliable candidate for VMware-based virtual machines. You might consider reading these external resources:

    - ext4 fs corruption under VMWare Server 2.01
    - Bug #389555 - ext4 filesystem corruption

    Well, I learned my lesson, and ext{2|3|4} based filesystems are not going to be used on any of my Linux systems or customer installations in the future.

    Addendum: I did not try this setup in other virtualization environments like VirtualBox, qemu, kvm, Xen, etc.
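    For readers who want to check whether their own virtual machine has hit the same symptom, a generic sketch using standard Linux tools follows; the device name is an assumption and must be adjusted to your setup.

# Is / currently mounted read-only? Look for "ro" among the mount options.
grep ' / ' /proc/mounts

# Any ext4, journal, or remount errors reported by the kernel?
dmesg | grep -iE 'ext4|journal|remount'

# Filesystem state recorded in the superblock (device name is an assumption).
tune2fs -l /dev/sda1 | grep -iE 'state|error'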

    Read the article

  • WebCenter Customer Spotlight: Marvel

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Marvel Entertainment, LLC (Marvel) is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media for over seventy years. The customer wanted to optimize its brand licensing process, so Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based, secure intranet/partner extranet solution now manages the entire life cycle of the brand licensing process. Marvel and its brand licensees now have complete visibility of brand license operations, including the history of approval requests and related content.

    Company Overview
    Marvel Entertainment, LLC (Marvel), a wholly-owned subsidiary of The Walt Disney Company, is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media for over seventy years. Marvel utilizes its character franchises in entertainment, licensing and publishing. Sample characters:
    - Spider-Man
    - Iron Man
    - Captain America
    - X-MEN
    - Thor
    - Avengers
    - And a host of others

    Business Challenges
    Marvel wanted to optimize the brand licensing process for its characters and had the following business requirements:
    - Facilitating content worldwide
    - A scalable and flexible infrastructure to manage multiple content types and huge file sizes
    - Optimizing the licensing process workflow through automatic notifications, tracking reviews, issuing approvals, etc.

    Solution Deployed
    Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based, secure intranet/partner extranet solution now manages the entire life cycle of the brand licensing process. Internal users can now manage all digital assets related to a character through proper categorization of all items, workflow-based review and approval of branding styles, and a powerful search and retrieval service. Licensees of Marvel brands can now develop and submit concepts and prototypes online, which are reviewed and approved using a collaborative process.

    Business Result
    Marvel and its brand licensees now have complete visibility of brand license operations, including the history of approval requests and related content. The character brand related content is now in the right place, at the right time, at the user's fingertips, with highly improved quality.

    Additional Information
    - Marvel Open World Presentation
    - Oracle WebCenter Content

    Read the article

  • List of available whitepapers as at 04 May 2010

    - by Anthony Shorten
    The following table lists the whitepapers available from My Oracle Support for any Oracle Utilities Application Framework based product (KB Id - Document Title: Contents):

    559880.1 - ConfigLab Design Guidelines: Whitepaper outlining how to design and implement a ConfigLab solution.

    560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products: Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.

    560382.1 - Performance Troubleshooting Guideline Series: A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
    - Concepts - General concepts and performance troubleshooting processes.
    - Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions.
    - Network Troubleshooting - General troubleshooting of the network with common issues and resolutions.
    - Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions.
    - Server Troubleshooting - General troubleshooting of the operating system with common issues and resolutions.
    - Database Troubleshooting - General troubleshooting of the database with common issues and resolutions.
    - Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions.

    560401.1 - Software Configuration Management Series: A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows:
    - Concepts - General concepts and introduction.
    - Environment Management - Principles and techniques for creating and managing environments.
    - Version Management - Integration of version control and version management of configuration items.
    - Release Management - Packaging configuration items into a release.
    - Distribution - Distribution and installation of releases across environments.
    - Change Management - Generic change management processes for product implementations.
    - Status Accounting - Status reporting techniques using product facilities.
    - Defect Management - Generic defect management processes for product implementations.
    - Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
    - Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
    - Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.

    773473.1 - Oracle Utilities Application Framework Security Overview: Whitepaper summarizing the security facilities in the framework. Updated for OUAF 4.0.1.

    774783.1 - LDAP Integration for Oracle Utilities Application Framework based products: Whitepaper summarizing how to integrate an external LDAP based security repository with the framework.

    789060.1 - Oracle Utilities Application Framework Integration Overview: Whitepaper summarizing the various common integration techniques used with the product (with case studies).

    799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products: Whitepaper outlining a generic process for integrating an SSO product with the framework.

    807068.1 - Oracle Utilities Application Framework Architecture Guidelines: This whitepaper outlines the different variations of architecture that can be considered. Each variation includes advice on configuration and other considerations.

    836362.1 - Batch Best Practices for Oracle Utilities Application Framework based products: This whitepaper outlines the common and best practices implemented by sites all over the world. Updated for OUAF 4.0.1.

    856854.1 - Technical Best Practices V1 Addendum: Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice.

    942074.1 - XAI Best Practices: This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. Updated for OUAF 4.0.1.

    970785.1 - Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager, used to provision user and user group security information.

    1068958.1 - Production Environment Configuration Guidelines (New!): Whitepaper outlining common production-level settings for the products.

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (Restricted) patch (see Overview of GFv2) available as part of Oracle's Commercial Support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; package-based patches were released on Sep 13, 2011.

    Release Overview

    Description:
    - SJS AS 9.1 U2 (GFv2 U2) - Patch 25 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    - GlassFish 2.1 - Patch 19 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    - GlassFish 2.1.1 - Patch 13 - File and Package-Based Patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.

    Patch Ids: This release comes in 3 different variants:

    Package-based patches with HADB
    • Solaris SPARC - [128640-27]
    • Solaris i586 - [128641-27]
    • Linux RPM - [128642-27]

    File-based patches with HADB
    • Solaris SPARC - [128643-27]
    • Solaris i586 - [128644-27]
    • Linux - [128645-27]
    • Windows - [128646-27]

    File-based patches without HADB
    • Solaris SPARC - [128647-27]
    • Solaris i586 - [128648-27]
    • Linux - [128649-27]
    • Windows - [128650-27]
    • AIX - [137916-27]

    Update Date: Nov 23, 2011

    Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

    Status: CURRENT

    Bugs Fixed in this Patch:
    • [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
    • [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
    • [12807660]: BUILD, STAGE AND INTEGRATING HADB
    • [12807643]: INTEGRATE MQ 4.4 U2 P4
    • [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
    • [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
    • [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
    • [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
    • [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
    • [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
    • [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
    • [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
    • [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
    • [12547651]: GLASSFISH DISPLAY BUG
    • [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
    • [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
    • [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
    • [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional Notes: More details about these bugs can be found at My Oracle Support.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and I've spent the last 10 years working on the Oracle GoldenGate product. So most of my posting will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate. How do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog, but here is a simple architecture:

    The two most common resolution routines are host based resolution and timestamp based resolution.

    Host based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host based resolution will work just fine.

    Timestamp based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice-versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table. Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP type applications. If your database has these features - whether it's primary keys for host based resolution, or primary keys and timestamp columns for timestamp based resolution - then your application could be a great candidate for active-active replication.

    But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user that had their data overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and foolproof. And I'll cover these in my next blog.
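    To make the timestamp-based option more concrete, here is a rough sketch of the kind of Replicat MAP clause involved. The schema, table, and column names are invented for illustration, and the exact conflict-detection-and-resolution (CDR) syntax should be verified against the GoldenGate reference for your release; the table is assumed to have a primary key and a last_mod_ts column that the application (or a trigger) refreshes on every insert and update.

-- Assumed Replicat parameter file fragment (Oracle GoldenGate CDR); names are illustrative.
MAP appdata.orders, TARGET appdata.orders,
    COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
    -- newest change wins when both sides updated or inserted the same key
    RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_ts))),
    RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMAX (last_mod_ts))),
    -- if the target row is missing, apply the incoming change anyway / drop the delete
    RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)),
    RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));

    Host-based resolution can be approximated with the same mechanism by always favouring the trusted system: for example, the Replicat that applies SystemA's changes resolves conflicts with OVERWRITE, while the Replicat that applies SystemB's changes resolves them with DISCARD.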

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script in this post will find where features are activated within your SharePoint 2010 farm.

    Problem

    Over the past few months I've gotten literally dozens of emails, blog comments, or personal requests from people asking "how do I find where a SharePoint feature has been activated?" I wrote a script to find which features are installed on your farm almost 3 years ago. There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010. The problem is that these only tell you if a feature is installed, not where it has been activated. This is especially important to know if you have multiple web applications, site collections, and/or sites.

    Solution

    The default call (no parameters) for Get-SPFeature will return all features in the farm. Many of the parameter sets accept filters for specific scopes such as web application, site collection, and site. If those are supplied, then only the enabled / activated features are returned for that filtered scope. By recursively traversing a SharePoint farm and calling Get-SPFeature at every level of the farm, you can find out which features are activated at each level. Store the results in a variable and you end up with all features that are activated at every level.

    Below is the script I came up with (slight edits for posting on the blog). With no parameters the function lists all features activated at all scopes. If you provide an Identity parameter you will find where a specific feature is activated. Note that the display name for a feature you see in the SharePoint UI rarely matches the "internal" display name; I would recommend using the feature id instead (a short usage sketch follows at the end of this post). You can download a full copy of the script by clicking on the link below.

    Note: This script is not optimized for medium to large farms. In my testing it took 1-3 minutes to recurse through my demo environment. This script is provided as-is with no warranty. Run this in a smaller dev / test environment first.

function Get-SPFeatureActivated
{
    # see full script for help info, removed for formatting
    [CmdletBinding()]
    param(
        [Parameter(position = 1, valueFromPipeline=$true)]
        [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]
        $Identity
    )#end param

    Begin
    {
        # declare empty array to hold results. Will add custom member
        # for Url to show where activated at on objects returned from Get-SPFeature.
        $results = @()

        $params = @{}
    }
    Process
    {
        if([string]::IsNullOrEmpty($Identity) -eq $false)
        {
            $params = @{Identity = $Identity
                        ErrorAction = "SilentlyContinue"
            }
        }

        # check farm features
        $results += (Get-SPFeature -Farm -Limit All @params |
                     % {Add-Member -InputObject $_ -MemberType noteproperty `
                        -Name Url -Value ([string]::Empty) -PassThru} |
                     Select-Object -Property Scope, DisplayName, Id, Url)

        # check web application features
        foreach($webApp in (Get-SPWebApplication))
        {
            $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |
                         % {Add-Member -InputObject $_ -MemberType noteproperty `
                            -Name Url -Value $webApp.Url -PassThru} |
                         Select-Object -Property Scope, DisplayName, Id, Url)

            # check site collection features in current web app
            foreach($site in ($webApp.Sites))
            {
                $results += (Get-SPFeature -Site $site -Limit All @params |
                             % {Add-Member -InputObject $_ -MemberType noteproperty `
                                -Name Url -Value $site.Url -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                $site.Dispose()

                # check site features in current site collection
                foreach($web in ($site.AllWebs))
                {
                    $results += (Get-SPFeature -Web $web -Limit All @params |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty `
                                    -Name Url -Value $web.Url -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)
                    $web.Dispose()
                }
            }
        }
    }
    End
    {
        $results
    }
} #end Get-SPFeatureActivated

    Snippet of output from Get-SPFeatureActivated

    Conclusion

    This script has been requested for a long time and I'm glad to finally have a working "clean" version. If you find any bugs or issues with the script please let me know. I'll be posting this to the TechNet Script Center after some internal review. Enjoy the script and I hope it helps with your admin / developer needs.

        -Frog Out
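    As promised above, a short usage sketch for the function. The snap-in load and the dot-source path are assumptions about how you have saved the script, and the GUID is a placeholder rather than a real feature id.

# Load the SharePoint cmdlets if running from a plain PowerShell console
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Dot-source the file containing Get-SPFeatureActivated (path is an assumption)
. .\Get-SPFeatureActivated.ps1

# List every activated feature at every scope and keep it for later review
$activated = Get-SPFeatureActivated
$activated | Export-Csv -Path .\ActivatedFeatures.csv -NoTypeInformation

# Find where one specific feature is activated (placeholder GUID)
Get-SPFeatureActivated -Identity "00000000-0000-0000-0000-000000000000"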

    Read the article
