Search Results

Search found 17754 results on 711 pages for 'field description'.

  • How to launch LOV and Date dialogs using the keyboard

    - by frank.nimphius
    Using the ADF Faces JavaScript API, developers can listen for user keyboard input in input components to filter or respond to specific characters or key combinations. The JavaScript shown below can be used with an af:clientListener tag on af:inputListOfValues or af:inputDate. At runtime, the JavaScript code determines the component type it is executed on and either opens the LOV dialog or the input date popup.

        <af:resource type="javascript">
          /**
           * function to launch dialog if cursor is in LOV or
           * input date field
           * @param evt argument to capture the AdfUIInputEvent object
           */
          function launchPopUpUsingF8(evt) {
            var component = evt.getSource();
            if (evt.getKeyCode() == AdfKeyStroke.F8_KEY) {
              //check for input LOV component
              if (component.getTypeName() == 'AdfRichInputListOfValues') {
                AdfLaunchPopupEvent.queue(component, true);
                //event is handled on the client; the server does
                //not need to be notified
                evt.cancel();
              }
              //check for input date component
              else if (component.getTypeName() == 'AdfRichInputDate') {
                //the inputDate af:popup component ID always is ::pop
                var popupClientId = component.getAbsoluteLocator() + '::pop';
                var popup = component.findComponent(popupClientId);
                var hints = {align : AdfRichPopup.ALIGN_END_AFTER,
                             alignId : component.getAbsoluteLocator()};
                popup.show(hints);
                //event is handled on the client; the server does
                //not need to be notified
                evt.cancel();
              }
            }
          }
        </af:resource>

    The af:clientListener that calls the JavaScript is added as shown below.

        <af:inputDate label="Label 1" id="id1">
          <af:clientListener method="launchPopUpUsingF8" type="keyDown"/>
        </af:inputDate>

    As you may have noticed, the call to open the popup differs between af:inputListOfValues and af:inputDate. For the list of values component, an ADF Faces AdfLaunchPopupEvent is queued with the LOV component passed as an argument. Launching the input date popup is a bit more complicated and requires you to look up the implicit popup dialog and open it manually. Because the popup is opened manually using the show() method on the af:popup component, the alignment of the dialog also needs to be handled manually. For this, the popup component is given alignment hints: the ALIGN_END_AFTER hint aligns the dialog at the end of, and below, the date component, and the alignId hint specifies the component the dialog is positioned relative to, which of course should be the input date field.
    The ADF Faces JavaScript API and how to use it are further explained in the "Using JavaScript in ADF Faces Rich Client Applications" whitepaper, available from the Oracle Technology Network (OTN): http://www.oracle.com/technetwork/developer-tools/jdev/1-2011-javascript-302460.pdf. An ADF Insider recording about JavaScript in ADF Faces can be watched here: http://download.oracle.com/otn_hosted_doc/jdeveloper/11gdemos/adf-insider-javascript/adf-insider-javascript.html

    Read the article

  • readonly keyword

    - by nmarun
    This is something new that I learned about the readonly keyword. Have a look at the following class:

        1: public class MyClass
        2: {
        3:     public string Name { get; set; }
        4:     public int Age { get; set; }
        5:
        6:     private readonly double Delta;
        7:
        8:     public MyClass()
        9:     {
       10:         Initializer();
       11:     }
       12:
       13:     public MyClass(string name = "", int age = 0)
       14:     {
       15:         Name = name;
       16:         Age = age;
       17:         Initializer();
       18:     }
       19:
       20:     private void Initializer()
       21:     {
       22:         Delta = 0.2;
       23:     }
       24: }

    I have a couple of public properties and a private readonly member. There are two constructors – one that doesn't take any parameters, and the other takes two parameters to initialize the public properties. I'm also calling the Initializer method in both constructors to initialize the readonly member. Now when I build this, the code breaks and the Error window says: "A readonly field cannot be assigned to (except in a constructor or a variable initializer)". Two things struck me after I read this message:

    - It's such a negative statement. I'd prefer something like: "A readonly field can be assigned to (or initialized) only in a constructor or through a variable initializer".
    - In my defense, I AM assigning it in a constructor (only indirectly). All I'm doing is creating a method that does it and calling that method in a constructor.

    Turns out, .NET was not 'frameworked' this way. We need to have the member initialized directly in the constructor. If you have multiple constructors, you can just use the 'this' keyword on all except the default constructor to call the default constructor. This default constructor can then initialize your readonly members. This will ensure you're not repeating the code in multiple places. A snippet of what I'm talking about can be seen below:

        1: public class Person
        2: {
        3:     public int UniqueNumber { get; set; }
        4:     public string Name { get; set; }
        5:     public int Age { get; set; }
        6:     public DateTime DateOfBirth { get; set; }
        7:     public string InvoiceNumber { get; set; }
        8:
        9:     private readonly string Alpha;
       10:     private readonly int Beta;
       11:     private readonly double Delta;
       12:     private readonly double Gamma;
       13:
       14:     public Person()
       15:     {
       16:         Alpha = "FDSA";
       17:         Beta = 2;
       18:         Delta = 3.0;
       19:         Gamma = 0.0989;
       20:     }
       21:
       22:     public Person(int uniqueNumber) : this()
       23:     {
       24:         UniqueNumber = uniqueNumber;
       25:     }
       26: }

    See the syntax in line 22 and you'll know what I'm talking about. The default constructor gets called before the one in line 22. These are known as constructor initializers, and they allow one constructor to call another. The other 'myth' I had about readonly members is that you can set their value only once. This was busted as well (I recall Adam and Jamie's show). Say you've initialized the readonly member through a variable initializer. You can overwrite this value in any of the constructors, any number of times.
        1: public class Person
        2: {
        3:     public int UniqueNumber { get; set; }
        4:     public string Name { get; set; }
        5:     public int Age { get; set; }
        6:     public DateTime DateOfBirth { get; set; }
        7:     public string InvoiceNumber { get; set; }
        8:
        9:     private readonly string Alpha = "asdf";
       10:     private readonly int Beta = 15;
       11:     private readonly double Delta = 0.077;
       12:     private readonly double Gamma = 1.0;
       13:
       14:     public Person()
       15:     {
       16:         Alpha = "FDSA";
       17:         Beta = 2;
       18:         Delta = 3.0;
       19:         Gamma = 0.0989;
       20:     }
       21:
       22:     public Person(int uniqueNumber) : this()
       23:     {
       24:         UniqueNumber = uniqueNumber;
       25:         Beta = 3;
       26:     }
       27:
       28:     public Person(string name, DateTime dob) : this()
       29:     {
       30:         Name = name;
       31:         DateOfBirth = dob;
       32:
       33:         Alpha = ";LKJ";
       34:         Gamma = 0.0898;
       35:     }
       36:
       37:     public Person(int uniqueNumber, string name, int age, DateTime dob, string invoiceNumber) : this()
       38:     {
       39:         UniqueNumber = uniqueNumber;
       40:         Name = name;
       41:         Age = age;
       42:         DateOfBirth = dob;
       43:         InvoiceNumber = invoiceNumber;
       44:
       45:         Alpha = "QWER";
       46:         Beta = 5;
       47:         Delta = 1.0;
       48:         Gamma = 0.0;
       49:     }
       50: }

    In the above example, every constructor overwrites the values for the readonly members. This is perfectly valid. There is a possibility that, based on the way the object is instantiated, the readonly member will have a different value. Well, that's all I have for today – and read this, as it's on a related topic.
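    To close the loop on the original MyClass example: one way to make it compile, following the constructor initializer approach above, is to assign the readonly member directly in the default constructor and chain the other constructor to it. A minimal sketch (not from the original post):

        public class MyClass
        {
            public string Name { get; set; }
            public int Age { get; set; }

            private readonly double Delta;

            public MyClass()
            {
                // a readonly field must be assigned directly in a
                // constructor (or through a variable initializer)
                Delta = 0.2;
            }

            public MyClass(string name = "", int age = 0) : this()
            {
                Name = name;
                Age = age;
            }
        }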

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

    Downloading and installing TeamCity

    TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

    1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option, which worked flawlessly.

    2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.

    3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, for example if you've had to install the server console on a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.

    4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup.

    Putting your database in source control using SQL Source Control

    5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.

    6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local Subversion repository for you behind the scenes.

    7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.

    8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right-click on the text to the right of 'Linked to' – in my example, the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

    Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.

    9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.

    10. Click on Create Build Configuration, and name it something like "Integration build".

    11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.

    12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.

    13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment

    14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.

    15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or PowerShell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.

    16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe

    17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.

    18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2:

        /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose

    Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.

    19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.

    20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being well, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.

    21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

    That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or by emailing me directly using the email address below.

    Technorati Tags: SQL Server

    Read the article

  • Creating a podcast feed for iTunes & BlackBerry users using WCF Syndication

    - by brian_ritchie
    In my previous post, I showed how to create an RSS feed using WCF Syndication. Next, I'll show how to add the additional tags needed to turn an RSS feed into an iTunes podcast.

    A podcast is merely an RSS feed with some special characteristics:

    - iTunes RSS tags. These are additional tags beyond the standard RSS spec. Apple has a good page on the requirements.
    - Audio file enclosure. This is a link to the audio file (such as an mp3) hosted by your site. Apple doesn't host the audio; they just read the metadata from the RSS feed into their system.

    The SyndicationFeed class supports both AttributeExtensions & ElementExtensions to add custom tags to the RSS feeds. A couple of points of interest in the code below:

    - The imageUrl provides the album cover for iTunes (170px × 170px)
    - Each SyndicationItem corresponds to an audio episode in your podcast

    So, here's the code:

        XNamespace itunesNS = "http://www.itunes.com/dtds/podcast-1.0.dtd";
        string prefix = "itunes";

        var feed = new SyndicationFeed(title, description, new Uri(link));
        feed.Categories.Add(new SyndicationCategory(category));
        feed.AttributeExtensions.Add(new XmlQualifiedName(prefix,
            "http://www.w3.org/2000/xmlns/"), itunesNS.NamespaceName);
        feed.Language = "en-us";
        feed.Copyright = new TextSyndicationContent(DateTime.Now.Year + " " + ownerName);
        feed.ImageUrl = new Uri(imageUrl);
        feed.LastUpdatedTime = DateTime.Now;
        feed.Authors.Add(new SyndicationPerson() { Name = ownerName, Email = ownerEmail });

        var extensions = feed.ElementExtensions;
        extensions.Add(new XElement(itunesNS + "subtitle", subTitle).CreateReader());
        extensions.Add(new XElement(itunesNS + "image",
            new XAttribute("href", imageUrl)).CreateReader());
        extensions.Add(new XElement(itunesNS + "author", ownerName).CreateReader());
        extensions.Add(new XElement(itunesNS + "summary", description).CreateReader());
        extensions.Add(new XElement(itunesNS + "category",
            new XAttribute("text", category),
            new XElement(itunesNS + "category",
                new XAttribute("text", subCategory))).CreateReader());
        extensions.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
        extensions.Add(new XDocument(
            new XElement(itunesNS + "owner",
                new XElement(itunesNS + "name", ownerName),
                new XElement(itunesNS + "email", ownerEmail))).CreateReader());

        var feedItems = new List<SyndicationItem>();
        foreach (var i in Items)
        {
            var item = new SyndicationItem(i.title, null, new Uri(link));
            item.Summary = new TextSyndicationContent(i.summary);
            item.Id = i.id;
            if (i.publishedDate != null)
                item.PublishDate = (DateTimeOffset)i.publishedDate;
            item.Links.Add(new SyndicationLink() {
                Title = i.title, Uri = new Uri(link),
                Length = i.size, MediaType = i.mediaType });

            var itemExt = item.ElementExtensions;
            itemExt.Add(new XElement(itunesNS + "subtitle", i.subTitle).CreateReader());
            itemExt.Add(new XElement(itunesNS + "summary", i.summary).CreateReader());
            itemExt.Add(new XElement(itunesNS + "duration",
                string.Format("{0}:{1:00}:{2:00}",
                    i.duration.Hours, i.duration.Minutes, i.duration.Seconds)
                ).CreateReader());
            itemExt.Add(new XElement(itunesNS + "keywords", i.keywords).CreateReader());
            itemExt.Add(new XElement(itunesNS + "explicit", "no").CreateReader());
            itemExt.Add(new XElement("enclosure", new XAttribute("url", i.url),
                new XAttribute("length", i.size), new XAttribute("type", i.mediaType)));
            feedItems.Add(item);
        }

        feed.Items = feedItems;

    If you're hosting your podcast feed within an MVC project, you can use the code from my previous post to stream it.
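    If you just want to see the resulting XML while testing, a minimal sketch of serializing the feed as RSS 2.0 with the built-in WCF Syndication formatter (the file name here is an arbitrary choice) looks like this:

        using System.Xml;
        using System.ServiceModel.Syndication;

        // write the SyndicationFeed, including the iTunes element
        // extensions, out as an RSS 2.0 document
        using (XmlWriter writer = XmlWriter.Create("podcast.xml"))
        {
            new Rss20FeedFormatter(feed).WriteTo(writer);
        }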
    Once you have created your feed, you can use the Feed Validator tool to make sure it is up to spec. Or you can use iTunes:

    1. Launch iTunes.
    2. In the Advanced menu, select Subscribe to Podcast.
    3. Enter your feed URL in the text box and click OK.

    After you've verified your feed is solid and good to go, you can submit it to iTunes:

    1. Launch iTunes.
    2. In the left navigation column, click on iTunes Store to open the store.
    3. Once the store loads, click on Podcasts along the top navigation bar to go to the Podcasts page.
    4. In the right column of the Podcasts page, click on the Submit a Podcast link.
    5. Follow the instructions on the Submit a Podcast page.

    Here are the full instructions. Once they have approved your podcast, it will be available within iTunes.

    RIM has also gotten into the podcasting business...which is great for BlackBerry users. They accept the same enhanced-RSS feed that iTunes uses, so just create an account with them and submit the feed's URL. It goes through a similar approval process to iTunes. BlackBerry users must be on BlackBerry 6 OS or download the Podcast App from App World.

    In my next post, I'll show how to build the podcast feed dynamically from the ID3 tags within the MP3 files.

    Read the article

  • Feedback Filtration – Processing Negative Comments for Positive Gains

    - by D'Arcy Lussier
    After doing 7 conferences, 5 code camps, and countless user group events, I feel that this is a post I need to write. I actually toyed with other names for this post, however those names would just lend themselves to the type of behaviour I want people to avoid – the reactionary, emotional response that speaks to some deeper issue beyond immediate facts and context.

    Humans are incredibly complex creatures. We're also emotional, which serves us well in certain situations but can hinder us in others. Those of us in leadership build up a thick skin because we tend to encounter those reactionary, emotional responses more often, and we're held to a higher standard because of our positions. While we could react with emotion ourselves, as the saying goes – fighting fire with fire just makes a bigger fire. So in this post I'll share my thought process for dealing with negative feedback/comments and how you can still get value from them.

    The Thought Process

    Let's take a real-world example. This week I held the Prairie IT Pro & Dev Con event. We've gotten a lot of session feedback already, most of it overwhelmingly positive. But some not so much – and some to an extreme I rarely see but isn't entirely surprising to me. So here's the example from a person we'll refer to as Mr. Horrible:

    How was the speaker? Horrible! Worst speaker ever!
    Did the session meet your expectations? Hard to tell, speaker ruined it.
    Other Comments: DO NOT bring this speaker back! He was at this conference last year and I hoped enough negative feedback would have taught you to not bring him back...obviously not...I will not return to this conference next year if this speaker is brought back.

    Now those are very strong words. "Worst speaker ever!" "Speaker ruined it." "I will not return to this conference next year if the speaker is brought back." The speakers I invite to speak at my conference are not just presenters but friends and colleagues. When I see this, my initial reaction is of course very emotional: I get defensive, I get angry, I get offended. So that's where the process kicks in.

    Step 1 – Take a Deep Breath

    Take a deep breath, calm down, and walk away from the keyboard. I didn't do that recently during an email convo between some colleagues and it ended up in my reacting emotionally on Twitter – did I mention those colleagues follow my Twitter feed? Yes, I ate some crow. Ok, now that we're calm, let's move on to step 2.

    Step 2 – Strip off the Emotion

    We need to take off the emotion that people wrap their words in and identify the root issues. For instance, if I see: "I hated this session, the presenter was horrible! He spoke so fast I couldn't make out what he was saying!" then I drop the personal emoting ("I hated...") and the personal attack ("the presenter was horrible") and focus on the real issue this person had – that the speaker was talking too fast. Now we have a root cause of the displeasure. However, we're also dealing with humans who are all very different. Before I call up the speaker to talk about his speaking pace, I need to do some other things first. Back to our Mr. Horrible example: I don't really have much to go on. There are no details of how the speaker "ruined" the session or why he's the "worst speaker ever". In this case, the next step is crucial.

    Step 3 – Validate the Feedback

    When I tell people that we really like getting feedback for the sessions, I really really mean it. Not just because we want to hear what individuals have to say, but also because we want to know what the group thought. When a piece of negative feedback comes in, I validate it against the group. So with the speaker Mr. Horrible commented on, I go to the feedback and look at other people's responses:

    2 x Excellent
    1 x Alright
    1 x Not Great
    1 x Horrible (our feedback guy)

    That's interesting; it's a bit all over the board. If we look at the comments more, we find that the people who rated the speaker excellent liked the presentation style and found the content valuable. The one guy who said "Not Great" even commented that there wasn't anything really wrong with the presentation, he just wasn't excited about it. In that light, I can try to make a few assumptions:

    - Mr. Horrible didn't like the speaker's presentation style
    - Mr. Horrible was expecting something else that wasn't communicated properly in the session description
    - Mr. Horrible, for whatever reason, just didn't like this presenter

    Now, if the feedback was overwhelmingly negative, there's a different pattern – one that validates the negative feedback. Regardless, I never take something at face value. Even if I see really good feedback, I never get too happy until I see that there's a group trend towards the positive.

    Step 4 – Action Plan

    Once I've validated the feedback, I need to come up with an action plan around it. Let's go back to the other example I gave – the one with the speaker going too fast. I went and looked at the feedback and sure enough, other people commented that the speaker had spoken too quickly. Now I can go back to the speaker and let him know so he can get better. But what if nobody else complained about it? I'd still mention it to the speaker, but obviously one person's opinion needs to be weighed as such. When we did PrDC Winnipeg in 2011, I surveyed the attendees about the food. Everyone raved about it...except one person. Am I going to change the menu next time for that one person while everyone else loved it? Of course not. There's a saying – a sure way to fail is to try to please everyone.

    Let's look at the Mr. Horrible example. What can I communicate to the speaker with such limited information provided in the feedback from Mr. Horrible? Well, looking at the group's feedback, I can make a few suggestions:

    - Ensure that people understand from the session description the style of the talk
    - Ensure that people understand the level of detail/complexity of the talk and what prerequisite knowledge they should have

    I'm looking at it as possibly Mr. Horrible assumed a much more advanced talk and was disappointed, while the people who left positive feedback – who, from their comments, suggested this was all new to them – were thrilled with the session level.

    Step 5 – Follow Up

    For some feedback, I follow up personally. Especially with negative or constructive feedback, it's important to let the person know you heard them and are making changes because of their comments. Even if their comments were emotionally charged and overtly negative, it's still important to reach out personally and professionally. When you remove the emotion, negative comments can be the best feedback you get. Also, people have bad days. We've all had one of "those days" where we talked more sternly than normal to someone, or got angry at something we'd normally shrug off. We have various stresses in our lives and sometimes they seep out in odd ways. I always try to give some benefit of the doubt, and re-evaluate my view of the person after they've responded to my communication.

    But there is such a thing as garbage feedback. What Mr. Horrible wrote is garbage. It's mean-spirited. It's hateful. It provides nothing constructive at all. And a tell-tale sign that feedback is garbage – the person didn't leave their name even though there was a field for it.

    Step 6 – Delete It

    Feedback must be processed in its raw form, and the end products should drive improvements. But once you've figured out what those things are, you shouldn't leave raw feedback lying around. They are snapshots in time that, taken alone, can be damaging. Also, you should never rest on past praise.

    In a future blog post, I'm going to talk about how we can provide great feedback that, even when it's critical, can still be constructive.

    Read the article

  • C#/.NET Little Wonders: Using 'default' to Get Default Values

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Today's little wonder is another of those small items that can help a lot in certain situations, especially when writing generics. In particular, it is useful in determining what the default value of a given type would be.

    The Problem: what's the default value for a generic type?

    There comes a time when you're writing generic code where you may want to set an item of a given generic type. Seems simple enough, right? Well, let's see!

    Let's say we want to query a Dictionary<TKey, TValue> for a given key and get back the value, but if the key doesn't exist, we'd like a default value instead of throwing an exception. So, for example, we might have the following dictionary defined:

        var lookup = new Dictionary<int, string>
        {
            { 1, "Apple" },
            { 2, "Orange" },
            { 3, "Banana" },
            { 4, "Pear" },
            { 9, "Peach" }
        };

    And using those definitions, perhaps we want to do something like this:

        // assume a default
        string value = "Unknown";

        // if the item exists in dictionary, get its value
        if (lookup.ContainsKey(5))
        {
            value = lookup[5];
        }

    But that's inefficient, because then we're double-hashing (once for ContainsKey() and once for the indexer). Well, to avoid the double-hashing, we could use TryGetValue() instead:

        string value;

        // if key exists, value will be put in value; if not, default it
        if (!lookup.TryGetValue(5, out value))
        {
            value = "Unknown";
        }

    But the "flow" of using TryGetValue() can get clunky at times when you just want to assign either the value or a default to a variable. Essentially it's 3-ish lines (depending on formatting) for 1 assignment. So perhaps instead we'd like to write an extension method to support a cleaner interface that will return a default if the item isn't found:

        public static class DictionaryExtensions
        {
            public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
                TKey key, TValue defaultIfNotFound)
            {
                TValue value;

                // value will be the result or the default for TValue
                if (!dict.TryGetValue(key, out value))
                {
                    value = defaultIfNotFound;
                }

                return value;
            }
        }

    So this creates an extension method on Dictionary<TKey, TValue> that will attempt to get a value using the given key, and will return defaultIfNotFound as a stand-in if the key does not exist.

    This code compiles fine, but what if we would like to go one step further and allow callers to either specify a default if not found, or accept the default for the type? Obviously, we could overload the method to take the default or not, but that would be duplicated code and a bit heavy for just specifying a default. It seems reasonable that we could set the not-found value to be either the default for the type, or the specified value. So what if we defaulted the type to null?

        public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
            TKey key, TValue defaultIfNotFound = null) // ...

    No, this won't work, because only reference types (and Nullable<T> wrapped types, due to syntactical sugar) can be assigned null. So what about calling the parameterless constructor?

        public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
            TKey key, TValue defaultIfNotFound = new TValue()) // ...

    No, this won't work either, for several reasons. First, we'd expect a reference type to return null, not an "empty" instance. Secondly, not all reference types have a parameterless constructor (string, for example, does not). And finally, a constructor cannot be determined at compile-time, while default values can.

    The Solution: default(T) – returns the default value for type T

    Many of us know the default keyword for its use in switch statements as the default case. But it has another use as well: it can return us the default value for a given type. And since it generates the same defaults that default field initialization uses, it can be determined at compile-time as well. For example:

        var x = default(int);      // x is 0
        var y = default(bool);     // y is false
        var z = default(string);   // z is null
        var t = default(TimeSpan); // t is a TimeSpan with Ticks == 0
        var n = default(int?);     // n is a Nullable<int> with HasValue == false

    Notice that for numeric types the default is 0, and for reference types the default is null. In addition, for struct types, the value is a default-constructed struct – which simply means a struct where every field has its default value (hence 0 Ticks for TimeSpan, etc.). So using this, we could modify our code to this:

        public static class DictionaryExtensions
        {
            public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict,
                TKey key, TValue defaultIfNotFound = default(TValue))
            {
                TValue value;

                // value will be the result or the default for TValue
                if (!dict.TryGetValue(key, out value))
                {
                    value = defaultIfNotFound;
                }

                return value;
            }
        }

    Now, if defaultIfNotFound is unspecified, it will use default(TValue), which will be the default value for whatever value type the dictionary holds. So let's consider how we could use this:

        lookup.GetValueOrDefault(1);            // returns "Apple"
        lookup.GetValueOrDefault(5);            // returns null
        lookup.GetValueOrDefault(5, "Unknown"); // returns "Unknown"

    Again, do not confuse a parameterless constructor with the default value for a type. Remember that the default value for any type is the compile-time default for any instance of that type (0 for numeric types, false for bool, null for reference types, and all fields defaulted for structs). Consider the difference:

        // both zero
        int i1 = default(int);
        int i2 = new int();

        // both "zeroed" structs
        var dt1 = default(DateTime);
        var dt2 = new DateTime();

        // sb1 is null, sb2 is an "empty" string builder
        var sb1 = default(StringBuilder);
        var sb2 = new StringBuilder();

    So in the above code, notice that the value types all resolve the same whether using default or parameterless construction. This is because a value type is never null (even Nullable<T> wrapped types are never "null" in a reference sense); they will just by default contain fields with all default values. However, for reference types, the default is null and not a constructed instance. Also it should be noted that not all classes have parameterless constructors (string, for instance, doesn't have one – and doesn't need one).
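    As a quick recap, here is a minimal sketch of how default(TValue) plays out when the dictionary holds a value type (the ages dictionary is a hypothetical example, not from the post, and assumes the GetValueOrDefault extension above):

        var ages = new Dictionary<string, int>
        {
            { "Alice", 30 },
            { "Bob", 25 }
        };

        // key exists: returns the stored value
        int a = ages.GetValueOrDefault("Alice");       // 30

        // key missing: returns default(int), which is 0
        int b = ages.GetValueOrDefault("Charlie");     // 0

        // key missing, with an explicit fallback supplied
        int c = ages.GetValueOrDefault("Charlie", -1); // -1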
    Summary

    Whenever you need to get the default value for a type, especially a generic type, consider using the default keyword. This handy keyword will give you the default value for the given type at compile-time, which can then be used for initialization, optional parameters, and more.

    Technorati Tags: C#,CSharp,.NET,Little Wonders,default

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, and business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline:

    • Taking the training wheels off: Accelerating the Business with Oracle IAM – the explosive growth in expectations for IAM infrastructure, and the business cases they support, to gain investment in new security programs.
    • "Necessity is the mother of invention": Technical solutions developed in the field – well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations.
    • The Art & Science of Performance Tuning of Oracle IAM 11gR2 – real-world examples of performance tuning with Oracle IAM.
    • Nowhere to go but up: Extending the benefits of accelerated IAM – anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM.

    Let's get started... by talking about the changing dynamics driving these discussions. Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons, or to import or update user accounts and attributes. Further, IT organizations operate as shared services, with SLAs at the levels their "clients" expect of telephone carriers. Workers are moved in and out of roles on a weekly, daily, or even hourly basis, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. Many of the assumptions of asynchronous systems and batched updates are no longer adequate, and the number and types of users are growing.

    When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work... who knew how to get you access to the key systems to get your job done. Today everyone is expected to do more with less; the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes that have automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity.

    As the IT industry's computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly. Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted on the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler... logging into custom desktop apps to request approvals, or going through email- or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application... i.e. to feel like the application they are using.

    Citizen- and customer-facing applications have evolved from everywhere: custom applications, 3rd party tools, and systems merged in from acquired entities or 3rd party OEM products resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, and so on. Often the designers/developers are no longer accessible and the documentation is limited. Bringing together the underlying directories to scale for growth and improve user experience is critical for revenue... but also for operations.

    Job functions are more dynamic... take the Olympics, for example. Endless organizations – from corporations broadcasting, endorsing, or marketing through the event, to non-profit athletic foundations and public/government entities for athletes and public safety – all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans. IAM is expected to enable teams to spin up, enable new applications, protect privacy, and secure critical infrastructure. Then it needs to be disabled just as quickly as users go back to their previous responsibilities.

    On a more technical level, businesses today need optimized system directories and tuning guidelines and parameters. They need to make the right choices (such as virtual directories) and assess and choose the correct architectural patterns (virtual, direct, or replicated; centralized, virtualized, or distributed), along with the appropriate tuning.

    Today's business organizations have very complex, heterogeneous enterprises that contain diverse and multifaceted information. In today's ever-changing global landscape, the strategic end goal for businesses in challenging times is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and the number of users granted access to them, to grow exponentially. Businesses need to deploy an IAM system that can account for the demands for authentication and authorization across these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services. Access management needs to handle traditional web-based access as well as new innovations around mobile, and to address insufficient governance processes which can lead to rogue identity accounts, which can then become a source of vulnerabilities within a business's identity platform. Risk-based decisions present their own challenge: an adaptive risk model must make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted-relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and other organizational processes will languish.

    A single server location presents not only network concerns for a distributed user base, but also identity challenges. The network risks center on the latency of the long trip that the traffic has to take; the availability risk is that if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management solutions.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn't just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: Sort and Hash Match.

    The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a hashing function on each row as it comes in, and then looks to see if it's created a hash it's seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it's gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one's a bit different. This post is going to look at how aggregate functions work, which ties nicely into this month's T-SQL Tuesday. I've frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can't apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I'm handed a pile of cards and asked to count how many cards there are in each suit, it's going to help if the cards are already ordered. Suppose I'm playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I've got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don't have any more Hearts in my hand.

    Alternatively, if the pile of cards is not sorted, I won't know how many Hearts I have until I've looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I've finished making all those piles, I can count how many there are in each. Because I don't know any of the final numbers until I've seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn't blocking. But this one is. They're essentially doing a similar operation, applying a hash function to some data and seeing if the set of values has been seen before, but this time it needs more than the mere existence of a new set of values: it needs to consider how many of them there are.

    A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you'll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.
    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I'm doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I'm sure you've all read my good friend Paul White (@sql_kiwi)'s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let's take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it's described as blocking. The definitive article on performance tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I've just shown you that the Aggregate operations used by the Query Optimizer are not always blocking, but the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS performance tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.)

    A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a unique index on it, it would be able to skip the aggregate function completely and just insert the value 1. This is a shortcut I wouldn't be expecting from SSIS, but certainly the Stream behaviour would be nice.

    Unfortunately, it's not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It's not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It's a physical operation. When you write T-SQL and ask for an aggregation to be done, it's a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you're telling the system that you want a generic aggregation, one that will have to work with whatever data is passed in.

    I'm not saying that it wouldn't be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I'll make one of those, and publish it on my blog. I've done it before with a Script Component, but as Script Components are single-use, I was able to handle the data knowing everything about my data flow already.
    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what's going on behind the scenes. Whether an operation is blocking or not is extremely relevant to performance, and it's not always obvious from the surface. In a future post, I'll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot's Script Components and Custom Components as examples. When I get that sorted, I'll make a Stream Aggregate component available for download.

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here.  In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows. and The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow. But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained about the fact that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it, and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties. The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge, I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards are not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match. Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values have been seen before, but before, it needs more information than the mere existence of a new set of values, it needs to consider how many of them there are. A lot is dependent here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this. 
In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straight-forward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply. But let’s take a look at SQL Server Integration Services. SSIS gives us a Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is it always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component. And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things). A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely, and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice. Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation, that will have to work with whatever data is passed in. I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those, and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already. 
As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

PE files

.NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, a .NET executable had to load like any other executable, and had to execute native code to start the CLR, which would then read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It is still part of the spec, though, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

Structure of a PE file

Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code, .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit.

In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies a location as an offset relative to where the image is loaded in memory, rather than as an offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs that are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.

PE headers

As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly, with the bytes that I will refer to in this post highlighted. Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram.

The first 64 bytes of every file make up the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80).
Between the DOS header & PE signature is the DOS stub - a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on.

The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable. It is followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll.

Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of these fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies whether this is a GUI or console app - 0x02 for a GUI app, 0x03 for a console app.

Data directories & section headers

After the NT-specific fields come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address tables are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins.

After the data directories come the section headers, one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections:

.text: base RVA 0x2000, file offset 0x200
.rsrc: base RVA 0x4000, file offset 0xa00
.reloc: base RVA 0x6000, file offset 0x1000

The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page of the file's right-click Properties dialog, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub.

What about the CLR?

As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly.

Conclusion

To summarize, the important data in the first 512 bytes of a file is:

DOS header. This contains a pointer to the PE signature.
DOS stub, which we'll be looking at in a later post.
PE signature.
PE file header (aka COFF header). This specifies whether the file is an exe or a dll.
PE standard fields. These specify whether the file is AnyCPU/32-bit or 64-bit.
PE NT-specific fields. These specify what type of app this is, if it is an app.
Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section.
Section headers. These are used to map between RVA and file offset. The important one is .text, which is where all the CLR data is stored.

In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.

    Read the article

  • Unable to enable wireless on a Vostro 2520

    - by Joe
I have a Vostro 2520 and am not sure how to enable wireless on my machine. The details are given below; I would appreciate any pointers to resolving this issue.

lsmod returns:

Module Size Used by ath9k 132390 0 ath9k_common 14053 1 ath9k ath9k_hw 411151 2 ath9k,ath9k_common ath 24067 3 ath9k,ath9k_common,ath9k_hw b43 365785 0 mac80211 506816 2 ath9k,b43 cfg80211 205544 4 ath9k,ath,b43,mac80211 bcma 26696 1 b43 ssb 52752 1 b43 ndiswrapper 282628 0 ums_realtek 18248 0 usb_storage 49198 1 ums_realtek uas 18180 0 snd_hda_codec_hdmi 32474 1 snd_hda_codec_cirrus 24002 1 joydev 17693 0 parport_pc 32866 0 ppdev 17113 0 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 10 rfcomm,bnep psmouse 97362 0 dell_wmi 12681 0 sparse_keymap 13890 1 dell_wmi snd_hda_intel 33773 3 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_cirrus,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq wmi 19256 1 dell_wmi snd 78855 16 snd_hda_codec_hdmi,snd_hda_codec_cirrus,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device mac_hid 13253 0 i915 473240 3 drm_kms_helper 46978 1 i915 uvcvideo 72627 0 drm 242038 4 i915,drm_kms_helper videodev 98259 1 uvcvideo soundcore 15091 1 snd dell_laptop 18119 0 dcdbas 14490 1 dell_laptop i2c_algo_bit 13423 1 i915 v4l2_compat_ioctl32 17128 1 videodev snd_page_alloc 18529 2 snd_hda_intel,snd_pcm video 19596 1 i915 serio_raw 13211 0 mei 41616 0 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp r8169 62099 0

sudo lshw -class network:

*-network UNCLAIMED description: Network controller product: Broadcom Corporation vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:07:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:f7c00000-f7c07fff

*-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 07 serial: 78:45:c4:a3:aa:65 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl8168e-3_0.0.4 03/27/12 ip=192.168.1.5 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:e000(size=256) memory:f0004000-f0004fff memory:f0000000-f0003fff

rfkill list all:

0: dell-wifi: Wireless LAN
Soft blocked: yes
Hard blocked: yes
1: dell-bluetooth: Bluetooth
Soft blocked: yes
Hard blocked: yes

Output of lspci:

00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09)
00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)
00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4)
00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4)
00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4)
00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04)
07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01)
09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07)

Output of lspci -v:

00:00.0 Host bridge: Intel Corporation Ivy Bridge DRAM Controller (rev 09) Subsystem: Dell Device 0558 Flags: bus master, fast devsel, latency 0 Capabilities: <access denied> Kernel driver in use: agpgart-intel

00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) (prog-if 00 [VGA controller]) Subsystem: Dell Device 0558 Flags: bus master, fast devsel, latency 0, IRQ 43 Memory at f7800000 (64-bit, non-prefetchable) [size=4M] Memory at e0000000 (64-bit, prefetchable) [size=256M] I/O ports at f000 [size=64] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: i915

00:16.0 Communication controller: Intel Corporation Panther Point MEI Controller #1 (rev 04) Subsystem: Dell Device 0558 Flags: bus master, fast devsel, latency 0, IRQ 42 Memory at f7d0a000 (64-bit, non-prefetchable) [size=16] Capabilities: <access denied> Kernel driver in use: mei Kernel modules: mei

00:1a.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #2 (rev 04) (prog-if 20 [EHCI]) Subsystem: Dell Device 0558 Flags: bus master, medium devsel, latency 0, IRQ 16 Memory at f7d08000 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd

00:1b.0 Audio device: Intel Corporation Panther Point High Definition Audio Controller (rev 04) Subsystem: Dell Device 0558 Flags: bus master, fast devsel, latency 0, IRQ 44 Memory at f7d00000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel

00:1c.0 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 1 (rev c4) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=04, subordinate=04, sec-latency=0 Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp

00:1c.3 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 4 (rev c4) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=07, subordinate=07, sec-latency=0 Memory behind bridge: f7c00000-f7cfffff Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp

00:1c.5 PCI bridge: Intel Corporation Panther Point PCI Express Root Port 6 (rev c4) (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=09, subordinate=09, sec-latency=0 I/O behind bridge: 0000e000-0000efff Prefetchable memory behind bridge: 00000000f0000000-00000000f00fffff Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp

00:1d.0 USB controller: Intel Corporation Panther Point USB Enhanced Host Controller #1 (rev 04) (prog-if 20 [EHCI]) Subsystem: Dell Device 0558 Flags: bus master, medium devsel, latency 0, IRQ 23 Memory at f7d07000 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd

00:1f.0 ISA bridge: Intel Corporation Panther Point LPC Controller (rev 04) Subsystem: Dell Device 0558 Flags: bus master, medium devsel, latency 0 Capabilities: <access denied> Kernel modules: iTCO_wdt

00:1f.2 SATA controller: Intel Corporation Panther Point 6 port SATA Controller [AHCI mode] (rev 04) (prog-if 01 [AHCI 1.0]) Subsystem: Dell Device 0558 Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 40 I/O ports at f0b0 [size=8] I/O ports at f0a0 [size=4] I/O ports at f090 [size=8] I/O ports at f080 [size=4] I/O ports at f060 [size=32] Memory at f7d06000 (32-bit, non-prefetchable) [size=2K] Capabilities: <access denied> Kernel driver in use: ahci

00:1f.3 SMBus: Intel Corporation Panther Point SMBus Controller (rev 04) Subsystem: Dell Device 0558 Flags: medium devsel, IRQ 11 Memory at f7d05000 (64-bit, non-prefetchable) [size=256] I/O ports at f040 [size=32] Kernel modules: i2c-i801

07:00.0 Network controller: Broadcom Corporation Device 4365 (rev 01) Subsystem: Dell Device 0016 Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at f7c00000 (64-bit, non-prefetchable) [size=32K] Capabilities: <access denied>

09:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07) Subsystem: Dell Device 0558 Flags: bus master, fast devsel, latency 0, IRQ 41 I/O ports at e000 [size=256] Memory at f0004000 (64-bit, prefetchable) [size=4K] Memory at f0000000 (64-bit, prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: r8169 Kernel modules: r8169

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression
The UDF used in the blog does a fantastic job – it scans the entire HTML text and removes all the HTML tags, keeping only valid text data without HTML tags. This is one of the quite commonly requested tasks many developers have to face every day.

De-fragmentation of Database at Operating System to Improve Performance
The operating system skips the MDF file while defragging the entire filesystem. It is absolutely fine and there is no impact of the same on performance. Read the entire blog post for my conversation with our network engineers.

Delay Function – WAITFOR clause – Delay Execution of Commands
How do you delay execution of commands in SQL Server – of course, by using the WAITFOR keyword. In this blog post, I explain the same with the help of a T-SQL script.

Find Length of Text Field
To measure the length of TEXT fields, the function is DATALENGTH(textfield). LEN will not work for text fields. As of SQL Server 2005, developers should migrate all the text fields to VARCHAR(MAX) as that is the way forward.

Retrieve Current Date Time in SQL Server
There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}.

Explanation and Comparison of NULLIF and ISNULL
An interesting observation is that NULLIF returns NULL if its comparison is successful, whereas ISNULL returns not-NULL if its comparison is successful. In one way they are opposite to each other. Here is my question to you: how would you create an infinite loop using NULLIF and ISNULL? Is this even possible? (A short sketch covering these and a few of the other functions above follows at the end of this list.)

2008

Introduction to SERVERPROPERTY and example
SERVERPROPERTY is a very interesting system function. It returns many of the system values. I use it very frequently to get different server values like Server Collation, Server Name, etc.

SQL Server Start Time
We can use a DMV to find out the start time of SQL Server in version 2008 and later. In this blog you can see how you can do the same.

Find Current Identity of Table
Many times we need to know the current identity value of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure out the current identity.

Create Check Constraint on Column
Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a table column follow rules other than just creating a constraint. I suggest a constraint is a very useful concept, and every SQL developer should pay good attention to this subject.

2009

List Schema Name and Table Name for Database
This is one of those blog posts where I straightforwardly display a script – the kind of blog post I still love to read and write.

Clustered Index on Separate Drive From Table Location
A table devoid of a primary key index is called a heap, and here data is not arranged in a particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. If we put a clustered index on it, then the order will be forced by that index and the data will be stored in that particular order.

Understanding Table Hints with Examples
Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. The hints override any execution plan the query optimizer might select for a query.

2010

Data Pages in Buffer Pool – Data Stored in Memory Cache
One of my earlier articles, which I still read many times and point developers to read again. It is clear from the resultset that when more than one index is used, data pages related to both or all of the indexes are stored in the Memory Cache separately.

TRANSACTION, DML and Schema Locks
Can you create a situation where you can see a Schema Lock? Well, this is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I have demonstrated the situation where we can see the schema lock in a database.

2011

Solution – Puzzle – Statistics are not updated but are Created Once
In this example I have created the following situation: create a table; insert 1,000 records; check the statistics; now insert 10 times more records (10,000); check the statistics – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics for the database are TRUE. I have requested two things in the example: 1) Why is this happening? 2) How do you fix this issue?

Selecting Domain from Email Address
This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address.

Solution – Generating Zero Without using Any Numbers in T-SQL
How do you get the digit zero without using any digits? This is indeed a very interesting question, and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read this post for the solution.

2012

Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function
In simple words – SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value: the integer returned is the number of characters in the SOUNDEX values that are the same.

Read Only Files and SQL Server Management Studio (SSMS)
I have come across a very interesting feature in SSMS related to "Read Only" files. I believe it is a little-known feature as well, so I decided to write a blog about the same.

Identifying Column Data Type of uniqueidentifier without Querying System Tables
How do I know if a table has a uniqueidentifier column, and what its value is, without using any DMVs or system catalogues? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.

Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue
An interesting question – "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user-created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user-created tables and other objects. Can you help me fix it?"

Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video
Here is an interesting, small 60-second video on how to import a CSV file into a database.

ColumnStore Index – Batch Mode vs Row Mode
Here is the logic behind when a Columnstore Index uses Batch Mode and when it uses Row Mode. A batch typically represents about 1000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.

Follow up – Usage of $rowguid and $IDENTITY
This is an excellent follow-up to my earlier blog post where I explain where to use $rowguid and $identity. If you do not know the difference between them, this is a blog with a script example.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
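Since several of the items above are one-line function facts, here is a single runnable T-SQL sketch pulling a few of them together (the values are my own illustrative picks, not from the original posts):

-- WAITFOR: delay execution of the next statement by two seconds
WAITFOR DELAY '00:00:02';

-- Current date/time, three equivalent ways
SELECT CURRENT_TIMESTAMP AS Ts1, GETDATE() AS Ts2, {fn NOW()} AS Ts3;

-- LEN ignores trailing spaces; DATALENGTH counts the bytes actually stored
DECLARE @s VARCHAR(30) = 'SQLAuthority   ';
SELECT LEN(@s) AS LenResult, DATALENGTH(@s) AS DataLengthResult; -- 12 and 15

-- NULLIF returns NULL when its comparison succeeds;
-- ISNULL returns its second argument only when the first is NULL
SELECT NULLIF(10, 10) AS NullIfEqual,        -- NULL
       NULLIF(10, 20) AS FirstWhenDifferent, -- 10
       ISNULL(NULL, 10) AS SecondWhenNull;   -- 10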

    Read the article

  • xml file save/read error (making a highscore system for XNA game)

    - by Eddy
I get an error after I write the player name to the file for the second or third time (An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll. Additional information: There is an error in XML document (18, 17).) It stops in the highscores load method, on the line data = (HighScoreData)serializer.Deserialize(stream);. The problem is that somehow it adds an additional ">" at the end of my .dat file. Could anyone tell me how to fix this?

The file before save looks like this:

<?xml version="1.0"?> <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <PlayerName> <string>neil</string> <string>shawn</string> <string>mark</string> <string>cindy</string> <string>sam</string> </PlayerName> <Score> <int>200</int> <int>180</int> <int>150</int> <int>100</int> <int>50</int> </Score> <Count>5</Count> </HighScoreData>

The file after save looks like this:

<?xml version="1.0"?> <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <PlayerName> <string>Nick</string> <string>Nick</string> <string>neil</string> <string>shawn</string> <string>mark</string> </PlayerName> <Score> <int>210</int> <int>210</int> <int>200</int> <int>180</int> <int>150</int> </Score> <Count>5</Count> </HighScoreData>>

The part of my code that does all of the save/load to XML is:

DECLARATIONS PART

[Serializable] public struct HighScoreData { public string[] PlayerName; public int[] Score; public int Count; public HighScoreData(int count) { PlayerName = new string[count]; Score = new int[count]; Count = count; } }

IAsyncResult result = null; bool inputName; HighScoreData data; int Score = 0; public string NAME; public string HighScoresFilename = "highscores.dat";

Game1 constructor

public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; Width = graphics.PreferredBackBufferWidth = 960; Height = graphics.PreferredBackBufferHeight = 640; GamerServicesComponent GSC = new GamerServicesComponent(this); Components.Add(GSC); }

Initialize function (end of it)

protected override void Initialize() { //other game code base.Initialize(); string fullpath = Path.Combine(HighScoresFilename); if (!File.Exists(fullpath)) { //If the file doesn't exist, make a fake one...
// Create the data to save data = new HighScoreData(5); data.PlayerName[0] = "neil"; data.Score[0] = 200; data.PlayerName[1] = "shawn"; data.Score[1] = 180; data.PlayerName[2] = "mark"; data.Score[2] = 150; data.PlayerName[3] = "cindy"; data.Score[3] = 100; data.PlayerName[4] = "sam"; data.Score[4] = 50; SaveHighScores(data, HighScoresFilename); } }

All methods for loading, saving and output:

public static void SaveHighScores(HighScoreData data, string filename)
{
    // Get the path of the save game
    string fullpath = Path.Combine("highscores.dat");

    // Open the file, creating it if necessary.
    // NOTE: FileMode.OpenOrCreate (used originally) does not truncate an existing file,
    // so whenever the new XML is shorter than the old contents, leftover characters such
    // as the stray '>' remain at the end and break the next Deserialize.
    // FileMode.Create truncates the file first, which is the likely fix here.
    FileStream stream = File.Open(fullpath, FileMode.Create);
    try
    {
        // Convert the object to XML data and put it in the stream
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        serializer.Serialize(stream, data);
    }
    finally
    {
        // Close the file
        stream.Close();
    }
}

/* Load highscores */
public static HighScoreData LoadHighScores(string filename)
{
    HighScoreData data;
    // Get the path of the save game
    string fullpath = Path.Combine("highscores.dat");
    // Open the file
    FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read);
    try
    {
        // Read the data from the file
        XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
        data = (HighScoreData)serializer.Deserialize(stream); // this is the line where the program gives an error
    }
    finally
    {
        // Close the file
        stream.Close();
    }
    return (data);
}

/* Save player highscore when game ends */
private void SaveHighScore()
{
    // Create the data to be saved
    HighScoreData data = LoadHighScores(HighScoresFilename);
    int scoreIndex = -1;
    for (int i = 0; i < data.Count; i++)
    {
        if (Score > data.Score[i])
        {
            scoreIndex = i;
            break;
        }
    }
    if (scoreIndex > -1)
    {
        // New high score found ... do swaps
        for (int i = data.Count - 1; i > scoreIndex; i--)
        {
            data.PlayerName[i] = data.PlayerName[i - 1];
            data.Score[i] = data.Score[i - 1];
        }
        data.PlayerName[scoreIndex] = NAME; // Retrieve user name here
        data.Score[scoreIndex] = Score;     // Retrieve score here
        SaveHighScores(data, HighScoresFilename);
    }
}

/* Iterate through data if highscore is called and make the string to be saved */
public string makeHighScoreString()
{
    // Create the data to save
    HighScoreData data2 = LoadHighScores(HighScoresFilename);
    // Create scoreBoardString
    string scoreBoardString = "Highscores:\n\n";
    for (int i = 0; i < 5; i++)
    {
        scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n";
    }
    return scoreBoardString;
}

When I make this work I will start this code when I call game over (right now I start it when I press some buttons, so I can test it faster):

public void InputYourName()
{
    if (result == null && !Guide.IsVisible)
    {
        string title = "Name";
        string description = "Write your name in order to save your Score";
        string defaultText = "Nick";
        PlayerIndex playerIndex = new PlayerIndex();
        result = Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null);
        // NAME = result.ToString();
    }
    if (result != null && result.IsCompleted)
    {
        NAME = Guide.EndShowKeyboardInput(result);
        result = null;
        inputName = false;
        SaveHighScore();
    }
}

This is where I call the output to the screen (I'll call this in the highscores menu section when I am done with debugging):

spriteBatch.DrawString(Font1, "" + makeHighScoreString(), new Vector2(500, 200), Color.White); }

    Read the article

  • Developing with Fluid UI – The Fluid Home Page

    - by Dave Bain
The first place to start with Fluid UI is the Fluid Home Page. Sometimes it's referred to as the landing page, but it's formally called the Fluid Home Page. It's delivered with PeopleTools 8.54, and the nice thing about it is, it's a component. That's one thing you'll discover with Fluid UI: Fluid UI is built into PeopleTools with Fluid UI. The Home Page is a component, the tiles or grouplets are group boxes, and the search and prompt pages are just pages. It makes it easy to find things, customize and brand the applications (and of course to see what's going on) when you can open it in AppDesigner.

To see what makes a component fluid, let's start with the Fluid Home Page. It's a component called PT_LANDINGPAGE. You can open it in AppDesigner and see what's unique and different about Fluid UI. If you open the Component Properties dialog, you'll see a new tab called Fluid.

On the Component Properties Fluid tab you'll see the most important checkbox of all, Fluid Mode. That is the one flag that tells PeopleSoft whether the component is Fluid (responsive, dynamic layout) or Classic (pixel perfect). Now that you know it's a single flag, you know that a component can't be both Fluid UI and Classic at the same time; it's one or the other.

There are some other interesting fields on this page. The Small Form Factor Optimized field tells us whether or not to display this on a small device (think smartphone). Header Toolbar Actions offer standard options that are set at the component level, so you have complete control of the component's header bar.

You'll notice that PT_LANDINGPAGE has some PostBuild PeopleCode. That's to build the grouplets that are used to launch Fluid UI pages (more about those later). Probably not a good idea to mess with that code!

The next thing to look at is the page definition for the PT_LANDINGPAGE component. When you open the page PT_LANDINGPAGE, it will look different than anything you've ever seen. You're probably thinking "What's up with all the group boxes?" That is where Fluid UI is so different. In classic PeopleSoft, you put a button, field, group, or any other control on a page, and that's where it shows up, no questions asked. With Fluid UI, everything is positioned relative to something else. That's why there are so many containers (you know them as group boxes): they are UI objects that are used for dynamic positioning.

The Fluid Home Page has some special behavior and special settings. The first is in the Web Profile Configuration settings (Main Menu->PeopleTools->Web Profile->Web Profile Configuration from the main menu). There are two checkboxes that control the behavior of Fluid UI: Disable Fluid Mode and Disable Fluid On Desktop. Disable Fluid Mode prevents any Fluid UI component from being run from this installation. This is a web profile setting for users that want to run later versions of PeopleTools but only want to run Classic PeopleSoft pages. The second setting, Disable Fluid On Desktop, allows the Fluid UI to be run on mobile devices such as smartphones and tablets, but prevents Fluid UI from running on a desktop computer.

Fluid UI settings are also made in My Personalizations (Main Menu->My Personalizations from the Main Menu), in the General Options section. In that section, each user has the choice to determine the home page for their desktop and for tablets.
Now that you know the Fluid UI landing page is just a component, and you know the profile and personalization settings, you should be able to launch one. It's pretty easy to add a menu using Structure and Content; just make sure the proper security is set up. You'll have to run a Fluid UI supported browser in order to see it. The latest versions of Chrome, Firefox and IE will do. Check the certification page on MOS for all the details.

When you open the first Fluid Landing Page, there's not much there. Not to worry, we'll get some content on it soon. Take a moment to navigate around and look at some of the header actions that were set up from the component properties. The Home button takes you back to the classic system. You won't see any notifications, and the personalization doesn't have any content to add yet. The NavBar icon on the top right has a lot of content, including a Navigator and Classic Home. Spend some time looking through what's available.

Stay tuned for more. Next up is adding some content.

    Read the article

  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks began in Quiz.rb, and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to having the @quiz instance variable (via self.some_method) do all the heavy lifting to retrieve every data value for the game's business logic, resulting in large extended join queries that "dance" all around the database schema. The Quiz.rb Model that Smells: class Quiz < ActiveRecord::Base belongs_to :user has_many :attempts, dependent: :destroy before_save :check_answer before_save :update_user_mission_and_stage accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true #Checks every answer within each quiz, adding +1 for each correct answer #within a stage quiz, and -1 for each incorrect answer def check_answer stage_score = 0 self.attempts.each do |attempt| if attempt.answer.correct? == true stage_score += 1 elsif attempt.answer.correct? == false stage_score -= 1 end end stage_score end def winner return true end def update_user_mission_and_stage ####### #Step 1: Checks if UserMission exists, finds or creates one. #if no UserMission for the current mission exists, creates a new UserMission if self.user_has_mission? == false @user_mission = UserMission.new(user_id: self.user.id, mission_id: self.current_stage.mission_id, available: true) @user_mission.save else @user_mission = self.find_user_mission end ####### #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry if self.user_has_stage? @user_mission.save return true else ####### ##Step 3: if step 2 returns false: ##Initiates UserStage creation instructions #checks for winner (winner actions need to be defined) if they complete last stage of last mission for a given orientation if self.passed? && self.is_last_stage? && self.is_last_mission? create_user_stage_and_update_user_mission self.winner #NOTE: The rest are the same, but specify conditions that are available to add badges or other actions upon those conditions occurring: ##if user completes first stage of a mission elsif self.passed? && self.is_first_stage? && self.is_first_mission? create_user_stage_and_update_user_mission #creates user badge for finishing first stage of first mission self.user.add_badge(5) self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage") #If user completes last stage of a given mission, creates a new UserMission elsif self.passed? && self.is_last_stage? && self.is_first_mission?
create_user_stage_and_update_user_mission #creates user badge for finishing first mission self.user.add_badge(6) self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission") elsif self.passed? create_user_stage_and_update_user_mission else self.passed? == false return true end end end #Creates a new UserStage record in the database for a successful Quiz question passing def create_user_stage_and_update_user_mission @nu_stage = @user_mission.user_stages.new(user_id: self.user.id, stage_id: self.current_stage.id) @nu_stage.save @user_mission.save self.user.add_points(50) end #Boolean that defines passing a stage as answering every question in that stage correct def passed? self.check_answer >= self.number_of_questions end #Returns the number of questions asked for that stage's quiz def number_of_questions self.attempts.first.answer.question.stage.questions.count end #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz def current_stage self.attempts.first.answer.question.stage end #Gives back the position of the stage relative to its mission. def stage_position self.attempts.first.answer.question.stage.position end #will find the user_mission for the current user and stage if it exists def find_user_mission self.user.user_missions.find_by_mission_id(self.current_stage.mission_id) end #Returns true if quiz was for the last stage within that mission #helpful for triggering actions related to a user completing a mission def is_last_stage? self.stage_position == self.current_stage.mission.stages.last.position end #Returns true if quiz was for the first stage within that mission #helpful for triggering actions related to a user completing a mission def is_first_stage? self.stage_position == self.current_stage.mission.stages_ordered.first.position end #Returns true if current user has a UserMission for the current stage def user_has_mission? self.user.missions.ids.include?(self.current_stage.mission.id) end #Returns true if current user has a UserStage for the current stage def user_has_stage? self.user.stages.include?(self.current_stage) end #Returns true if current user is on the last mission based on position within a given orientation def is_first_mission? self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position end #Returns true if current user is on the first stage & mission of a given orientation def is_last_mission? self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position end end My Question Currently my Rails server takes roughly 500ms to 1 sec to process single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad Database ERD design. What does a better solution look like? And specifically: Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution? How should update_user_mission_and_stage be refactored to follow best practices? Relevant Code for Reference: quizzes_controller.rb w/ Controller Route Initiating Callback: class QuizzesController < ApplicationController before_action :find_stage_and_mission before_action :find_orientation before_action :find_question def show end def create @user = current_user @quiz = current_user.quizzes.new(quiz_params) if @quiz.save if @quiz.passed? if @mission.next_mission.nil? && @stage.next_stage.nil? 
redirect_to root_path, notice: "Congratulations, you have finished the last mission!" elsif @stage.next_stage.nil? redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission" else redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!" end else redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again." end else redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the admministrator." end @questions = @stage.questions.all end private def find_stage_and_mission @stage = Stage.find(params[:stage_id]) @mission = @stage.mission end def find_question @question = @stage.questions.find_by_id params[:id] end def quiz_params params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]}) end def find_orientation @orientation = @mission.orientation @missions = @orientation.missions.by_position end end Overview of Relevant ERD Database Relationships: Mission - Stage - Question - Answer - Attempt <- Quiz <- User Mission - UserMission <- User Stage - UserStage <- User Other Models: Mission.rb class Mission < ActiveRecord::Base belongs_to :orientation has_many :stages has_many :user_missions, dependent: :destroy has_many :users, through: :user_missions #SCOPES scope :by_position, -> {order(position: :asc)} def stages_ordered stages.order(:position) end def next_mission self.orientation.missions.find_by_position(self.position.next) end def first_stage next_mission.stages_ordered.first end end Stage.rb: class Stage < ActiveRecord::Base belongs_to :mission has_many :questions, dependent: :destroy has_many :user_stages, dependent: :destroy has_many :users, through: :user_stages accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true def next_stage self.mission.stages.find_by_position(self.position.next) end end Question.rb class Question < ActiveRecord::Base belongs_to :stage has_many :answers, dependent: :destroy accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true end Answer.rb: class Answer < ActiveRecord::Base belongs_to :question has_many :attempts, dependent: :destroy end Attempt.rb: class Attempt < ActiveRecord::Base belongs_to :answer belongs_to :quiz end User.rb: class User < ActiveRecord::Base belongs_to :school has_many :activity_logs has_many :user_missions, dependent: :destroy has_many :missions, through: :user_missions has_many :user_stages, dependent: :destroy has_many :stages, through: :user_stages has_many :orientations, through: :school has_many :quizzes, dependent: :destroy has_many :attempts, through: :quizzes def latest_stage_position self.user_missions.last.user_stages.last.stage.position end end UserMission.rb class UserMission < ActiveRecord::Base belongs_to :user belongs_to :mission has_many :user_stages, dependent: :destroy end UserStage.rb class UserStage < ActiveRecord::Base belongs_to :user belongs_to :stage belongs_to :user_mission end

    Read the article

  • SQL Server script commands to check if object exists and drop it

    - by deadlydog
Over the past couple of years I’ve been keeping track of common SQL Server script commands that I use so I don’t have to constantly Google them.  Most of them are how to check if a SQL object exists before dropping it.  I thought others might find it useful to have them all in one place, so here you go:

--===============================
-- Create a new table and add keys and constraints
--===============================
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo')
BEGIN
    CREATE TABLE [dbo].[TableName]
    (
        [ColumnName1] INT NOT NULL, -- To have a field auto-increment add IDENTITY(1,1)
        [ColumnName2] INT NULL,
        [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('')
    )

    -- Add the table's primary key
    ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED
    (
        [ColumnName1],
        [ColumnName2]
    )

    -- Add a foreign key constraint
    ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY
    (
        [ColumnName1],
        [ColumnName2]
    )
    REFERENCES [dbo].[Table2Name]
    (
        [OtherColumnName1],
        [OtherColumnName2]
    )

    -- Add indexes on columns that are often used for retrieval
    CREATE INDEX IN_ColumnNames ON [dbo].[TableName]
    (
        [ColumnName2],
        [ColumnName3]
    )

    -- Add a check constraint
    ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName] >= 0.0000))
END

--===============================
-- Add a new column to an existing table
--===============================
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
BEGIN
    ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0)

    -- Add a description extended property to the column to specify what its purpose is.
    EXEC sys.sp_addextendedproperty @name=N'MS_Description',
        @value = N'Add column comments here, describing what this column is for.',
        @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE',
        @level1name = N'TableName', @level2type=N'COLUMN', @level2name = N'ColumnName'
END

--===============================
-- Drop a table
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo')
BEGIN
    DROP TABLE [dbo].[TableName]
END

--===============================
-- Drop a view
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA='dbo')
BEGIN
    DROP VIEW [dbo].[ViewName]
END

--===============================
-- Drop a column
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName')
BEGIN
    -- If the column has an extended property, drop it first.
    IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'Table',
        N'TableName', N'COLUMN', N'ColumnName'))
    BEGIN
        EXEC sys.sp_dropextendedproperty @name=N'MS_Description',
            @level0type=N'SCHEMA', @level0name=N'dbo', @level1type=N'TABLE',
            @level1name = N'TableName', @level2type=N'COLUMN', @level2name = N'ColumnName'
    END

    ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName]
END

--===============================
-- Drop Primary key constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='PRIMARY KEY' AND TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name]
END

--===============================
-- Drop Foreign key constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='FOREIGN KEY' AND TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name]
END

--===============================
-- Drop Unique key constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [UNI_Name]
END

--===============================
-- Drop Check constraint
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='CHECK' AND TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name')
BEGIN
    ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name]
END

--===============================
-- Drop a column's Default value constraint
--===============================
DECLARE @ConstraintName VARCHAR(100)
SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj=c.id
    WHERE s.xtype='d' AND c.cdefault=s.id
    AND parent_obj = OBJECT_ID('TableName') AND c.name ='ColumnName')

IF @ConstraintName IS NOT NULL
BEGIN
    EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
END

--===============================
-- Example of how to drop dynamically named Unique constraint
--===============================
DECLARE @ConstraintName VARCHAR(100)
SET @ConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 'FirstPartOfConstraintName%')

IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo'
    AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = @ConstraintName)
BEGIN
    EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName)
END

--===============================
-- Check for and drop a temp table
--===============================
IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName

--===============================
-- Drop a stored procedure
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='PROCEDURE' AND ROUTINE_SCHEMA='dbo' AND
    ROUTINE_NAME = 'StoredProcedureName')
BEGIN
    DROP PROCEDURE [dbo].[StoredProcedureName]
END

--===============================
-- Drop a UDF
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='FUNCTION' AND ROUTINE_SCHEMA='dbo' AND
    ROUTINE_NAME = 'UDFName')
BEGIN
    DROP FUNCTION [dbo].[UDFName]
END

--===============================
-- Drop an Index
--===============================
IF EXISTS (SELECT * FROM SYS.INDEXES WHERE name = 'IndexName')
BEGIN
    DROP INDEX TableName.IndexName
END

--===============================
-- Drop a Schema
--===============================
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName')
BEGIN
    EXEC('DROP SCHEMA SchemaName')
END

    Read the article

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework, and Carlos Chang, Senior Principal Product Director

The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications—not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three different application approaches, each with its own set of pros and cons.

Native Applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK). For example, apps for Apple’s iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on. Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native-device services, but require additional resources to develop and maintain each platform, which can be expensive and time consuming.

Mobile Web Applications: Unlike native apps, mobile web apps are not installed on the device; rather, they are accessed via a Web browser. These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request. There are no programming-language constraints for writing server-side apps—they can be written in Java, C, PHP, etc.; it doesn’t matter. Instead, the server detects what type of mobile browser is pinging the server and adjusts accordingly. For example, it can deliver fully JavaScript- and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature phone browsers. Mobile web apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphones and feature phones.

Hybrid Applications: As the name implies, hybrid apps combine technologies from native and mobile Web apps to gain the benefits of each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5. This UI runs locally within the native container, which usually leverages the device’s browser engine. The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices. Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage. Native apps may offer greater flexibility in integrating with device native services. However, since hybrid applications already provide the device integrations that typical enterprise applications need, this is typically less of an issue. The new Oracle ADF Mobile release is an HTML5 and Java hybrid framework that targets mobile app development to iOS and Android from one code base.

So, Which is the Best Approach? The short answer is – the best choice depends on the type of application you are developing.
For instance, animation-intensive apps such as games would favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path.

Application Complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display? You can keep it simple, and a mobile Web app may suffice. However, for mobile/field-worker applications that support mission-critical functionality, hybrid or native applications are typically needed.

Richness of User Interactivity: What type of user experience is required for the application? A mobile browser-based app optimized for a mobile UI may suffice for quick-lookup or productivity applications. However, a hybrid/native application would typically be required to deliver the highly interactive user experiences needed for field-worker applications. For example, interactive BI charts/graphs, maps, voice/email integration, etc. In the most extreme cases, like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience.

Performance: What type of performance is required by the application functionality? For instance, for real-time lookup of data over the network, mobile app performance depends on network latency and server infrastructure capabilities. If consistent performance is required, data would typically need to be cached, which is supported in hybrid or native applications only.

Connectivity and Availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, Web mobile apps require Web connectivity.

Multi-platform Requirements: The terms “consumerization of IT” and BYOD (bring your own device) effectively mean that the line between consumer and enterprise devices has become blurred. Employees are bringing their personal mobile devices to work and often expect that they work on the corporate network and access back-office applications. Even if companies restrict access to the big dogs (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform. And let’s not forget the maintenance costs involved in upgrading to new versions of each platform. Where multi-platform support is needed, Web mobile or hybrid apps probably have the advantage. Going native and trying to support multiple operating systems may be cost-prohibitive with existing resources and developer skills.

Device-Services Access: If your app needs to access local device services, such as the camera, contacts app, accelerometer, etc., then your choices are limited to native or hybrid applications.

Fragmentation: Apple controls Apple iOS and the only concern is what version of iOS is running on any given device. Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android. (Is it an Amazon Kindle Fire? A Samsung Galaxy? A Barnes & Noble Nook?)
This is a nightmare scenario for native apps—on the other hand, a mobile Web or hybrid app, when properly designed, can shield you from these complexities because it is based on common frameworks.

Resources: How many developers can you dedicate to building and supporting mobile application development? What are their existing skill sets? If you’re considering native application development due to the complexity of the application under development, factor in the costs of becoming proficient in each platform’s OS and programming language. Add another platform, and that’s another language, another SDK. On the other side of the equation, Web mobile or hybrid applications are simpler to build and readily support more platforms, but there may be performance trade-offs.

Conclusion: This only scratches the surface. However, I hope it has offered some food for thought in choosing your mobile development strategy. Do your due diligence, search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and more effectively. To learn more about what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (Web Mobile) solutions from Oracle.

Additional Information
Blog: ADF Blog
Product Information on OTN: ADF Mobile
Product Information on Oracle.com: Oracle Fusion Middleware
Follow us on Twitter and Facebook
Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • CodePlex Daily Summary for Sunday, August 03, 2014

    CodePlex Daily Summary for Sunday, August 03, 2014

Popular Releases

BoxStarter: Boxstarter 2.4.76: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.

GMare: GMare Beta 1.2: Features Added: - Instance painting by holding the alt key down while pressing the left mouse button - Functionality to the binary exporter so that backgrounds from image files can be used - On the binary exporter background information can be edited manually now - Update to the GMare binary read GML script - Game Maker Studio export - Import from GMare project. Multiple options to import desired properties of a .gmpx - 10 undo/redo levels instead of 5 is now the default - New preferences dia...

Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...

SQL Server Dialog: SQL Server Dialog: Input server, user and password Show folder and file in treeview Customize icon Filter file extension Skip system-generated folders and files

Aitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zip

VidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.

AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developers can handle update logic themselves using the event suggested by ricorx7. Added Italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...

SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.

Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. - QuantizeSettings uses Riemersma by default.

Multiple Threads TCP Server: Project: this project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later

Aricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes New: Expression Builder to design Flee Expressions New: Cryptographic helpers and configuration classes Improvement: Many fixes and improvements with property editor Improvement: Token Replace Property explorer now has a restricted mode for additional security Improvement: Better variables, types and object manipulation Fixed: smart file and flee bugs Fixed: Removed exception while trying to read unsupported files Improvement: several performance twe...

Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module for downloading series. Uses the RSS feed: http://www.divxtotal.com/rss.php

DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .NET 4.0+. It has a clear and easy programming interface for ORM and SQL directly, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...

Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New? Here are some important things to know about version 6: Open Source: Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base: Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...

Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)

VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' links

Asp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4, Entity Framework and JQGrid demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, comprising the following fields (Task Name, Task Description, Severity, Target Date, Task Status). The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.

Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0: New: Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed: Removal of the CAPS infrastructure for specifying site-sp...

SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it has been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the log4net missing issue for a QuickStart project; fixed a warning in a QuickStart project

Ynote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved Stability

New Projects

Creek: Creek is a Collection of many C# Frameworks and my own
Speaking Speedometer (android): Simple speaking speedometer
T125Protocol { Alpha version }: implement the T125 Protocol to communicate with a mainframe
Unix Time: This library provides a System.UnixTime as a new Type providing conversion between Unix Time and .NET DateTime.

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does far more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
# connection and query stuff
$ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
$Query = "EXEC sp_who2"
$Connection = new-object system.Data.SQLClient.SQLConnection
$Table = new-object "System.Data.DataTable"
$Connection.connectionstring = $ConnectionStr
try{
    $Connection.open()
    $Command = $Connection.CreateCommand()
    $Command.commandtext = $Query
    $result = $Command.ExecuteReader()
    $Table.Load($result)
}
catch{
    # Show error
    $error[0] | format-list -Force
}
$Title = "Data access processes (" + $Table.Rows.Count + ")"
$Table | Out-GridView -Title $Title
$Connection.close()

So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error-free, display these results in the PowerShell grid viewer. The query simply gets the results of ‘EXEC sp_who2’ for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specifications.

$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
$Processes = $SMOServer.EnumProcesses()
$Title = "SMO processes (" + $Processes.Rows.Count + ")"
$Processes | Out-GridView -Title $Title

Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to test which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows this. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second as we see above to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ:

sp_who2 column | EnumProcesses column | Description
- | Urn | What looks like an XML or JSON representation of the server name and the process ID
SPID | Spid | The process ID
Status | Status | The status of the process
Login | Login | The login name of the user executing the command
HostName | Host | The name of the computer where the process originated
BlkBy | BlockingSpid | The SPID of a process that is blocking this one
DBName | Database | The database that this process is connected to
Command | Command | The type of command that is executing
CPUTime | Cpu | The CPU activity related to this process
DiskIO | - | The disk IO activity related to this process
LastBatch | - | The time the last batch was executed from this process
ProgramName | Program | The application that is facilitating the process connection to the SQL Server
SPID1 | - | In my experience this is always the same value as SPID
REQUESTID | - | In my experience this is always 0
- | Name | In my experience this is always the same value as SPID, and so could be seen as analogous to SPID1 from sp_who2
- | MemUsage | An indication of the memory used by this process, but I don’t know what it is measured in (bytes, Kb, Mb…)
- | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
- | ExecutionContextID | In my experience this is always 0, so it could be analogous to REQUESTID from sp_who2

Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis too. So which is better?
On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.
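For completeness, here is a minimal sketch of the Measure-Command wrapping described above. 'SERVERNAME' is a placeholder and, as with the scripts earlier in the post, the SMO assemblies are assumed to be loaded:

# Time the sp_who2 query-execution method
$Server = 'SERVERNAME'
$SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server

$queryTime = Measure-Command {
    $Connection = New-Object System.Data.SqlClient.SqlConnection
    $Connection.ConnectionString = "Server=$Server;Database=Master;Integrated Security=True"
    $Table = New-Object System.Data.DataTable
    $Connection.Open()
    $Command = $Connection.CreateCommand()
    $Command.CommandText = 'EXEC sp_who2'
    $Table.Load($Command.ExecuteReader())
    $Connection.Close()
}

# Time the SMO EnumProcesses method
$smoTime = Measure-Command {
    $Processes = $SMOServer.EnumProcesses()
}

# Cross-refer the TotalMilliseconds values, as described above
'sp_who2:       {0} ms' -f $queryTime.TotalMilliseconds
'EnumProcesses: {0} ms' -f $smoTime.TotalMilliseconds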

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary or is there something more at work?

Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable). Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360?

The Answer

SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists but a history of screen design in the process:

Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing. First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some other codecs than h.264, but these are slowly being phased out, with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this.

Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This has been researched quite a long time ago and is the big reason why we went from 720×576 (415kpix) to 1280×720 (922kpix), and then again from 1280×720 to 1920×1080 (2MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry-standard line drivers became drivers with 360 line outputs (1 per subpixel). If you were to tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to this: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as a media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum number of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not the number of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here.
There’s more of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Unable to connect to Wireless after installing Ubuntu 12.10

    - by Moulik
    I am using an Asus U56E laptop, and after installing Ubuntu 12.10 alongside Windows 8 I am unable to connect to the wireless. I have been trying to solve this problem for two weeks and couldn't solve it. Please help. Any answer would be appreciated. Here are some command-line results. lspci -v | grep -iA 7 network ubuntu@ubuntu:~$ lspci -v | grep -iA 7 network 02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67) Subsystem: Intel Corporation Centrino Wireless-N + WiMAX 6150 BGN Flags: bus master, fast devsel, latency 0, IRQ 52 Memory at de800000 (64-bit, non-prefetchable) [size=8K] Capabilities: <access denied> Kernel driver in use: iwlwifi Kernel modules: iwlwifi lsmod | grep iwlwifi ubuntu@ubuntu:~$ lsmod | grep iwlwifi iwlwifi 386826 0 mac80211 539908 1 iwlwifi cfg80211 206566 2 iwlwifi,mac80211 ubuntu@ubuntu:~$ dmesg | grep iwlwifi [ 57.846261] iwlwifi: Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 57.846264] iwlwifi: Copyright(c) 2003-2012 Intel Corporation [ 57.846336] iwlwifi 0000:02:00.0: >pci_resource_len = 0x00002000 [ 57.846338] iwlwifi 0000:02:00.0: >pci_resource_base = ffffc90000c7c000 [ 57.846341] iwlwifi 0000:02:00.0: >HW Revision ID = 0x67 [ 57.846438] iwlwifi 0000:02:00.0: >irq 52 for MSI/MSI-X [ 59.558335] iwlwifi 0000:02:00.0: >loaded firmware version 41.28.5.1 build 33926 [ 59.558514] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUG disabled [ 59.558516] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUGFS enabled [ 59.558517] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TRACING enabled [ 59.558519] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [ 59.558520] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_P2P disabled [ 59.558522] iwlwifi 0000:02:00.0: >Detected Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN, REV=0x84 [ 59.558583] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 59.569083] iwlwifi 0000:02:00.0: >device EEPROM VER=0x557, CALIB=0x6 [ 59.569085] iwlwifi 0000:02:00.0: >Device SKU: 0x150 [ 59.569087] iwlwifi 0000:02:00.0: >Valid Tx ant: 0x1, Valid Rx ant: 0x3 [ 59.569100] iwlwifi 0000:02:00.0: >Tunable channels: 13 802.11bg, 0 802.11a channels [ 70.208469] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.208648] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 [ 70.366319] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.366470] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 sudo lshw -c network ubuntu@ubuntu:~$ sudo lshw -c network *-network description: Wireless interface product: Centrino Wireless-N + WiMAX 6150 vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 67 serial: 40:25:c2:84:99:c4 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.5.0-17-generic firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:52 memory:de800000-de801fff *-network description: Ethernet interface product: AR8151 v2.0 Gigabit Ethernet vendor: Atheros Communications Inc.
physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: c0 serial: 54:04:a6:2b:6a:ef capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128) ifconfig ubuntu@ubuntu:~$ ifconfig eth0 Link encap:Ethernet HWaddr 54:04:a6:2b:6a:ef UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:176 errors:0 dropped:0 overruns:0 frame:0 TX packets:176 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14368 (14.3 KB) TX bytes:14368 (14.3 KB) wlan0 Link encap:Ethernet HWaddr 40:25:c2:84:99:c4 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) iwconfig ubuntu@ubuntu:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off iwlist scan ubuntu@ubuntu:~$ iwlist scan eth0 Interface doesn't support scanning. lo Interface doesn't support scanning. wlan0 No scan results nm-tool ubuntu@ubuntu:~$ nm-tool NetworkManager Tool State: disconnected - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: atl1c State: unavailable Default: no HW Address: 54:04:A6:2B:6A:EF Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwlwifi State: disconnected Default: no HW Address: 40:25:C2:84:99:C4 Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points hypeness2: Infra, 00:21:29:DA:08:4F, Freq 2462 MHz, Rate 54 Mb/s, Strength 42 WPA love: Infra, 68:7F:74:17:02:66, Freq 2412 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 DIRECT-MwSCX-3400Pamela: Infra, 02:15:99:A3:3F:AC, Freq 2412 MHz, Rate 54 Mb/s, Strength 22 WPA2 router: Infra, 1C:AF:F7:D6:76:F3, Freq 2417 MHz, Rate 54 Mb/s, Strength 20 WPA2 wing: Infra, E8:40:F2:34:E4:F7, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2 132LINKSYS: Infra, 00:1A:70:80:1F:E9, Freq 2437 MHz, Rate 54 Mb/s, Strength 57 WEP VMITTAL: Infra, E0:46:9A:3C:F0:C4, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 WEP HP-Print-10-LaserJet 1025: Infra, 7C:E9:D3:7E:F8:10, Freq 2437 MHz, Rate 54 Mb/s, Strength 59 ACNBB: Infra, 00:26:75:22:A6:2F, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 SATKAIVAL: Infra, 00:18:E7:CE:69:A6, Freq 2412 MHz, Rate 54 Mb/s, Strength 69 WPA WPA2 hypeness: Infra, B8:E6:25:24:C3:B1, Freq 2437 MHz, Rate 54 Mb/s, Strength 54 WPA WPA2 CSNetwork: Infra, BC:14:01:58:C5:88, Freq 2437 MHz, Rate 54 Mb/s, Strength 25 WPA WPA2 tharma: Infra, BC:14:01:E2:06:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 15 WPA WPA2 Active2.4: Infra, 10:6F:3F:0E:F3:8E, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2 ACNBB: Infra, 
00:26:75:58:4E:7A, Freq 2437 MHz, Rate 54 Mb/s, Strength 85 KO: Infra, BC:14:01:2E:AF:A8, Freq 2452 MHz, Rate 54 Mb/s, Strength 22 WPA WPA2 FEAR: Infra, 00:18:4D:C0:BC:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA Pamela: Infra, BC:14:01:52:F6:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 bvrk2: Infra, 78:CD:8E:7B:3C:79, Freq 2457 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 BELL030: Infra, D8:6C:E9:17:AF:09, Freq 2462 MHz, Rate 54 Mb/s, Strength 22 WPA2 Desai: Infra, 00:1D:7E:52:FB:C5, Freq 2437 MHz, Rate 54 Mb/s, Strength 14 WEP Sritharan: Infra, BC:14:01:E5:59:78, Freq 2462 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 PFN: Infra, 00:13:10:8B:CF:45, Freq 2437 MHz, Rate 54 Mb/s, Strength 19 WEP rfkill list all ubuntu@ubuntu:~$ rfkill list all 0: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: asus-wimax: WiMAX Soft blocked: yes Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no so these are some more results sudo modprobe -r iwlwifi ubuntu@ubuntu:~$ sudo modprobe -r iwlwifi sudo modprobe iwlwifi 11n_disable=1 ubuntu@ubuntu:~$ sudo modprobe iwlwifi 11n_disable=1 echo "blacklist asus_wmi" | sudo tee -a /etcmodprobe.d/blacklist.conf ubuntu@ubuntu:~$ echo "blacklist asus_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf blacklist asus_wmi echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf ubuntu@ubuntu:~$ echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf options iwlwifi 11n_disable=1 sudo modprobe -rfv iwlwifi ubuntu@ubuntu:~$ sudo modprobe -rfv iwlwifi rmmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko sudo modprobe -v iwlwifi ubuntu@ubuntu:~$ sudo modprobe -v iwlwifi insmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko 11n_disable=1

    Read the article

  • Finding the groups of a user in WLS with OPSS

    - by user12587121
    How to find the group memberships for a user from a web application running in WebLogic Server? This is useful, for example, for building up the profile of the user for security purposes. WLS as a container offers an identity store service which applications can access to query and manage identities known to the container. This article, for example, shows how to recover the groups of the current user, but how can we find the same information for an arbitrary user?

It is the Oracle Platform Security Services (OPSS) that looks after the identity store in WLS, and so it is in the OPSS APIs that we can find the way to recover this information. This is explained in the following documents. Starting from the FMW 11.1.1.5 book list, with the Security Overview document we can see how WLS uses OPSS. Proceeding to the more detailed Application Security document, we find this list of useful references for security in FMW. We can follow on into the User/Role API javadoc.

The Application Security document explains how to ensure that the identity store is configured appropriately to allow the OPSS APIs to work. We must verify that the jps-config.xml file where the application is deployed has its identity store configured--look for the following elements in that file:

<serviceProvider type="IDENTITY_STORE" name="idstore.ldap.provider" class="oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider">
    <description>LDAP-based IdentityStore Provider</description>
</serviceProvider>

<serviceInstance name="idstore.ldap" provider="idstore.ldap.provider">
    <property name="idstore.config.provider" value="oracle.security.jps.wls.internal.idstore.WlsLdapIdStoreConfigProvider"/>
    <property name="CONNECTION_POOL_CLASS" value="oracle.security.idm.providers.stdldap.JNDIPool"/>
</serviceInstance>

<serviceInstanceRef ref="idstore.ldap"/>

The document contains a code sample for using the identity store here. Once we have the identity store reference we can recover the user's group memberships using the RoleManager interface:

            RoleManager roleManager = idStore.getRoleManager();
            SearchResponse grantedRoles = null;
            try {
                System.out.println("Retrieving granted WLS roles for user " + userPrincipal.getName());
                grantedRoles = roleManager.getGrantedRoles(userPrincipal, false);
                while (grantedRoles.hasNext()) {
                    Identity id = grantedRoles.next();
                    System.out.println("  disp name=" + id.getDisplayName() +
                            " Name=" + id.getName() +
                            " Principal=" + id.getPrincipal() +
                            " Unique Name=" + id.getUniqueName());
                    // Here, we must use WLSGroupImpl() to build the Principal, otherwise
                    // OES does not recognize it.
                    retSubject.getPrincipals().add(new WLSGroupImpl(id.getPrincipal().getName()));
                }
            } catch (Exception ex) {
                System.out.println("Error getting roles for user " + ex.getMessage());
                ex.printStackTrace();
            }
        } catch (Exception ex) {
            System.out.println("OESGateway: Got exception instantiating idstore reference");
        }

This small JDeveloper project has a simple servlet that executes a request for the user weblogic's roles on executing a GET on the default URL.
The full code to recover a user's groups is in the getSubjectWithRoles() method in the project.
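For reference, here is a minimal sketch of how the idStore reference used above is typically obtained from OPSS. The class names come from the OPSS and User/Role API javadocs referenced earlier, but treat the exact lookup as an assumption to verify against your own jps-config.xml:

// Sketch: obtain the identity store configured in jps-config.xml and
// get a RoleManager from it. Assumes the code runs inside WLS with the
// IDENTITY_STORE service instance shown above.
import oracle.security.idm.IdentityStore;
import oracle.security.idm.RoleManager;
import oracle.security.jps.JpsContext;
import oracle.security.jps.JpsContextFactory;
import oracle.security.jps.service.idstore.IdentityStoreService;

public class IdStoreLookup {
    public static RoleManager getRoleManager() throws Exception {
        // The default JPS context of the deployed application
        JpsContext context = JpsContextFactory.getContextFactory().getContext();
        // The IDENTITY_STORE service instance declared in jps-config.xml
        IdentityStoreService storeService =
                context.getServiceInstance(IdentityStoreService.class);
        // Bridge from the JPS service to the User/Role API identity store
        IdentityStore idStore = storeService.getIdmStore();
        return idStore.getRoleManager();
    }
}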

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as the one about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).

Basic structure and parameters

Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent), and a restrict(direct3d) lambda whose signature is such that it returns void and accepts an index of the same rank as the grid. So it looks something like this (with generous returns for more palatable formatting), assuming we are dealing with a 2-dimensional space:

// some_code_A
parallel_for_each(
    g, // g is of type grid<2>
    [](index<2> idx) restrict(direct3d)
    {
        // kernel code
    }
);
// some_code_B

The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (a.k.a. the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next.

You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was set up by some_code_A as follows:

extent<2> e(2,3);
grid<2> g(e);

...then given that e.size()==6, e[0]==2, and e[1]==3, the six index<2> objects it generates (and hence the values that your lambda would receive) are:

(0,0) (1,0) (0,1) (1,1) (0,2) (1,2)

So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times, and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"

Passing data in and out

It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then typically return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [](index<2> idx) restrict(direct3d), where the empty square brackets mean that no variables were captured. If instead I write it like this: [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error.
This has to do with one of the direct3d restrictions, where only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data). If I write the lambda line like this: [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, which I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn't be able to observe changes to it after the parallel_for_each call (e.g. in the some_code_B region, since it was passed by value) – the exception to this rule is the array_view since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this: [av, &ar](index<2> idx) restrict(direct3d), where av is a variable of type array_view and ar is a variable of type array – the point being you can be very specific about which variables you capture and how.

So it looks like from a large-data perspective you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array is ready to be used in the some_code_B region. We'll talk more about all this in future posts…

(a)synchronous

Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality, it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately on the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.

That's all for now; we'll revisit the parallel_for_each description once we properly introduce array and array_view – coming next. Comments about this post by Daniel Moth welcome at the original blog.
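Tying the pieces of the post together, here is a minimal end-to-end sketch. It uses the names as they appear in this preview post (grid<N> and restrict(direct3d)) and anticipates the array_view type introduced in the posts that follow; treat it as an illustrative sketch rather than canonical code, since in the released version of C++ AMP the grid parameter was folded back into extent and the modifier became restrict(amp):

#include <amp.h>
#include <vector>
using namespace concurrency;

int main()
{
    std::vector<int> v(6, 1);     // 2x3 = 6 elements, all initialized to 1

    extent<2> e(2, 3);            // the shape of the compute domain
    grid<2> g(e);                 // grid wraps the extent, as described above
    array_view<int, 2> av(e, v);  // wraps the CPU-side vector for use on the GPU

    // One GPU thread per index in the grid; av is captured by value, which is
    // allowed because array_view is a wrapper over the data, not a container.
    parallel_for_each(g, [=](index<2> idx) restrict(direct3d)
    {
        av[idx] += 10;            // each thread updates exactly one element
    });

    // Reading through the array_view blocks until the kernel has completed,
    // illustrating the "as-if synchronous" behavior described above.
    return av(0, 0) == 11 ? 0 : 1;
}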

    Read the article

  • Ubuntu 12.04 + Wifi not working

    - by user171154
    I'm having problems connecting over wireless. At the moment, I'm using wicd. It seems to get stuck on "Verifying AP association...". Without wicd I can get the connection up and ping the Net - but if I take eth0 down (ifconfig eth0 down), my wireless goes away too (same result if I unplug the wire instead). wicd is the only way I can bring eth0 back (which is the main reason I'm using it) - ifconfig eth0 and/or ifup eth0 do not re-enable the connection (I just discovered it leaves out the gateway. Adding the gateway back in re-enables the connection including wifi; I didn't want to delete the info about wicd above in case it gives someone an idea.) Doing it manually, despite the errors (which it would be nice to also resolve), allows me to ping the outside world: ifup wlan0 ioctl[SIOCSIWENCODEEXT]: Invalid argument ioctl[SIOCSIWENCODEEXT]: Invalid argument ssh stop/waiting ssh start/running, process 17336 ping -I wlan0 -c 4 8.8.8.8 PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_req=1 ttl=43 time=48.8 ms 64 bytes from 8.8.8.8: icmp_req=2 ttl=43 time=47.9 ms 64 bytes from 8.8.8.8: icmp_req=3 ttl=43 time=48.7 ms 64 bytes from 8.8.8.8: icmp_req=4 ttl=43 time=53.2 ms --- 8.8.8.8 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 47.975/49.711/53.235/2.063 ms # iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"TPLINK" Mode:Managed Frequency:2.427 GHz Access Point: 64:66:xx:xx:xx:22 Bit Rate=108 Mb/s Tx-Power=27 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off Link Quality=70/70 Signal level=-39 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:3 Missed beacon:0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 01 serial: f0:7d:68:c1:b4:13 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-67-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:17 memory:dfbf0000-dfbfffff ip route default via 192.168.0.1 dev eth0 default via 192.168.0.1 dev wlan0 metric 100 169.254.0.0/16 dev wlan0 scope link metric 1000 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.102 192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.12 (For the record, I have no idea what the 169.254.0.0 address is doing there.)
    uname -a
    3.2.0-67-generic-pae #101-Ubuntu SMP Tue Jul 15 18:04:54 UTC 2014 i686 i686 i386 GNU/Linux

    lshw -C network
      *-network
           description: Ethernet interface
           product: NetXtreme BCM5751 Gigabit Ethernet PCI Express
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: eth0
           version: 01
           serial: 00:11:11:59:fc:09
           size: 100Mbit/s
           capacity: 1Gbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=5751-v3.23a ip=192.168.0.102 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
           resources: irq:16 memory:dfcf0000-dfcfffff
      *-network
           description: Wireless interface
           product: AR5418 Wireless Network Adapter [AR5008E 802.11(a)bgn] (PCI-Express)
           vendor: Qualcomm Atheros
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: wlan0
           version: 01
           serial: f0:7d:68:c1:b4:13
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=ath9k driverversion=3.2.0-67-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:17 memory:dfbf0000-dfbfffff

    /etc/network/interfaces
    # interfaces(5) file used by ifup(8) and ifdown(8)
    auto lo
    iface lo inet loopback
    source /etc/network/interfaces.eth0
    source /etc/network/interfaces.wlan0

    /etc/network/interfaces.eth0
    #Main Interface
    auto eth0
    iface eth0 inet static
        address 192.168.0.102
        netmask 255.255.255.0
        gateway 192.168.0.1

    /etc/network/interfaces.wlan0
    auto wlan0
    iface wlan0 inet static
        address 192.168.0.12
        gateway 192.168.0.1
        dns-nameservers 192.168.0.1 8.8.8.8
        netmask 255.255.255.0
        wpa-driver wext
        wpa-ssid TPLINK
        wpa-ap-scan 1
        wpa-proto RSN
        wpa-pairwise CCMP
        wpa-group CCMP
        wpa-key-mgmt WPA-PSK
        wpa-psk dca1badb5fd4e9axxx4xxdaaxxfa91xx610bxx6a7d57ef67af9809dxx6af42e39

    /etc/wpa_supplicant.conf
    ctrl_interface=/var/run/wpa_supplicant
    network={
        ssid="TPLINK"
        psk="my password"
        key_mgmt=WPA-PSK
        proto=RSN
        pairwise=CCMP
        group=CCMP
    }

    ifdown eth0
    ifdown: interface eth0 not configured

    ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:11:xx:xx:xx:09
              inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:213690 errors:0 dropped:0 overruns:0 frame:0
              TX packets:155266 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:220057808 (220.0 MB)  TX bytes:21137696 (21.1 MB)
              Interrupt:16

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:196412 errors:0 dropped:0 overruns:0 frame:0
              TX packets:196412 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:153270697 (153.2 MB)  TX bytes:153270697 (153.2 MB)

    wlan0     Link encap:Ethernet  HWaddr f0:7d:xx:xx:xx:13
              inet addr:192.168.0.12  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:11335 errors:0 dropped:0 overruns:0 frame:0
              TX packets:7287 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2563290 (2.5 MB)  TX bytes:855746 (855.7 KB)

    ifconfig eth0 down
    ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:xx:xx:xx:xx:09
              inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::211:11ff:fe59:fc09/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:2 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:192 (192.0 B)  TX bytes:94 (94.0 B)
              Interrupt:16

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:196418 errors:0 dropped:0 overruns:0 frame:0
              TX packets:196418 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:153270871 (153.2 MB)  TX bytes:153270871 (153.2 MB)

    wlan0     Link encap:Ethernet  HWaddr f0:7d:xx:xx:xx:13
              inet addr:192.168.0.12  Bcast:192.168.0.255  Mask:255.255.255.0
              inet6 addr: fe80::f27d:68ff:fec1:b413/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:11359 errors:0 dropped:0 overruns:0 frame:0
              TX packets:7293 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2565482 (2.5 MB)  TX bytes:856363 (856.3 KB)

    ip route
    default via 192.168.0.1 dev wlan0  metric 100
    169.254.0.0/16 dev wlan0  scope link  metric 1000
    192.168.0.0/24 dev wlan0  proto kernel  scope link  src 192.168.0.12
    192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.102

    ping -I wlan0 -c 4 8.8.8.8
    PING 8.8.8.8 (8.8.8.8) from 192.168.0.12 wlan0: 56(84) bytes of data.
    --- 8.8.8.8 ping statistics ---
    4 packets transmitted, 0 received, 100% packet loss, time 3024ms

    ping -I eth0 -c 3 router
    PING router (192.168.0.1) from 192.168.0.102 eth0: 56(84) bytes of data.
    --- router ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2015ms

    ping -I wlan0 -c 3 router
    PING router (192.168.0.1) from 192.168.0.12 wlan0: 56(84) bytes of data.
    --- router ping statistics ---
    3 packets transmitted, 0 received, 100% packet loss, time 2014ms

    Let me know if you need more info. Thank you in advance.
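    The configuration above declares gateway 192.168.0.1 in both the eth0 and the wlan0 stanzas, which is what produces the two competing default routes in the ip route listing. A hedged sketch of one way around that, using ifupdown's standard post-up/pre-down hooks to install the wlan0 route with an explicit (worse) metric instead of a gateway line; the metric value 100 is arbitrary, and the fragment is illustrative rather than the poster's actual file:

    # /etc/network/interfaces.wlan0 (sketch; the wpa-* options are unchanged and omitted here)
    auto wlan0
    iface wlan0 inet static
        address 192.168.0.12
        netmask 255.255.255.0
        dns-nameservers 192.168.0.1 8.8.8.8
        # no "gateway" line; install the default route with an explicit metric instead,
        # so eth0's route wins while eth0 is up and wlan0's takes over when it is not
        post-up ip route add default via 192.168.0.1 dev wlan0 metric 100
        pre-down ip route del default via 192.168.0.1 dev wlan0 metric 100 || true

    Separately, the ioctl[SIOCSIWENCODEEXT]: Invalid argument messages are a common side effect of pairing the legacy wext backend with a mac80211 driver such as the ath9k shown in lshw; changing wpa-driver wext to wpa-driver nl80211 is worth trying and often makes them go away.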


  • Wireless networks are not detected at startup in Ubuntu 12.04

    - by Kanhaiya Mishra
    I recently (three or four days ago) installed Ubuntu 12.04 via the Windows installer, i.e. wubi.exe. After the installation completed, wireless and Ethernet both worked well, but after a restart wireless networks didn't show up, even though both networking and wireless were enabled in the network manager. Sometimes the available networks did appear after boot, but very rarely. So I went through various posts regarding wireless issues in Ubuntu 12.04 and tried many things, but ended up with nothing satisfactory. I have a Broadcom 4313 wireless LAN controller and the brcmsmac driver. Relying on some suggestions, I tried to install the bcm-wl driver but couldn't, due to an error in the jockey.log file. A fresh installation of the same driver still didn't resolve the startup issues with wireless. Then I reinstalled Ubuntu inside Windows using the wubi installer. The same problem occurred after boot, but this time I successfully installed the wl driver before touching Ubuntu's file-system files. Again the same issue, but this time I noticed something new: if I plug in the Ethernet/LAN cable before startup, wireless networks are available and the wired network works too; if I don't plug in the cable before startup and plug it in afterwards, it detects neither the Ethernet network nor wireless. So I hadn't noticed before that the wired LAN, along with wifi, also fails after startup. If I suspend the session and log in again, though, it works; I have tried this every time, and the WLAN then works perfectly. But I am still unable to resolve the startup problem: each time I boot, I first have to suspend the machine once, and only then are networks available. It irritates me every time I boot my laptop. Please help me out of this problem; any ideas regarding this issue would be highly appreciated. Some of the commands I ran gave the following results:

    # lspci
    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12)
    00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 12)
    00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
    00:1a.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
    00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 06)
    00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06)
    00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a6)
    00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 06)
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06)
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
    00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 06)
    03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
    04:00.0 Ethernet controller: Atheros Communications Inc. AR8152 v1.1 Fast Ethernet (rev c1)
    ff:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
    ff:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
    ff:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
    ff:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
    ff:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
    ff:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)

    # sudo lshw -C network
      *-network
           description: Wireless interface
           product: BCM4313 802.11b/g/n Wireless LAN Controller
           vendor: Broadcom Corporation
           physical id: 0
           bus info: pci@0000:03:00.0
           logical name: eth1
           version: 01
           serial: 70:f1:a1:49:b6:ab
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 ip=192.168.1.7 latency=0 multicast=yes wireless=IEEE 802.11
           resources: irq:17 memory:f0500000-f0503fff
      *-network
           description: Ethernet interface
           product: AR8152 v1.1 Fast Ethernet
           vendor: Atheros Communications Inc.
           physical id: 0
           bus info: pci@0000:04:00.0
           logical name: eth0
           version: c1
           serial: b8:ac:6f:6b:f7:4a
           capacity: 100Mbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI firmware=N/A latency=0 link=no multicast=yes port=twisted pair
           resources: irq:44 memory:f0400000-f043ffff ioport:2000(size=128)

    # lsmod | grep wl
    wl                   2568210  0
    lib80211               14381  2 lib80211_crypt_tkip,wl

    # sudo iwlist eth1 scanning
    eth1      Scan completed :
              Cell 01 - Address: 30:46:9A:85:DA:9A
                        ESSID:"BH DASHIR 2"
                        Mode:Managed
                        Frequency:2.462 GHz (Channel 11)
                        Quality:4/5  Signal level:-60 dBm  Noise level:-98 dBm
                        IE: IEEE 802.11i/WPA2 Version 1
                            Group Cipher : CCMP
                            Pairwise Ciphers (1) : CCMP
                            Authentication Suites (1) : PSK
                        IE: Unknown: DD7F0050F204104A00011010440001021041000100103B000103104700109AFE7D908F8E2D381860668BA2E8D8771021000D4E4554474541522C20496E632E10230009574752363134763130102400095747523631347631301042000538333235381054000800060050F204000110110009574752363134763130100800020084
                        Encryption key:on
                        Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                                  24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s
                                  12 Mb/s; 48 Mb/s
              Cell 02 - Address: C0:3F:0E:EB:45:14
                        ESSID:"BH DASHIR 3"
                        Mode:Managed
                        Frequency:2.462 GHz (Channel 11)
                        Quality:2/5  Signal level:-71 dBm  Noise level:-98 dBm
                        IE: IEEE 802.11i/WPA2 Version 1
                            Group Cipher : CCMP
                            Pairwise Ciphers (1) : CCMP
                            Authentication Suites (1) : PSK
                        IE: Unknown: DD7F0050F204104A00011010440001021041000100103B00010310470010F3C9BBE499D140540F530E7EBEDE2F671021000D4E4554474541522C20496E632E10230009574752363134763130102400095747523631347631301042000538333235381054000800060050F204000110110009574752363134763130100800020084
                        Encryption key:on
                        Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s
                                  24 Mb/s; 36 Mb/s; 54 Mb/s; 6 Mb/s; 9 Mb/s
                                  12 Mb/s; 48 Mb/s
              Cell 03 - Address: A0:21:B7:A8:2F:C0
                        ESSID:"BH DASHIR 4"
                        Mode:Managed
                        Frequency:2.422 GHz (Channel 3)
                        Quality:1/5  Signal level:-86 dBm  Noise level:-98 dBm
                        IE: IEEE 802.11i/WPA2 Version 1
                            Group Cipher : CCMP
                            Pairwise Ciphers (1) : CCMP
                            Authentication Suites (1) : PSK
                        IE: Unknown: DD8B0050F204104A0001101044000102103B0001031047001000000000000010000000A021B7A82FC01021000D4E6574676561722C20496E632E10230009574E523130303076321024000456324831104200046E6F6E651054000800060050F20400011011001B574E5231303030763228576972656C6573732041502D322E344729100800020086103C000103
                        Encryption key:on
                        Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s
                                  9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s; 36 Mb/s
                                  48 Mb/s; 54 Mb/s
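    The "networks appear only after a suspend/resume cycle" symptom with the proprietary Broadcom wl driver often responds to simply reloading the module once the system is fully up, since that approximates the driver reset a suspend cycle performs. A hedged sketch for Ubuntu 12.04, added to /etc/rc.local above its final exit 0 line; whether it helps depends on why wl fails at boot:

    # /etc/rc.local fragment (sketch): reload the Broadcom wl module late in boot
    modprobe -r wl
    sleep 2
    modprobe wl

    If lsmod had also shown the in-kernel brcmsmac or bcma modules loaded alongside wl, blacklisting them would be the usual first step, since the two drivers conflict; here only wl appears, so the reload is the more likely candidate.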

