Search Results

Search found 1939 results on 78 pages for 'closing'.

Page 72 of 78

  • C# in Depth, Third Edition by Jon Skeet, Manning Publications Co. Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/10/24/c-in-depth-third-edition-by-jon-skeet-manning-publications.aspx I started reading this ebook on September 28, 2013, the same day Manning Publications Co. sent it my way for review, while it was still fresh off the press. So, first things first: thanks to Manning for the opportunity and for a free copy of a book that belongs on every C# developer's desk! A few hours ago I finished it (well, except for a large portion of its quite lengthy appendix), and I jumped into writing this review right away, while still full of impressions from reading it thoroughly and running the code examples.
    Before I go any further, a bit of background: I have programmed on various platforms in various languages, starting on the mainframe and ending up on Windows, and over time I gravitated toward databases more than anything else. Still, I programmed a great deal in C# 1 when it was first released, then some C# 2, with a big leap from there to C# 5, so my perception of the book may differ from yours. It is also somewhat funny that back then, knowing some Java and seeing C# 1 released, I initially drew a parallel and dismissed it as a copycat language; how wrong I was. Interestingly, Jon programs in Java full time, yet how little that is mentioned in the book!
    More on the book itself: be aware that this is not a typical "recipes" or "cookbook" title, nor a set of ready-made solutions. It targets mature, advanced developers who not only know how to use the language's features but want to understand how the language operates "under the hood". At the same time, I am glad the author did not descend into the murky depths of MSIL; for a book covering a modern language like C#, that was a very welcome decision. Thank you, Jon! Frankly, not everything was rosy about the tone and structure of the book: the first half or so left me with negative and positive impressions competing with each other. To expand on that, some statements struck me as biased, almost as if they carried a PR flavour, though thankfully that disappeared by the end of the first third of the book. Specifically, on the question of C#'s popularity: Java is the #1 language according to https://sites.google.com/site/pydatalog/pypl/PyPL-PopularitY-of-Programming-Language (many other sources put C at the top, which I highly doubt), and several interesting languages such as Clojure and Groovy have appeared and gained huge traction on top of the JVM, whereas C# does not enjoy a comparable situation. If we are talking about popularity in practical terms, such as how quickly a developer can find a well-paying job, the answer is usually Java, C++ or PHP, not C#. And the remark that language preference is a personal issue? We choose where to work, or are chosen, because of the technology a given software shop uses, not the other way around. The book is technically very accurate, with valid code and concise examples, but I wish the author had given more concrete, real-life examples of where each feature should be used, not just how.
    Another point to realize before you buy the book is that it is almost a living book: it started life before C# 3 even existed, so a great deal of ground (nearly half the book) covers the pre-C# 3 releases. If you already have a solid background in those earlier versions and do not plan to revisit them, you can probably skip half of the book; otherwise it is highly recommended. Alas, most of it was a hard read for me. Not boring (well, maybe twice), just hard to grasp in places. Do not get me wrong: it made me pause on several occasions and read and re-read a page or two, and at times I even wondered whether I have any IQ at all (LOL). Be prepared to read a lot about generics, and not because they are widely used in the field; as a consultant I have been through a lot of code at many places, and my impression is that developers today, at best, program from examples found at OpenStack.com. Also, unlike the Java world, where running the most recent version is all but mandated by the open-source ecosystem, most companies on the Microsoft platform are rarely tempted to upgrade their .NET version soon or often. As a side note, I was glad to see code recently that used a nullable variable (the myvariable? notation); it made me smile, and I recommended this book to that developer to expand her knowledge.
    The good things about this book are that Jon maintains an active forum, provides prepared code snippets, and even ships a small program (Snippy) that will happily run the sample code, saving you from writing any plumbing. A word now on the C# language itself: it has certainly enjoyed a wonderful road toward maturity and very high adoption, especially for ASP.NET development. To me, though, all the recent features that make this statically typed language more dynamic look strange. Don't we have F# for that? Why do we need a hybrid language? Developers now live in a dualism of static and dynamic variables! And LINQ to SQL is covered in depth, but wasn't it supposed to be dropped? It also seems that relatively little is being added, and at a slower pace; Roslyn will perhaps arrive in late 2014 and will probably be the only headline feature. Because the various chapters mention specific C# versions so often, it is hard to remember exactly what was covered where, and with so many jumps and links back and forth I recommend the ebook format to make navigation easier, along with reader software that supports bookmarking; also make sure you have access to plenty of coffee and pizza (you probably know the joke about what a programmer is). In closing, if you are stuck at the C# 1 or 2 level, it is time to embrace the power of C# 5! Finally, to compliment Manning, this book, unlike any other publisher's so far, was the only one that was as readable (and as well formatted) on my tablet as in Adobe Reader on a laptop.
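    As an aside for readers who have not met the myvariable? notation mentioned in the review, here is a minimal C# sketch of nullable value types (the variable names are illustrative and not taken from the book):
    using System;
    class NullableDemo
    {
        static void Main()
        {
            // "int?" is shorthand for System.Nullable<int>: a value type that can also hold "no value".
            int? unitsOnOrder = null;
            if (unitsOnOrder.HasValue)
                Console.WriteLine("On order: " + unitsOnOrder.Value);
            else
                Console.WriteLine("Quantity unknown");
            // The null-coalescing operator supplies a fallback when the value is missing.
            int effective = unitsOnOrder ?? 0;
            Console.WriteLine("Effective quantity: " + effective);
        }
    }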

    Read the article

  • What does it mean when a User-Agent has another User-Agent inside it?

    - by Erx_VB.NExT.Coder
    Basically, sometimes a user-agent will have its normal user-agent string displayed, and then at the end it will show the "User-Agent: " tag followed by a second user-agent. Sometimes the second user-agent is simply appended to the first one without the "User-Agent: " tag. Here are some samples I've seen. The first few contain the "User-Agent: " tag somewhere in the middle, and I've changed its font to make it easier to see.
    Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Trident/4.0; GTB6; User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506)
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6; MRA 5.10 (build 5339); User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 1.1.4322; .NET CLR 2.0.50727)
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; User-agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152)
    Here are some without the "User-Agent: " tag in the middle, just two user agents that seem stitched together.
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); .NET CLR 3.5.30729)
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6; IPMS/6568080A-04A5AD839A9; TCO_20090713170733; Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1); InfoPath.2)
    Now, a few notes on this. I understand that "User-Agent: " is normally a header name, and that what follows it is the actual user agent sent to servers; the "User-Agent: " string itself should not be part of the user agent value, it is more of a prefix or tag indicating that what follows is the user agent. Additionally, I might have assumed these were just two user agents pasted together, but on closer inspection you realize they are not. In all of these dual listings, if you look at the opening bracket "(" just before the "compatible" keyword, its matching closing bracket ")" is at the very end, after the second user agent. So the first user agent's closing bracket never occurs before the second user agent begins; it is always right at the end, and therefore the second user agent reads more like one of the features of the first, like "Trident/4.0" or "GTB6". The other thing to note is that the second user agent is always MSIE 6.0 (Internet Explorer 6.0), which is interesting. My initial thought was that it is some sort of virtual machine displaying both the browser in use and the browser that is installed, but then, what would be the point of that? Right now I am thinking it is probably some sort of "Compatibility View" feature: even if MSIE 7.0 or 8.0 is installed, when a hypothetical "Display in Internet Explorer 6.0" mode is turned on, the user agent changes to something like this. That is, IE 8.0 is installed but is rendering everything as IE 6.0 would. Is there, or was there, such a feature in Internet Explorer? Am I on to something here? What are your thoughts? If you have any other ideas, please feel free to share them.
    At the moment I'm just trying to understand whether these are valid user agents or not. In a list of about 44,000 user agents, I've seen this type of dual user agent about 400 times. I've closely inspected 40 of them, and every single one had MSIE 6.0 as the "second" user agent (with the first user agent a higher version of MSIE, such as 7 or 8). This was true for all except one, where both user agents were MSIE 8.0; here it is: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; Mozilla/4.0 (compatible; MSIE 8.0; Win32; GMX); GTB0.0) That occurred once in my 40 "close" inspections. I estimated the 400 in 44,000 by taking a sample of the first 4,400 user agents, finding 40 of these among the MSIE/Windows user agents, and extrapolating that to an estimate of 400. Similar things also occur for non-MSIE user agents, where there are two Mozillas in one user agent; the non-MSIE ones would probably add another 30% on top of the ones I've noted. I can show samples of them if anyone would like. So there we have it, this is where I'm at; what do you think?
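    A minimal C# sketch of the heuristic described above – flag a string as a "dual" user agent when it contains an embedded "User-agent:" tag or a second "Mozilla/" token – might look like the following (the class and method names are made up for illustration):
    using System;
    static class UserAgentChecks
    {
        // Returns true when the string looks like one user agent nested inside another,
        // i.e. it contains an embedded "User-agent:" tag or a second "Mozilla/" token.
        public static bool LooksLikeDualUserAgent(string ua)
        {
            if (string.IsNullOrEmpty(ua))
                return false;
            int firstMozilla = ua.IndexOf("Mozilla/", StringComparison.OrdinalIgnoreCase);
            int secondMozilla = firstMozilla < 0
                ? -1
                : ua.IndexOf("Mozilla/", firstMozilla + 1, StringComparison.OrdinalIgnoreCase);
            bool hasEmbeddedTag = ua.IndexOf("User-agent:", StringComparison.OrdinalIgnoreCase) > 0;
            return hasEmbeddedTag || secondMozilla > 0;
        }
    }
    Running a check like this over the full list of 44,000 agents would give a quick way to sanity-check the estimate of roughly 400 dual entries.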

    Read the article

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such its clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea being that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :) Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nut shell the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values. [BrowserTheory] [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")] public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider) {     //Arrange     _browser = seleniumProvider.GetBrowser();     _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");                  //Act     _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");     //Assert     var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);         Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));     Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));     Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total")); } These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to execute these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up. 
Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately,  the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices: new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com"); This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page. Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this: [Test] public void Purchase1UserLicenceNoSupport() {    //Arrange    ISelenium browser = BrowserHelpers.GetBrowser();    var db = DbHelpers.GetWebsiteDBDataContext();    browser.Start();    browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");                 //Act     browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");    var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);    //Assert     Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));     Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));     Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total")); } This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed,  the //Arrange phase now has to handle setting up the ISelenium object, as the attribute that previously did this has gone away, and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test. 
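    The MbUnit example above calls a BrowserHelpers.GetBrowser() method that is not shown in the excerpt; a plausible minimal implementation (an assumption on my part, not the author's actual code) simply points the Selenium RC client library at the grid hub, which, as described above, is addressed exactly like a single remote control:
    using Selenium; // Selenium RC .NET client library (ThoughtWorks.Selenium.Core)
    public static class BrowserHelpers
    {
        // Hypothetical helper: the grid hub looks just like a plain RC to the client,
        // so we only need its host and port (4444 by default). The test itself calls
        // Start() on the returned session and closes it when it is done.
        public static ISelenium GetBrowser()
        {
            return new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com");
        }
    }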
With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in the AssemblyInfo.cs using the following: [assembly: DegreeOfParallelism(3)] [assembly: Parallelizable(TestScope.All)] The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs. With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes, to 55 minutes, that's an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that either inefficiencies in the hub code, my test environment or the test runner code (or some combination of all three most likely) contributes to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RC's with the hub, and up the DegreeOfParallelism. *This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm.  The Oracle HCM cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work. Release 8  Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8.  In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and the with the user interface text editor, which allows you to update language throughout the UI from one place. If you don’t like calling people who work for you “employees,” you can use this tool to create a term that is suited to your business.  Take a look yourself at what’s available now. What are people saying?Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: “Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use.”  In an interview with Lilley for a blog post by Dennis Howlett  (@dahowlett), we probably couldn’t have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture.  In closing, Howlett’s said: “There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before.” Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. “We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content.  From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs.  When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. 
No training and no explanation required.” Finally, in a story by Computer Weekly  about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: “The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology.” What’s Next for Oracle’s Applications Cloud User Experiences? This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we’ve been hard at work for some time now on “what’s next.”  I can’t say too much about it, but I can tell you that we’ve started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries. Photo by Misha Vaughan, Oracle Applications User Experience Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.  We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and are now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: “One of the major areas for improvement was on role clarification within the company.” She said the company is “empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."  Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?” he asked. “Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport." Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go. Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke. 
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts. We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle’s design strategy is around these ideas.   We closed with an excellent reception hosted by ADP Payroll services at Bistango. Want to read more?Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: “Glance, Scan, Commit.” Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site.  You can also find out where we’ll be next at the Events page on UsableApps.

    Read the article

  • Use Those Extra Mouse Buttons to Increase Efficiency

    - by Mark Virtue
    Did you know that the most commonly used mouse actions are clicking a window's "Close" button (the X in the top-right corner) and clicking the "Back" button (in a browser and various other programs)?  How much time do you spend every day locating the Close button or the Back button with your mouse so that you can click on them?  And what about the mouse you're using – how many buttons does it have, besides the two main ones?  Most mice these days have at least four (including the scroll wheel, which a lot of people don't realize is also a button).  Why not assign those extra buttons to your most common mouse actions, and save yourself a bundle of mousing-around time every day? If your mouse was made by one of the "premium" mouse manufacturers (Microsoft, Logitech, etc.), it almost certainly came with driver software that lets you customize its controls and take advantage of its special features.  Microsoft, for example, provides driver software called IntelliPoint (link below), while Logitech provides SetPoint.  It's possible that your mouse has some extra buttons but doesn't come with its own driver software (the author is using a Microsoft Bluetooth Notebook Mouse 5000, which amazingly is not supported by the Microsoft IntelliPoint software!).  If your mouse falls into this category, you can use a marvelous free product called X-Mouse Button Control, from Highresolution Enterprises (link below).  It provides a truly amazing array of mouse configuration options, including assigning actions to buttons on a per-application basis. The setup process for X-Mouse Button Control is quite straightforward. Once it is downloaded and installed, you can start the program via Start / Highresolution Enterprises / X-Mouse Button Control.  You will find the program's icon in the system tray: Right-click on the icon and select Setup from the pop-up menu.  The program's configuration window appears: It's extremely unlikely that we will want to change the functionality of our mouse's two main buttons (left and right), so instead we'll look at the rest of the options on the right side of the window.  The Middle Button refers either to the third, middle button (found on some older mice) or to pressing the wheel itself as a button (if you didn't know you could press your wheel like a button, try it out now).  Mouse Button 4 and Mouse Button 5 usually refer to the extra buttons found on the side of the mouse, often near your thumb. So what can we use these extra mouse buttons for?  Well, clearly Close and Back are two obvious candidates.  Each can be chosen from the drop-down menu next to the corresponding button field: Once the two options are chosen, the window will look something like this: If you're not interested in choosing Back or Close, you may like to try some of the other options in the list, including: Cut, Copy and Paste; Undo; Show the Desktop; Next/Previous track (for media playback); Open any program; Simulate any keystroke or combination of keystrokes; and many other options.  Explore the drop-down list to see them all. You may decide, for example, that closing the current document (as opposed to the current program) would be a good use for Mouse Button 5.  In other words, we need to simulate the keypress of Ctrl-F4.  Let's see how we achieve this. First we select Simulated Keystrokes from the drop-down list: The Simulated Keystrokes window opens: The instructions on the page are pretty comprehensive.
    If you want to simulate the Ctrl-F4 keystroke, you need to type {CTRL}{F4} into the box, and then click OK. Assigning Actions to Buttons on a Per-Application Basis One of the most powerful features of X-Mouse Button Control is the ability to assign actions to buttons on a per-application basis.  This means that when a particular program is open, our mouse will behave differently – the buttons will do different things. For example, when we have Windows Media Player open, we may wish to have buttons assigned to Play/Pause, Next track and Previous track, as well as changing the volume with the mouse!  This is easy with X-Mouse Button Control.  We start by opening Windows Media Player, which makes the next step easier.  Then we return to X-Mouse Button Control and add a new "configuration".  This is done by clicking the Add button: A window opens containing a list of all running programs, including our recently opened Windows Media Player: We select Windows Media Player and click OK.  A new, blank "configuration" is created: We repeat the earlier steps to assign buttons to Play/Pause, Next track and Previous track, and assign scrolling the wheel to alter the volume: To save all our changes and close the window, we click Apply. Now spend a few minutes thinking of the applications you use the most, and the most common simple tasks you perform in each of them.  Those tasks are perfect candidates for per-application button assignments. There are many more configuration options and capabilities in X-Mouse Button Control – too many to list here.  We encourage you to spend a bit of time exploring the Setup window.  Then, most important of all, don't forget to use your new mouse buttons!  Get into the habit of using them, and after a while you'll start to wonder how you ever tolerated the laborious, tedious, time-consuming process of actually locating each window's Close button… Download X-Mouse Button Control from Highresolution Enterprises.
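    As an aside, the same Ctrl-F4 keystroke discussed above can also be simulated from a few lines of C# using Windows Forms SendKeys; this is not a feature of X-Mouse Button Control, just a rough sketch of what "simulate any keystroke" boils down to (the keystroke goes to whichever window currently has focus):
    using System.Windows.Forms; // requires a reference to System.Windows.Forms.dll
    static class KeystrokeDemo
    {
        // "^" means Ctrl and "{F4}" is the F4 key, so this asks the focused
        // application to close its current document (the Ctrl-F4 convention).
        public static void CloseCurrentDocument()
        {
            SendKeys.SendWait("^{F4}");
        }
    }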

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    Minification and Concatination of JavaScript and CSS Files Versioning Combined Files Using Subversion Versioning Combined Files Using Mercurial – this post I have worked on a project recently where there was a need to version the system (library dll, css and javascript files) by date and Mercurial revision number. This was in the format:- 0.12.524.407 {major}.{year}.{month}{date}.{mercurial revision} Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly also had to have a new GUID on each build. So like in previous posts, we need to edit the csproj file, and add a couple of Default targets. 1: <?xml version="1.0" encoding="utf-8"?> 2: <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build" 3: xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> 4: <PropertyGroup> Right below the closing tag of the entire project we add our two targets, the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild which can be downloaded from http://msbuildhg.codeplex.com/ 1: <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />   1: <Target Name="Hg-Revision"> 2: <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000" 3: LibraryLocation="C:\TortoiseHg\"> 4: <Output TaskParameter="Revision" PropertyName="Revision" /> 5: </HgVersion> 6: <Message Text="Last revision from HG: $(Revision)" /> 7: </Target> With the main Mercurial files being located at c:\TortoiseHg To get a valid GUID we need to escape from the csproj markup and call some c# code which we put in a property group for later reference. 1: <PropertyGroup> 2: <GuidGenFunction> 3: <![CDATA[ 4: public static string ScriptMain() { 5: return System.Guid.NewGuid().ToString().ToUpper(); 6: } 7: ]]> 8: </GuidGenFunction> 9: </PropertyGroup> Now we add in our target for generating the GUID. 
1: <Target Name="AssemblyInfo"> 2: <Script Language="C#" Code="$(GuidGenFunction)"> 3: <Output TaskParameter="ReturnValue" PropertyName="NewGuid" /> 4: </Script> 5: <Time Format="yy"> 6: <Output TaskParameter="FormattedTime" PropertyName="year" /> 7: </Time> 8: <Time Format="Mdd"> 9: <Output TaskParameter="FormattedTime" PropertyName="daymonth" /> 10: </Time> 11: <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs" 12: AssemblyTitle="name" AssemblyDescription="description" 13: AssemblyCompany="none" AssemblyProduct="product" 14: AssemblyCopyright="Copyright ©" 15: ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)" 16: AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)" 17: AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" /> 18: </Target> So this will give use an AssemblyInfo.cs file like this just prior to calling the Build task:- 1: using System; 2: using System.Reflection; 3: using System.Runtime.CompilerServices; 4: using System.Runtime.InteropServices; 5:  6: [assembly: AssemblyTitle("name")] 7: [assembly: AssemblyDescription("description")] 8: [assembly: AssemblyCompany("none")] 9: [assembly: AssemblyProduct("product")] 10: [assembly: AssemblyCopyright("Copyright ©")] 11: [assembly: ComVisible(false)] 12: [assembly: CLSCompliant(true)] 13: [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")] 14: [assembly: AssemblyVersion("0.12.524.407")] 15: [assembly: AssemblyFileVersion("0.12.524.407")] Therefore giving us the correct version for the assembly. This can be referenced within your project whether web or Windows based like this:- 1: public static string AppVersion() 2: { 3: return Assembly.GetExecutingAssembly().GetName().Version.ToString(); 4: } As mentioned in previous posts in this series, you can label css and javascript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library build into the .Net framework. 1: <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll"> 2: <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" /> 3: </GetAssemblyIdentity> Then use this to write out the files:- 1: <WriteLinestoFile 2: File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css" 3: Lines="@(CSSLinesSite)" Overwrite="true" />
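    To make the {major}.{year}.{month}{date}.{mercurial revision} scheme concrete, here is a small C# sketch that builds the same string outside MSBuild, mirroring the "yy" and "Mdd" Time formats used above (the values passed in are only an example, not taken from the post):
    using System;
    static class VersionDemo
    {
        // Mirrors the MSBuild Time task formats: "yy" for the two-digit year and "Mdd" for month + day.
        public static string BuildVersion(int major, int hgRevision, DateTime buildDate)
        {
            return string.Format("{0}.{1}.{2}.{3}",
                major,
                buildDate.ToString("yy"),
                buildDate.ToString("Mdd"),
                hgRevision);
        }
    }
    // Example: BuildVersion(0, 407, new DateTime(2012, 5, 24)) returns "0.12.524.407".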

    Read the article

  • Creating A SharePoint Parent/Child List Relationship&ndash; SharePoint 2010 Edition

    - by Mark Rackley
    Hey blog readers… It has been almost 2 years since I posted my most read blog on creating a Parent/Child list relationship in SharePoint 2007: Creating a SharePoint List Parent / Child Relationship - Out of the Box And then a year ago I improved on my method and redid the blog post… still for SharePoint 2007: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX Since then many of you have been asking me how to get this to work in SharePoint 2010, and frankly I have just not had time to look into it. I wish I could have jumped into this sooner, but have just recently began to look at it. Well.. after all this time I have actually come up with two solutions that work, neither of them are as clean as I’d like them to be, but I wanted to get something in your hands that you can start using today. Hopefully in the coming weeks and months I’ll be able to improve upon this further and give you guys some better options. For the most part, the process is identical to the 2007 process, but you have probably found out that the list view web parts in 2010 behave differently, and getting the Parent ID to your new child form can be a pain in the rear (at least that’s what I’ve discovered). Anyway, like I said, I have found a couple of solutions that work. If you know of a better one, please let us know as it bugs me that this not as eloquent as my 2007 implementation. Getting on the same page First thing I’d recommend is recreating this blog: Creating a SharePoint List Parent/Child Relationship – VIDEO REMIX in SharePoint 2010… There are some vague differences, but it’s basically the same…  Here’s a quick video of me doing this in SP 2010: Creating Lists necessary for this blog post Now that you have the lists created, lets set up the New Time form to use a QueryString variable to populate the Parent ID field: Creating parameters in Child’s new item form to set parent ID Did I talk fast enough through both of those videos? Hopefully by now that stuff is old hat to you, but I wanted to make sure everyone could get on the same page.  Okay… let’s get started. Solution 1 – XSLTListView with Javascript This solution is the more elegant of the two, however it does require the use of a little javascript.  The other solution does not use javascript, but it also doesn’t use the pretty new SP 2010 pop-ups.  I’ll let you decide which you like better. The basic steps of this solution are: Inserted a Related Item View Insert a ContentEditorWebPart Insert script in ContentEditorWebPart that pulls the ID from the Query string and calls the method to insert a new item on the child entry form Hide the toolbar from data view to remove “add new item” link. Again, you don’t HAVE to use a CEWP, you could just put the javascript directly in the page using SPD.  
Anyway, here is how I did it: Using Related Item View / JavaScript Here’s the JavaScript I used in my Content Editor Web Part: <script type="text/javascript"> function NewTime() { // Get the Query String values and split them out into the vals array var vals = new Object(); var qs = location.search.substring(1, location.search.length); var args = qs.split("&"); for (var i=0; i < args.length; i++) { var nameVal = args[i].split("="); var temp = unescape(nameVal[1]).split('+'); nameVal[1] = temp.join(' '); vals[nameVal[0]] = nameVal[1]; } var issueID = vals["ID"]; //use this to bring up the pretty pop up NewItem2(event,"http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID); //use this to open a new window //window.location="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID=" + issueID; } </script> Solution 2 – DataFormWebPart and exact same 2007 Process This solution is a little more of a hack, but it also MUCH more close to the process we did in SP 2007. So, if you don’t mind not having the pretty pop-up and prefer the comforts of what you are used to, you can give this one a try.  The basics steps are: Insert a DataFormWebPart instead of the List Data View Create a Parameter on DataFormWebPart to store “ID” Query String Variable Filter DataFormWebPart using Parameter Insert a link at bottom of DataForm Web part that points to the Child’s new item form and passes in the Parent Id using the Parameter. See.. like I told you, exact same process as in 2007 (except using the DataFormWeb Part). The DataFormWebPart also requires a lot more work to make it look “pretty” but it’s just table rows and cells, and can be configured pretty painlessly.  Here is that video: Using DataForm Web Part One quick update… if you change the link in this solution from: <tr> <td><a href="http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}">Click here to create new item...</a> </td> </tr> to: <tr> <td> <a href="javascript:NewItem2(event,'http://sp2010dev:1234/Lists/Time/NewForm.aspx?IssueID={$IssueIDParam}');">Click here to create new item...</a> </td> </tr> It will open up in the pretty pop up and act the same as solution one… So… both Solutions will now behave the same to the end user. Just depends on which you want to implement. That’s all for now… Remember in both solutions when you have them working, you can make the “IssueID” invisible to users by using the “ms-hidden” class (it’s my previous blog post on the subject up there). That’s basically all there is to it! No pithy or witty closing this time… I am sorry it took me so long to dive into this and I hope your questions are answered. As I become more polished myself I will try to come up with a cleaner solution that will make everyone happy… As always, thanks for taking the time to stop by.

    Read the article

  • Why won't fetchmail work all of a sudden?

    - by SirCharlo
    I ran a chmod 777 * on my home folder. (I know, I know. I'll never do it again.) Ever since then, fetchmail seems to be broken. I use it to fetch mail from an Exchange 2003 mailbox through DAVMail and OWA. The problem is that fetchmail complains about an "expunge mismatch" whenever I get a new message. It deletes the message from the Exchange mailbox, yet it never forwards it. There seems to be a problem somwhere along the mail processing, but I haven't been able to pinpoint where. Any help would be appreciated. Here are the relevant config files. ~/fetchmailrc: set no bouncemail defaults: antispam -1 batchlimit 100 poll localhost with protocol imap and port 1143 user domain\\user password Password is root no rewrite mda "/usr/bin/procmail -f %F -d %T"; ~/procmailrc: :0 * ^Subject.*ack | expand | sed -e 's/[ ]*$//g' | sed -e 's/^/ /' > /usr/local/nagios/libexec/mail_acknowledgement ~/.forward: | "/usr/bin/procmail" And here is the output when I run fetchmail -f /root/.fetchmailrc -vv: fetchmail: WARNING: Running as root is discouraged. Old UID list from localhost: <empty> Scratch list of UIDs: <empty> fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll started Trying to connect to 127.0.0.1/1143...connected. fetchmail: IMAP< * OK [CAPABILITY IMAP4REV1 AUTH=LOGIN] IMAP4rev1 DavMail 3.9.7-1870 server ready fetchmail: IMAP> A0001 CAPABILITY fetchmail: IMAP< * CAPABILITY IMAP4REV1 AUTH=LOGIN fetchmail: IMAP< A0001 OK CAPABILITY completed fetchmail: Protocol identified as IMAP4 rev 1 fetchmail: GSSAPI error gss_inquire_cred: Unspecified GSS failure. Minor code may provide more information fetchmail: GSSAPI error gss_inquire_cred: fetchmail: No suitable GSSAPI credentials found. Skipping GSSAPI authentication. fetchmail: If you want to use GSSAPI, you need credentials first, possibly from kinit. fetchmail: IMAP> A0002 LOGIN "domain\\user" * fetchmail: IMAP< A0002 OK Authenticated fetchmail: selecting or re-polling default folder fetchmail: IMAP> A0003 SELECT "INBOX" fetchmail: IMAP< * 1 EXISTS fetchmail: IMAP< * 1 RECENT fetchmail: IMAP< * OK [UIDVALIDITY 1] fetchmail: IMAP< * OK [UIDNEXT 344] fetchmail: IMAP< * FLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk) fetchmail: IMAP< * OK [PERMANENTFLAGS (\Answered \Deleted \Draft \Flagged \Seen $Forwarded Junk)] fetchmail: IMAP< A0003 OK [READ-WRITE] SELECT completed fetchmail: 1 message waiting after first poll fetchmail: IMAP> A0004 EXPUNGE fetchmail: IMAP< A0004 OK EXPUNGE completed fetchmail: 1 message waiting after expunge fetchmail: IMAP> A0005 SEARCH UNSEEN fetchmail: IMAP< * SEARCH 1 fetchmail: 1 is unseen fetchmail: IMAP< A0005 OK SEARCH completed fetchmail: 1 is first unseen 1 message for domain\user at localhost. fetchmail: IMAP> A0006 FETCH 1 RFC822.SIZE fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.SIZE 1350) fetchmail: IMAP< A0006 OK FETCH completed fetchmail: IMAP> A0007 FETCH 1 RFC822.HEADER fetchmail: IMAP< * 1 FETCH (UID 343 RFC822.HEADER {1350} reading message domain\user@localhost:1 of 1 (1350 header octets) fetchmail: about to deliver with: /usr/bin/procmail -f '[email protected]' -d 'root' # fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< Bonne journ=E9e.. 
fetchmail: IMAP< fetchmail: IMAP< Company Name fetchmail: IMAP< My Name fetchmail: IMAP< IT fetchmail: IMAP< Tel: (XXX) XXX-XXXX xXXX fetchmail: IMAP< www.domain.com=20 fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< -----Message d'origine----- fetchmail: IMAP< De=A0: User [mailto:[email protected]]=20 fetchmail: IMAP< Envoy=E9=A0: 2 juillet 2012 15:50 fetchmail: IMAP< =C0=A0: Informatique fetchmail: IMAP< Objet=A0: PROBLEM: photo fetchmail: IMAP< fetchmail: IMAP< Notification Type: PROBLEM fetchmail: IMAP< Author:=20 fetchmail: IMAP< Comment:=20 fetchmail: IMAP< fetchmail: IMAP< Host: Photos fetchmail: IMAP< Hostname: photo fetchmail: IMAP< State: DOWN fetchmail: IMAP< Address: XXX.XX.X.XX fetchmail: IMAP< fetchmail: IMAP< Date/Time: Mon Jul 2 15:49:38 EDT 2012 fetchmail: IMAP< fetchmail: IMAP< Info: CRITICAL - XXX.XX.X.XX: rta nan, lost 100% fetchmail: IMAP< fetchmail: IMAP< fetchmail: IMAP< ) fetchmail: IMAP< A0007 OK FETCH completed fetchmail: IMAP> A0008 FETCH 1 BODY.PEEK[TEXT] fetchmail: IMAP< * 1 FETCH (UID 343 BODY[TEXT] {539} (539 body octets) ******************************* fetchmail: IMAP< ) fetchmail: IMAP< A0008 OK FETCH completed flushed fetchmail: IMAP> A0009 STORE 1 +FLAGS (\Seen \Deleted) fetchmail: IMAP< * 1 FETCH (UID 343 FLAGS (\Seen \Deleted)) fetchmail: IMAP< * 1 EXPUNGE fetchmail: IMAP< A0009 OK STORE completed fetchmail: IMAP> A0010 EXPUNGE fetchmail: IMAP< A0010 OK EXPUNGE completed fetchmail: mail expunge mismatch (0 actual != 1 expected) fetchmail: IMAP> A0011 LOGOUT fetchmail: IMAP< * BYE Closing connection fetchmail: IMAP< A0011 OK LOGOUT completed fetchmail: client/server synchronization error while fetching from domain\user@localhost fetchmail: 6.3.19 querying localhost (protocol IMAP) at Tue 03 Jul 2012 09:46:36 AM EDT: poll completed Merged UID list from localhost: <empty> fetchmail: Query status=7 (ERROR) fetchmail: normal termination, status 7

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the nvidia CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (installation required that no X server instance be running) through sudo service lightdm stop. Re-starting lightdm with sudo service lightdm start did not work: A couple of * Starting [...] lines where displayed, but the process hanged. (I do not remember at which point, but I think it was * Starting System V runlevel compatibility. I manually rebooted my laptop, and ever since booting seems to hang, usually around the * Starting anac(h)ronistic cron [OK] log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1). I've tried purging and reinstalling both lightdm and gdm, as well as selecting both as the default display managers (through sudo dpkg-reconfigure [lightdm / gdm] or by manually editing /etc/X11/default-display-manager) through both apt-get and aptitude (that shouldn't make a difference anyway) after updating the packages, but the problem persists. Some of the responses I'm getting are the following: After running sudo dpkg-reconfigure lightdm (but not ... gdm) I get the following message: dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_NAME missing dpkg-maintscript-helper:warning: environment variable DPKG_MATINSCRIPT_PACKAGE missing After trying sudo service lightdm start or sudo start lightdm I get to see the boot loading screen again but nothing changes. If I go back to the tty shell I see lightdm start/running, process <num> but ps -e | grep lightdm gives no output. After trying sudo service gdm start or sudo starg gdm I get the gdm start/running, process <num> message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else. Other candidate solutions that I'd found on the web included running startx but when I try that I get an error output [...] Fatal server error: no screens found [...]. Moreover, I made sure that lightdm-gtk-greeter is installed but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to manually copy the complete logs. Also, I've seen several posts that had to do with similar problems, but either there was no fix, or the one suggested did not work for me. In closing: Please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs nvidia drivers during its installation, I believe). Issuing the mentioned commands I get the following: ~$uname -r 3.0.0-12-generic ~$lsmod | grep -i nvidia nvidia 11713772 0 ~$dmesg | grep -i nvidia [ 8.980041] nvidia: module license 'NVIDIA' taints kernel. 
[ 9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0 [ 9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007) [ 9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 9.354879] nvidia 0000:01:00.0: setting latency timer to 64 [ 9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011 Also, running aptitude search nvidia gives me the following: p nvidia-173 - NVIDIA binary Xorg driver, kernel module a p nvidia-173-dev - NVIDIA binary Xorg driver development file p nvidia-173-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-173-updates-dev - NVIDIA binary Xorg driver development file p nvidia-96 - NVIDIA binary Xorg driver, kernel module a p nvidia-96-dev - NVIDIA binary Xorg driver development file p nvidia-96-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-96-updates-dev - NVIDIA binary Xorg driver development file p nvidia-cg-toolkit - Cg Toolkit - GPU Shader Authoring Language p nvidia-common - Find obsolete NVIDIA drivers i nvidia-current - NVIDIA binary Xorg driver, kernel module a p nvidia-current-dev - NVIDIA binary Xorg driver development file c nvidia-current-updates - NVIDIA binary Xorg driver, kernel module a p nvidia-current-updates-dev - NVIDIA binary Xorg driver development file i nvidia-settings - Tool of configuring the NVIDIA graphics dr p nvidia-settings-updates - Tool of configuring the NVIDIA graphics dr v nvidia-va-driver - v nvidia-va-driver - I've tried manually installing (sudo aptitude install <package>) packages nvidia-common and nvidia-settings-updates but to no avail. For example, sudo aptitude install nvidia-settings-updates returns the following log: Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used. Writing extended state information... Reading package lists... Building dependency tree... Reading state information... Reading extended state information... Initializing package states... Writing extended state information... The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic). 
The output of aptitude search linux-headers is as follows: v linux-headers - v linux-headers - v linux-headers-2.6 - i linux-headers-2.6.38-11 - Header files related to Linux kernel versi i linux-headers-2.6.38-11-generic - Linux kernel headers for version 2.6.38 on i A linux-headers-2.6.38-8 - Header files related to Linux kernel versi i A linux-headers-2.6.38-8-generic - Linux kernel headers for version 2.6.38 on v linux-headers-3 - v linux-headers-3.0 - v linux-headers-3.0 - i A linux-headers-3.0.0-12 - Header files related to Linux kernel versi p linux-headers-3.0.0-12-generic - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-generic- - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-server - Linux kernel headers for version 3.0.0 on p linux-headers-3.0.0-12-virtual - Linux kernel headers for version 3.0.0 on p linux-headers-generic - Generic Linux kernel headers p linux-headers-generic-pae - Generic Linux kernel headers v linux-headers-lbm - v linux-headers-lbm - v linux-headers-lbm-2.6 - v linux-headers-lbm-2.6 - p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-gene - Header files related to linux-backports-mo p linux-headers-lbm-3.0.0-12-serv - Header files related to linux-backports-mo p linux-headers-server - Linux kernel headers on Server Equipment. p linux-headers-virtual - Linux kernel headers for virtual machines @heartsmagic I did try purging and reinstalling any nvidia driver packages, but it did not seem to make a difference, My xorg.conf file contains the following: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" HorizSync 28.0 - 33.0 VertRefresh 43.0 - 72.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts.  To start this out, I want to talk about how I use VCS in a team environment.  These come in a series of tips or best practices that I try to follow.  Note: This list is subject to change in the future. Always use some form of version control for all aspects of software development. Development is an evolution.  Looking back at where we were is an invaluable asset in that process.  This includes data schemas and documentation. Reverting / reapplying changes is absolutely critical for efficient development. The tools I use: Code: Hg (preferred), SVN Database: TSqlMigrations Documents: Sometimes in code repository, also SharePoint with versioning Always tag a commit (changeset) with comments This is a quick way to describe to someone else (or your future self) what the changeset entails. Be brief but courteous. One or two sentences about the task, not the actual changes. Use precommit hooks or setup the central repository to reject changes without comments. Link changesets to documentation If your project management system integrates with version control, or has a way to externally reference stories, tasks etc then leave a reference in the commit.  This helps locate more information about the commit and/or related changesets. It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget. Ability to work offline is required, including commits and history Yes this requires a DVCS locally but doesn’t require the central repository to be a DVCS.  I prefer to use either Git or Hg but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository. Never lock resources (files) in a central repository… Rude! We have merge tools for a reason, merging sucked a long time ago, it doesn’t anymore… stop locking files! This is unproductive, rude and annoying to other team members. Always review everything in your commit. Never ever commit a set of files without reviewing the changes in each. Never add a file without asking yourself, deep down inside, does this belong? If you leave to make changes during a review, start the review over when you come back.  Never assume you didn’t touch a file, double check. This is another reason why you want to avoid large, infrequent commits. Requirements for tools Quickly show pending changes for the entire repository. Default action for a resource with pending changes is a diff. Pluggable diff & merge tool Produce a unified diff or a diff of all changes.  This is helpful to bulk review changes instead of opening each file. The central repository is not your own personal dump yard.  Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times. If you turn on Visual Studio’s commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again. Commit (integrate) to the central repository / branch frequently I try to do this before leaving each day, especially without a DVCS.  One never knows when they might need to work from remote the following day. Never commit commented out code If it isn’t needed anymore, delete it! If you aren’t sure if it might be useful in the future, delete it! This is why we have history. 
If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it. Don’t commit build artifacts, user preferences and temporary files. Build artifacts do not belong in VCS, everything in them is present in the code. (ie: bin\*, obj\*, *.dll, *.exe) User preferences are your settings, stop overriding my preferences files! (ie: *.suo and *.user files) Most tools allow you to ignore certain files and Hg/Git allow you to version this as an ignore file.  Set this up as a first step when creating a new repository! Be polite when merging unresolved conflicts. Count to 10, cuss, grab a stress ball and realize it’s not a big deal.  Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them. Following the other rules, especially committing frequently, will reduce the likelihood of this. Suck it up, we all have to deal with this unintended consequence at times.  Just be careful and GET FAMILIAR with your merge tool.  It’s really not as scary as you think.  I personally prefer KDiff3 as its merging capabilities rock. Don’t blindly merge and then blindly commit your changes, this is rude and unprofessional.  Make sure you understand why the conflict occurred and which parts of the code you want to keep.  Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests) Become intimate with your version control system and the tools you use with it. Avoid trial and error as much as is possible, sit down and test the tool out, read some tutorials etc.  Create test repositories and walk through common scenarios. Find the most efficient way to do your work.  These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and hg cli to get the job efficiently. Always tag releases Create a way to find a given release, whether this be in comments or an explicit tag / branch.  This should be readily discoverable. Create release branches to patch bugs and then merge the changes back to other development branch(es). If using feature branches, strive for periodic integrations. Feature branches often cause forked code that becomes irreconcilable.  Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into.  This will avoid merge conflicts in the future. Feature branches are best when they are mutually exclusive of active development in other branches. Use and abuse local commits , at least one per task in a story. This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete. Never commit a broken build or failing tests to the central repository. It’s ok for a local commit to break the build and/or tests.  In fact, I encourage this if it helps group the changes more logically.  This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes). If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. 
Exceptions arise when maintaining code bases that require shotgun surgery; in that case, it's a design smell :)
Don't version sensitive information, especially usernames and passwords.
There is one area where I haven't found a solution I like yet: versioning 3rd party libraries and/or code.  I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries.  Please feel free to share your ideas about this below.    -Wes
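As a concrete footnote to the ignore-file and precommit-hook tips above, here is a minimal Mercurial sketch. The ignore patterns simply mirror the ones named in the post (bin, obj, *.dll, *.exe, *.suo, *.user), and the hook is only one possible way to reject commits without a meaningful message; it is written in Unix shell syntax, so treat it as a starting point rather than the implementation, and adapt it (or use an in-process Python hook) for your own repositories.

.hgignore (versioned at the repository root):

syntax: glob
bin/*
obj/*
*.dll
*.exe
*.suo
*.user

.hg/hgrc on the central repository (Unix shell sketch):

[hooks]
# Abort the commit when the changeset description is missing or trivially short.
pretxncommit.checkmsg = test `hg log -r $HG_NODE --template '{desc}' | wc -c` -ge 10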

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk Environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from original mapping. The idea of redoing these maps did not appeal and due to the current work load was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.  Note WCF-SAP data formats (on the binding tab of the configuration dialog box is the ‘RecieiveIdocFormat’ field): Typed:  Returns a XML document with the hierarchy represented in XML and all fields being represented by XML tags. RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format. String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line. I started with the invoice document and it was quite straight forward to add the mapping but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After lot of head scratching I discovered the problem was due to the addition of Namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code. I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown. The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.   Our guess is that when the WCF-SAP adaptor tries to down load the data it retrieves a data schema from SAP. For some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx Reproduction of Mustansir blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. 
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like: <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc> (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Messge Body", select the "Path" radio button, and: (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData'] (b) Choose "String" for the Node Encoding. What we've done is, used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.   This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast… Note: When I followed Mustansir’s blog the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions. There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.". Although the new flat file looked the same as the old one there was a differences. In the original file all lines in the document were exactly 1064 character long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines. 
Execute method of the custom pipeline component:

public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
    // Convert the body stream to a string
    Stream s = null;
    IBaseMessagePart bodyPart = inmsg.BodyPart;

    // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the HTTP adapter API and as a
    // getter and setter for the file adapter. Use GetOriginalDataStream to get the data instead.
    if (bodyPart != null)
        s = bodyPart.GetOriginalDataStream();

    string newMsg = string.Empty;
    string strLine;
    try
    {
        StreamReader sr = new StreamReader(s);
        strLine = sr.ReadLine();
        while (strLine != null)
        {
            // Pad each line back out to 1064 characters and re-append the CRLF terminator
            strLine = strLine.PadRight(1064, ' ') + "\r\n";
            newMsg += strLine;
            strLine = sr.ReadLine();
        }
        sr.Close();
    }
    catch (IOException)
    {
        throw new Exception("Error occurred trying to pad the message to 1064 characters");
    }

    // Convert back to a stream and assign it to the Data property
    inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
    // Reset the position of the stream to zero
    inmsg.BodyPart.Data.Position = 0;
    return inmsg;
}
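One hedged aside on the component above: building newMsg with repeated string concatenation costs roughly O(n²) in the number of lines, which is fine for typical iDocs but can hurt on very large ones. If that ever matters, the same padding rule can be expressed with a StringBuilder. The snippet below is only an illustrative variation reusing the names from the component above, not part of the original component.

// Illustrative alternative to the concatenation loop above (same padding rule, same names assumed).
var sb = new StringBuilder();
using (var sr = new StreamReader(s))
{
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        // Pad every line back out to 1064 characters and restore the CRLF terminator.
        sb.Append(line.PadRight(1064, ' ')).Append("\r\n");
    }
}
inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(sb.ToString()));
inmsg.BodyPart.Data.Position = 0;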

    Read the article

  • Is RTD Stateless or Stateful?

    - by [email protected]
    Yes.   A stateless service is one where each request is an independent transaction that can be processed by any of the servers in a cluster.  A stateful service is one where state is kept in a server's memory from transaction to transaction, thus necessitating the proper routing of requests to the right server. The main advantage of stateless systems is simplicity of design. The main advantage of stateful systems is performance. I'm often asked whether RTD is a stateless or stateful service, so I wanted to clarify this issue in depth so that RTD's architecture will be properly understood. The short answer is: "RTD can be configured as a stateless or stateful service." The performance difference between stateless and stateful systems can be very significant, and while in a call center implementation it may be reasonable to use a pure stateless configuration, a web implementation that produces thousands of requests per second is practically impossible with a stateless configuration. RTD's performance is orders of magnitude better than most competing systems. RTD was architected from the ground up to achieve this performance. Features like automatic and dynamic compression of prediction models, automatic translation of metadata to machine code, lack of interpreted languages, and separation of model building from decisioning contribute to achieving this performance level. Because  of this focus on performance we decided to have RTD's default configuration work in a stateful manner. By being stateful RTD requests are typically handled in a few milliseconds when repeated requests come to the same session. Now, those readers that have participated in implementations of RTD know that RTD's architecture is also focused on reducing Total Cost of Ownership (TCO) with features like automatic model building, automatic time windows, automatic maintenance of database tables, automatic evaluation of data mining models, automatic management of models partitioned by channel, geography, etcetera, and hot swapping of configurations. How do you reconcile the need for a low TCO and the need for performance? How do you get the performance of a stateful system with the simplicity of a stateless system? The answer is that you make the system behave like a stateless system to the exterior, but you let it automatically take advantage of situations where being stateful is better. For example, one of the advantages of stateless systems is that you can route a message to any server in a cluster, without worrying about sending it to the same server that was handling the session in previous messages. With an RTD stateful configuration you can still route the message to any server in the cluster, so from the point of view of the configuration of other systems, it is the same as a stateless service. The difference though comes in performance, because if the message arrives to the right server, RTD can serve it without any external access to the session's state, thus tremendously reducing processing time. In typical implementations it is not rare to have high percentages of messages routed directly to the right server, while those that are not, are easily handled by forwarding the messages to the right server. This architecture usually provides the best of both worlds with performance and simplicity of configuration.   
Configuring RTD as a pure stateless service A pure stateless configuration requires session data to be persisted at the end of handling each and every message and reloading that data at the beginning of handling any new message. This is of course, the root of the inefficiency of these configurations. This is also the reason why many "stateless" implementations actually do keep state to take advantage of a request coming back to the same server. Nevertheless, if the implementation requires a pure stateless decision service, this is easy to configure in RTD. The way to do it is: Mark every Integration Point to Close the session at the end of processing the message In the Session entity persist the session data on closing the session In the session entity check if a persisted version exists and load it An excellent solution for persisting the session data is Oracle Coherence, which provides a high performance, distributed cache that minimizes the performance impact of persisting and reloading the session. Alternatively, the session can be persisted to a local database. An interesting feature of the RTD stateless configuration is that it can cope with serializing concurrent requests for the same session. For example, if a web page produces two requests to the decision service, these requests could come concurrently to the decision services and be handled by different servers. Most stateless implementation would have the two requests step onto each other when saving the state, or fail one of the messages. When properly configured, RTD will make one message wait for the other before processing.   A Word on Context Using the context of a customer interaction typically significantly increases lift. For example, offer success in a call center could double if the context of the call is taken into account. For this reason, it is important to utilize the contextual information in decision making. To make the contextual information available throughout a session it needs to be persisted. When there is a well defined owner for the information then there is no problem because in case of a session restart, the information can be easily retrieved. If there is no official owner of the information, then RTD can be configured to persist this information.   Once again, RTD provides flexibility to ensure high performance when it is adequate to allow for some loss of state in the rare cases of server failure. For example, in a heavy use web site that serves 1000 pages per second the navigation history may be stored in the in memory session. In such sites it is typical that there is no OLTP that stores all the navigation events, therefore if an RTD server were to fail, it would be possible for the navigation to that point to be lost (note that a new session would be immediately established in one of the other servers). In most cases the loss of this navigation information would be acceptable as it would happen rarely. If it is desired to save this information, RTD would persist it every time the visitor navigates to a new page. Note that this practice is preferred whether RTD is configured in a stateless or stateful manner.  
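For readers who want to see the shape of the pure stateless recipe above in code, here is a deliberately generic sketch. Every type and member name in it (ISessionStore, SessionData, HandleMessage) is an assumption invented for illustration; this is not Oracle RTD's actual API, and the store stands in for whatever you persist to, such as Coherence or a local database as mentioned above.

// Hypothetical types for illustration only; none of this is Oracle RTD's API.
public class Message { public string Payload = ""; }

public class SessionData
{
    public System.Collections.Generic.List<string> History = new System.Collections.Generic.List<string>();
    public void Apply(Message m) { History.Add(m.Payload); }   // stand-in for real decisioning work
}

public interface ISessionStore
{
    SessionData Load(string sessionKey);            // returns null when nothing was persisted yet
    void Save(string sessionKey, SessionData data);
}

public class StatelessSessionHandler
{
    private readonly ISessionStore store;           // e.g. backed by a distributed cache or a database

    public StatelessSessionHandler(ISessionStore store) { this.store = store; }

    public void HandleMessage(string sessionKey, Message message)
    {
        // "Check if a persisted version exists and load it" - otherwise start a fresh session.
        SessionData session = store.Load(sessionKey) ?? new SessionData();

        session.Apply(message);

        // "Persist the session data on closing the session" - done after every message,
        // which is what buys routing freedom and what costs performance versus stateful mode.
        store.Save(sessionKey, session);
    }
}

The point of the sketch is simply that the load and the save happen on every single message; that is the defining trade-off of the pure stateless configuration described above.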

    Read the article

  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the  latest addition to the MySQL Installer for Windows. MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to directly work with a MySQL database from within Microsoft Excel so you can easily do tasks such as: Importing MySQL Data into Excel Exporting Excel data directly into MySQL to a new or existing table Editing MySQL data directly within Excel MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL installer comes in 2 versions   Full (150 MB) which includes a complete set of MySQL products with their binaries included in the download Web (1.5 MB - a network install) which will just pull MySQL for Excel over the web and install it when run.   You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/. MySQL for Excel 1.1.3 introduces the following features:   Upon saving a Workbook containing Worksheets in Edit Mode, the user is asked if he wants to exit the Edit Mode on all Worksheets before their parent Workbook is saved so the Worksheets are saved unprotected, otherwise the Worksheets will remain protected and the users will be able to unprotect them later retrieving the passkeys from the application log after closing MySQL for Excel. Added background coloring to the column names header row of an Import Data operation to have the same look as the one in an Edit Data operation (i.e. gray-ish background). Connection passwords can be stored securely just like MySQL Workbench does and these secured passwords are shared with Workbench in the same way connections are. Changed the way the MySQL for Excel ribbon toggle button works, instead of just showing or hiding the add-in it actually opens and closes it. Added a connection test before any operation against the database (schema creation, data import, append, export or edition) so the operation dialog is not shown and a friendlier error message is shown.   Also this release contains the following bug fixes:   Added a check on every connection test for an expired password, if the password has been expired a dialog is now shown to the user to reset the password. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS Added code to escape text values to be imported to an Excel worksheet that start with an equals sign so Excel does not treat those values as formulas that will fail evaluation. This is an option turned on by default that can be turned off by users if they wish to import values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN Added code to properly check the reason for a failing connection, if it's a failing password the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown or the user cancels. If the failing connection is not related to a bad password an error message is shown to the users indicating the reason of the failure. 
Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE Added global options dialog that can be accessed from the Schema Selection and DB Object Selection panels where the timeouts for the connection to the DB Server and for the query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text since the maximum row size is 65,535 bytes and any autodetected column data type with a length greater than 4,000 should be set to Text actually for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL Removed code that was replacing all spaces typed by the user in an overriden data type for a new column in an Export Data operation, also improved the data type detection code to flag as invalid data types with parenthesis but without any text inside or where the contents inside the parenthesis are not valid for the specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES Added support for the year data type with a length of 2 or 4 and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH) Fixed code for Export Data operations where users overrode the data type for columns typing Text in the data type combobox, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE Changed the location of the registry where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER so the add-in is accessible by all users and not only to the user that installed it. For this to work with Excel 2007 a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER Added support for Excel 2013 Single Document Interface, now that Excel 2013 creates 1 window per workbook also the Excel Add-In maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND) Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench in the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had different number of columns, some columns with the same name but some of those lying on column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD Fixed some code for Edit Data operations that was escaping special characters twice (during edition in Excel and then upon sending the query to the MySQL server). 
MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL Upgraded MySQL Utility with latest version that encapsulates dialog base classes and introduces more classes to handle Workbench connections, and removed these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html You can find our team’s blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Unable to boot Ubuntu 13.10 (nVidia GTX 770m and Intel HD 4600)

    - by Raziel Gonzalez
    Ever since I bought this laptop I've been trying to install Ubuntu on it. It came with W8 preinstalled. Up to this point, I've been able to boot in UEFI mode with a black screen. I can tell it's trying to use the nVidia card (there's a led on the computer, depending on the color you can tell which GPU is using) and if I press crtl+alt+F1 I can go to console mode. Taking this advantage I tried to install bumblebee and after a successful install the led that indicates which GPU is being used change, indicating that it switched to the Intel HD 4600 graphics. After the installation I tried to initiate the graphic interface (startx) with no success. Xorg.0.log shows the error: [ 3706.779] X.Org X Server 1.14.3 Release Date: 2013-09-12 [ 3706.782] X Protocol Version 11, Revision 0 [ 3706.783] Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu [ 3706.783] Current Operating System: Linux ubuntu 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64 [ 3706.783] Kernel command line: BOOT_IMAGE=/casper/vmlinuz.efi file=/cdrom/preseed/ubuntu.seed boot=casper nomodeset -- [ 3706.785] Build Date: 15 October 2013 09:23:37AM [ 3706.786] xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 3706.786] Current version of pixman: 0.30.2 [ 3706.788] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 3706.788] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 3706.791] (==) Log file: "/var/log/Xorg.0.log", Time: Sat Nov 2 12:28:52 2013 [ 3706.792] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 3706.792] (==) No Layout section. Using the first Screen section. [ 3706.792] (==) No screen section available. Using defaults. [ 3706.792] (**) |-->Screen "Default Screen Section" (0) [ 3706.792] (**) | |-->Monitor "<default monitor>" [ 3706.792] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 3706.792] (==) Automatically adding devices [ 3706.792] (==) Automatically enabling devices [ 3706.792] (==) Automatically adding GPU devices [ 3706.792] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 3706.792] Entry deleted from font path. [ 3706.792] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 3706.792] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 3706.792] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 3706.792] (II) Loader magic: 0x7ff680918d20 [ 3706.792] (II) Module ABI versions: [ 3706.792] X.Org ANSI C Emulation: 0.4 [ 3706.792] X.Org Video Driver: 14.1 [ 3706.792] X.Org XInput driver : 19.1 [ 3706.792] X.Org Server Extension : 7.0 [ 3706.793] (--) PCI:*(0:0:2:0) 8086:0416:1462:10e8 rev 6, Mem @ 0xf7400000/4194304, 0xb0000000/268435456, I/O @ 0x0000f000/64 [ 3706.793] (II) Open ACPI successful (/var/run/acpid.socket) [ 3706.794] Initializing built-in extension Generic Event Extension [ 3706.795] Initializing built-in extension SHAPE [ 3706.796] Initializing built-in extension MIT-SHM [ 3706.797] Initializing built-in extension XInputExtension [ 3706.797] Initializing built-in extension XTEST [ 3706.798] Initializing built-in extension BIG-REQUESTS [ 3706.799] Initializing built-in extension SYNC [ 3706.799] Initializing built-in extension XKEYBOARD [ 3706.800] Initializing built-in extension XC-MISC [ 3706.801] Initializing built-in extension SECURITY [ 3706.802] Initializing built-in extension XINERAMA [ 3706.802] Initializing built-in extension XFIXES [ 3706.803] Initializing built-in extension RENDER [ 3706.804] Initializing built-in extension RANDR [ 3706.804] Initializing built-in extension COMPOSITE [ 3706.805] Initializing built-in extension DAMAGE [ 3706.806] Initializing built-in extension MIT-SCREEN-SAVER [ 3706.806] Initializing built-in extension DOUBLE-BUFFER [ 3706.807] Initializing built-in extension RECORD [ 3706.807] Initializing built-in extension DPMS [ 3706.808] Initializing built-in extension X-Resource [ 3706.809] Initializing built-in extension XVideo [ 3706.809] Initializing built-in extension XVideo-MotionCompensation [ 3706.810] Initializing built-in extension SELinux [ 3706.811] Initializing built-in extension XFree86-VidModeExtension [ 3706.811] Initializing built-in extension XFree86-DGA [ 3706.812] Initializing built-in extension XFree86-DRI [ 3706.812] Initializing built-in extension DRI2 [ 3706.812] (II) "glx" will be loaded by default. [ 3706.812] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 3706.812] (II) LoadModule: "dri2" [ 3706.812] (II) Module "dri2" already built-in [ 3706.812] (II) LoadModule: "glamoregl" [ 3706.813] (II) Loading /usr/lib/xorg/modules/libglamoregl.so [ 3706.813] (II) Module glamoregl: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.2.901, module version = 0.5.1 [ 3706.813] ABI class: X.Org ANSI C Emulation, version 0.4 [ 3706.813] (II) LoadModule: "glx" [ 3706.813] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 3706.813] (II) Module glx: vendor="X.Org Foundation" [ 3706.813] compiled for 1.14.3, module version = 1.0.0 [ 3706.813] ABI class: X.Org Server Extension, version 7.0 [ 3706.813] (==) AIGLX enabled [ 3706.814] Loading extension GLX [ 3706.814] (==) Matched intel as autoconfigured driver 0 [ 3706.814] (==) Matched vesa as autoconfigured driver 1 [ 3706.814] (==) Matched modesetting as autoconfigured driver 2 [ 3706.814] (==) Matched fbdev as autoconfigured driver 3 [ 3706.814] (==) Assigned the driver to the xf86ConfigLayout [ 3706.814] (II) LoadModule: "intel" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/intel_drv.so [ 3706.814] (II) Module intel: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.3, module version = 2.99.904 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "vesa" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 3706.814] (II) Module vesa: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 2.3.2 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "modesetting" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 3706.814] (II) Module modesetting: vendor="X.Org Foundation" [ 3706.814] compiled for 1.14.1, module version = 0.8.0 [ 3706.814] Module class: X.Org Video Driver [ 3706.814] ABI class: X.Org Video Driver, version 14.1 [ 3706.814] (II) LoadModule: "fbdev" [ 3706.814] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 3706.815] (II) Module fbdev: vendor="X.Org Foundation" [ 3706.815] compiled for 1.14.1, module version = 0.4.3 [ 3706.815] Module class: X.Org Video Driver [ 3706.815] ABI class: X.Org Video Driver, version 14.1 [ 3706.815] (II) intel: Driver for Intel(R) Integrated Graphics Chipsets: i810, i810-dc100, i810e, i815, i830M, 845G, 854, 852GM/855GM, 865G, 915G, E7221 (i915), 915GM, 945G, 945GM, 945GME, Pineview GM, Pineview G, 965G, G35, 965Q, 946GZ, 965GM, 965GME/GLE, G33, Q35, Q33, GM45, 4 Series, G45/G43, Q45/Q43, G41, B43, HD Graphics, HD Graphics 2000, HD Graphics 3000, HD Graphics 2500, HD Graphics 4000, HD Graphics P4000, HD Graphics 4600, HD Graphics 5000, HD Graphics P4600/P4700, Iris(TM) Graphics 5100, HD Graphics 4400, HD Graphics 4200, Iris(TM) Pro Graphics 5200 [ 3706.815] (II) VESA: driver for VESA chipsets: vesa [ 3706.815] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 3706.815] (II) FBDEV: driver for framebuffer: fbdev [ 3706.815] (--) using VT number 7 [ 3706.819] (WW) Falling back to old probe method for modesetting [ 3706.819] (EE) open /dev/dri/card0: No such file or directory [ 3706.819] (WW) Falling back to old probe method for fbdev [ 3706.819] (II) Loading sub module "fbdevhw" [ 3706.819] (II) LoadModule: "fbdevhw" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 3706.819] (II) Module fbdevhw: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 0.0.2 [ 3706.819] ABI 
class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "vbe" [ 3706.819] (II) LoadModule: "vbe" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libvbe.so [ 3706.819] (II) Module vbe: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.1.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) Loading sub module "int10" [ 3706.819] (II) LoadModule: "int10" [ 3706.819] (II) Loading /usr/lib/xorg/modules/libint10.so [ 3706.819] (II) Module int10: vendor="X.Org Foundation" [ 3706.819] compiled for 1.14.3, module version = 1.0.0 [ 3706.819] ABI class: X.Org Video Driver, version 14.1 [ 3706.819] (II) VESA(0): initializing int10 [ 3706.820] (EE) VESA(0): V_BIOS address 0x0 out of range [ 3706.820] (II) UnloadModule: "vesa" [ 3706.820] (II) UnloadSubModule: "int10" [ 3706.820] (II) Unloading int10 [ 3706.820] (II) UnloadSubModule: "vbe" [ 3706.820] (II) Unloading vbe [ 3706.820] (EE) Screen(s) found, but none have a usable configuration. [ 3706.820] (EE) Fatal server error: [ 3706.820] (EE) no screens found(EE) [ 3706.820] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 3706.820] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 3706.820] (EE) [ 3706.827] (EE) Server terminated with error (1). Closing log file. I also saved the dsmeg output to see if it can be of any help. In order to be able to get to this stage I had to boot with nomodeset option and removed quiet and splash. Anyone got this same error? Any guidance? I've tried other linux distros and so far the only one that is able to boot is Opensuse 12.3 without any issues (but only when I switch to legacy mode instead of UEFI).

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11, the versioning has been changed from MM/YY style to 11.1 highlighting that this is Solaris 11 Update 1.  Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous, I really can't include all.  I. New AI Automated Installer RBAC profiles have been introduced to enable delegation of installation tasks. II. The interactive installer now supports installing the OS to iSCSI targets. III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled but helps a lot in supportcases. For further information, see: http://oracle.com/goto/solarisautoreg IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (btw, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop )  V. pfedit: Ever wondered how to delegate editing permissions to certain files? It is well known "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root with simply starting a shell from that vi. Now, the new pfedit command provides a solution exactly to this challenge - an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples.   VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...) has been included in Solaris 11.1 as an alternative to syslog.  VII: Zones: Solaris Zones - as a major Solaris differentiator - got lots of love in terms of new features: ZOSS - Zones on Shared Storage: Placing your zones to shared storage (FC, iSCSI) has never been this easy - via zonecfg.  parallell updates - with S11's bootenvironments updating zones was no problem and meant no downtime anyway, but still, now you can update them parallelly, a way faster update action if you are running a large number of zones. This is like parallell patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.  per-zone fstype statistics: Running zones on a shared filesystems complicate the I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, over kstat you can find out which zone's I/O has an impact on the other ones, see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.  NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.  VIII: Security got a lot of attention too:  Automated security/audit reporting, with builtin reporting templates e.g. for PCI (payment card industry) audits.  PAM is now configureable on a per-user basis instead of system wide, allowing different authentication requirements for different users  SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. 
government security accredited fashion.  SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way - and on a T4 implemented in silicon! That is, goverment-approved cryptography at HW-speed.  Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.  IX. Networking, as one of the core strengths of Solaris 11, has been extended with:  Data Center Bridging (DCB) - not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB, see the documentation, and additional information on Wikipedia. DataLink MultiPathing, DLMP, enables link aggregation to span across multiple switches, even between those of different vendors. But there are essential differences to the good old bandwidth-aggregating LACP, see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc VNIC live migration is now supported from one physical NIC to another on-the-fly  X. Data management:  FedFS, (Federated FileSystem) is new, it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11, in Solaris 11.1 FedFS uses a LDAP - as the one global nameservice to bind them all.  The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm - thus improving iSCSI throughput while reducing CPU utilization on a T4 Storage locking improvements are now RAC aware, speeding up throughput with better locking-communication between nodes up to 20%!  XI: Kernel performance optimizations: The new Virtual Memory subsystem ("VM2") scales now to 100+ TB Memory ranges.  The memory predictor monitors large memory page usage, and adjust memory page sizes to applications' needs OSM, the Optimized Shared Memory allows Oracle DBs' SGA to be resized online XII: The Power Aware Dispatcher in now by default enabled, reducing power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in Solaris 11 OS will cooperate. XIII: x86 boot: upgrade to the (Grand Unified Bootloader) GRUB2. Because grub2 differs in the configuration syntactically from grub1, one shall not edit the new grub configuration (grub.cfg) but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2TB. XIV: Improved viewing of per-CPU statistics of mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing. XV: Support for Solaris Cluster 4.1: The What's New document doesn't actually mention this one, since OSC 4.1 has not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away. ...aand I seriously need to stop here. There's a lot I missed, Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB3.0, pulseaudio, trusted extensions updates, etc - but if I mention all those then I effectively copy the What's New document. Which I recommend reading now anyway, it is a great extract of the 300+ new projects and RFE-followups in S11.1. And this blogpost is a summary of that extract.  For closing words, allow me to come back to Request For Enhancements, RFEs. Any customer can request features. 
Open up a Support Request, explain that this is an RFE, describe the feature you/your company desires to have in S11 implemented. The more SRs are collected for an RFE, the more chance it's got to get implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input.Feel free to comment about this post too. Except that it's too long ;)  wbr,charlie

    Read the article

  • How can UnrealScript halt event handler execution after an arbitrary number of lines with no return or error?

    - by Dan Cowell
    I have created a class that extends TcpLink and is instantiated in a custom Kismet Sequence Action. It is being instantiated correctly and is making the GET HTTP request that I need it to (I have checked my access log in apache) and Apache is responding to the request with the appropriate content. The problem I have is that I'm using the event receive mode and it appears that somehow the handler for the Opened event is halted after a specific number of lines of code have executed. Here is my code for the Opened event: event Opened() { // A connection was established WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened"); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query"); //The HTTP GET request //char(13) and char(10) are carrage returns and new lines requesttext = "userId="$userId$"&apartmentId="$apartmentId; SendText("GET /"$path$"?"$requesttext$" HTTP/1.0"); SendText(chr(13)$chr(10)); SendText("Host: "$TargetHost); SendText(chr(13)$chr(10)); SendText("Connection: Close"); SendText(chr(13)$chr(10)$chr(13)$chr(10)); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query"); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError())); } As you can see, a number of the Broadcast calls have been commented out. Initially, only the lines up to the Broadcast containing "[DNomad_TcpLinkClient] Sent request: " were being executed and none of the Broadcasts were commented out. After commenting out that line, the next Broadcast was successful and so on and so forth. As a test, I commented out the very first Broadcast to see if the connection closing had any effect: // A connection was established //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened"); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query"); Upon doing that, an additional Broadcast at the end of the function executed. Thus the inference that there is an upper limit to the number of lines executed. Additionally, my ReceivedText handler is never called, despite Apache returning the correct HTTP 200 response with a body. My working hypothesis is that somehow after the Sequence Action finishes executing the garbage collector cleans up the TcpLinkClient instance. My biggest source of confusion with that is how on earth it does it during the execution of an event handler. Has anyone ever seen anything like this before? My full TcpLinkClient class is below: /* * TcpLinkClient based on an example usage of the TcpLink class by Michiel 'elmuerte' Hendriks for Epic Games, Inc. 
* */ class DNomad_TcpLinkClient extends TcpLink; var PlayerController PC; var string TargetHost; var int TargetPort; var string path; var string requesttext; var string userId; var string apartmentId; var string statusCode; var string responseData; event PostBeginPlay() { super.PostBeginPlay(); } function DoTcpLinkRequest(string uid, string id) //removes having to send a host { userId = uid; apartmentId = id; Resolve(targethost); } function string GetStatus() { return statusCode; } event Resolved( IpAddr Addr ) { // The hostname was resolved succefully WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] "$TargetHost$" resolved to "$ IpAddrToString(Addr)); // Make sure the correct remote port is set, resolving doesn't set // the port value of the IpAddr structure Addr.Port = TargetPort; //dont comment out this log because it rungs the function bindport WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Bound to port: "$ BindPort() ); if (!Open(Addr)) { WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Open failed"); } } event ResolveFailed() { WorldInfo.Game.Broadcast(self, "[TcpLinkClient] Unable to resolve "$TargetHost); // You could retry resolving here if you have an alternative // remote host. //send failed message to scaleform UI //JunHud(JunPlayerController(PC).myHUD).JunMovie.CallSetHTML("Failed"); } event Opened() { // A connection was established //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened"); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query"); //The HTTP GET request //char(13) and char(10) are carrage returns and new lines requesttext = "userId="$userId$"&apartmentId="$apartmentId; SendText("GET /"$path$"?"$requesttext$" HTTP/1.0"); SendText(chr(13)$chr(10)); SendText("Host: "$TargetHost); SendText(chr(13)$chr(10)); SendText("Connection: Close"); SendText(chr(13)$chr(10)$chr(13)$chr(10)); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query"); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState); //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode); WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError())); } event Closed() { // In this case the remote client should have automatically closed // the connection, because we requested it in the HTTP request. WorldInfo.Game.Broadcast(self, "Connection closed."); // After the connection was closed we could establish a new // connection using the same TcpLink instance. } event ReceivedText( string Text ) { WorldInfo.Game.Broadcast(self, "Received Text: "$Text); //we dont want the header info, so we split the string after two new lines Text = Split(Text, chr(13)$chr(10)$chr(13)$chr(10), true); WorldInfo.Game.Broadcast(self, "Split Text: "$Text); statusCode = Text; } event ReceivedLine( string Line ) { WorldInfo.Game.Broadcast(self, "Received Line: "$Line); } event ReceivedBinary( int Count, byte B[255] ) { WorldInfo.Game.Broadcast(self, "Received Binary of length: "$Count); } defaultproperties { TargetHost="127.0.0.1" TargetPort=80 //default for HTTP LinkMode=MODE_Text ReceiveMode=RMODE_Event path = "dnomad/datafeed.php" userId = "0"; apartmentId = "0"; statusCode = ""; send = false; }

    Read the article

  • Agile Testing Days 2012 – Day 3 – Agile or agile?

    - by Chris George
    Another early start for my last Lean Coffee of the conference, and again it was not wasted. We had some really interesting discussions around how to determine what test automation is useful, if agile is not faster, why do it? and a rather existential discussion on whether unicorns exist! First keynote of the day was entitled “Fast Feedback Teams” by Ola Ellnestam. Again this relates nicely to the releasing faster talk on day 2, and something that we are looking at and some teams are actively trying. Introducing the notion of feedback, Ola describes a game he wrote for his eldest child. It was a simple game where every time he clicked a button, it displayed “You’ve Won!”. He then changed it to be a Win-Lose-Win-Lose pattern and watched the feedback from his son who then twigged the pattern and got his younger brother to play, alternating turns… genius! (must do that with my children). The idea behind this was that you need that feedback loop to learn and progress. If you are not getting the feedback you need to close that loop. An interesting point Ola made was to solve problems BEFORE writing software. It may be that you don’t have to write anything at all, perhaps it’s a communication/training issue? Perhaps the problem can be solved another way. Writing software, although it’s the business we are in, is expensive, and this should be taken into account. He again mentions frequent releases, and how they should be made as soon as stuff is ready to be released, don’t leave stuff on the shelf cause it’s not earning you anything, money or data. I totally agree with this and it’s something that we will be aiming for moving forwards. “Exceptions, Assumptions and Ambiguity: Finding the truth behind the story” by David Evans started off very promising by making references to ‘Grim up North’ referring to the north of England. Not sure it was appreciated by most of the audience, but it made me laugh! David explained how there are always risks associated with exceptions, giving the example of a one-way road near where he lives, with an exception sign giving rights to coaches to go the wrong way. Therefore you could merrily swing around the corner of the one way road straight into a coach! David showed the danger in making assumptions with lyrical quotes from Lola by The Kinks “I’m glad I’m a man, and so is Lola” and with a picture of a toilet flush that needed instructions to operate the full and half flush. With this particular flush, you pulled the handle all the way down to half flush, and half way down to full flush! hmmm, a bit of a crappy user experience methinks! Then through a clever use of a passage from the Jabberwocky, David then went onto show how mis-translation/ambiguity is the can completely distort the original meaning of something, and this is a real enemy of software development. This was all helping to demonstrate that the term Story is often heavily overloaded in the Agile world, and should really be stripped back to what it is really for, stating a business problem, and offering a technical solution. Therefore a story could be worded as “In order to {make some improvement}, we will { do something}”. The first ‘in order to’ statement is stakeholder neutral, and states the problem through requesting an improvement to the software/process etc. The second part of the story is the verb, the doing bit. So to achieve the ‘improvement’ which is not currently true, we will do something to make this true in the future. 
My PM is very interested in this, and he’s observed some of the problems of overloading stories, so I’m hoping between us we can use some of David’s suggestions to help clarify our stories better. The second keynote of the day (and our last) proved to be the most entertaining and exhausting of the conference for me: “The ongoing evolution of testing in agile development” by Scott Barber. I’ve never had the pleasure of seeing Scott before… OMG I would love to have even half of the energy he has! What struck me during this presentation was Scott’s explanation of how testing has become the role/job that it is (largely) today, and how this has led to the need for ‘methodologies’ to make dev and test work! The argument that we should be trying to converge the roles again is a very valid one, and one that a couple of the teams at work are actively doing with great results. Making developers as responsible for quality as testers is something that has been lost over the years, but something that we are now striving to achieve. The idea that we (testers) should be testing experts/specialists, not testing ‘union members’, supports this idea, so the entire team works on all aspects of a feature/product, with the ‘specialists’ taking the lead and advising/coaching the others. This leads to better propagation of information around the team, a greater holistic understanding of the project, and it allows the team to continue functioning if some of its members are off sick, for example. Feeling somewhat drained from Scott’s keynote (but at the same time excited that a lot of the points he raised supported actions we are taking at work), I headed into my last presentation for Agile Testing Days 2012 before having to make my way to Tegel to catch the flight home. “Thinking and working agile in an unbending world” with Pete Walen was a talk I was not going to miss! Having spoken to Pete several times during the past few days, I was looking forward to hearing what he was going to say, and I was not disappointed. Pete started off by trying to separate the definitions of ‘Agile’ as in the methodology and ‘agile’ as in the adjective by pronouncing them the ‘English’ and ‘American’ ways. So Agile pronounced (Ajyle) and agile pronounced (ajul). There was much confusion around what the hell he was talking about, although I thought it was quite clear. Agile – software development methodology; agile – marked by ready ability to move with quick easy grace, having a quick resourceful and adaptable character. Anyway, that aside (although it provided a few laughs during the presentation), the point was that many teams claim to be ‘Agile’ but are not, in fact, ‘agile’ by nature. Implementing ‘Agile’ methodologies that are so prescriptive actually goes against the very nature of Agile development, where a team should anticipate, adapt and explore. Pete made a valid point that very few companies intentionally put up roadblocks to impede work, so if work is being blocked/delayed, why? This is where being agile as a team pays off, because the team can inspect what’s going on, explore options and adapt their processes. It is through experimentation (and that means trying and failing as well as trying and succeeding) that a team will improve and grow, leading to focussing on what really needs to be done to achieve X. So, that was it, the last talk of our conference.
I was gutted that we had to miss the closing keynote from Matt Heusser, as Matt was another person I had spoken to a few times during the conference, but the flight would not wait, and just as well we left when we did because the traffic was a nightmare! My Takeaway Triple from Day 3: Release often and release small – don’t leave stuff on the shelf. Keep the meaning of the word ‘agile’ in mind when working in ‘Agile’. Look at testing as more of a skill than a role.

    Read the article

  • Expectations + Rewards = Innovation

    - by D'Arcy Lussier
    “Innovation” is a heavy word. We regard those that embrace it as “Innovators”. We describe organizations as being “Innovative”. We hold those associated with the word in high regard, even though its dictionary definition is very simple: Introducing something new. What our culture has done is wrapped Innovation in white robes and a gold crown. Innovation is rarely just introducing something new. Innovations and innovators are typically associated with other terms: groundbreaking, genius, industry-changing, creative, leading. Being a true innovator and creating innovations are a big deal, and something companies try to strive for…or at least say they strive for. There’s huge value in being recognized as an innovator in an industry, since the idea is that innovation equates to increased profitability. IBM ran an ad a few years back that showed what their view of innovation is: “The point of innovation is to make actual money.” If the money aspect makes you feel uneasy, consider it another way: the point of innovation is to <insert payoff here>. Companies that innovate will be more successful. Non-profits that innovate can better serve their target clients. Governments that innovate can better provide services to their citizens. True innovation is not easy to come by though. As with anything in business, how well an organization will innovate is reliant on the employees it retains, the expectations placed on those employees, and the rewards available to them. In a previous blog post I talked about one formula: Right Employees + Happy Employees = Productive Employees I want to introduce a new one, that builds upon the previous one: Expectations + Rewards = Innovation  The level of innovation your organization will realize is directly associated with the expectations you place on your staff and the rewards you make available to them. Expectations We may feel uncomfortable with the idea of placing expectations on our staff, mainly because expectation has somewhat of a negative or cold connotation to it: “I expect you to act this way or else!” The problem is in the or-else part…we focus on the negative aspects of failing to meet expectations instead of looking at the positive side. “I expect you to act this way because it will produce <insert benefit here>”. Expectations should not be set to punish but instead be set to ensure quality. At a recent conference I spoke with some Microsoft employees who told me that you have five years from starting with the company to reach a “Senior” level. If you don’t, then you’re let go. The expectation Microsoft placed on their staff is that they should be working towards improving themselves, taking more responsibility, and thus ensure that there is a constant level of quality in the workforce. Rewards Let me be clear: a paycheck is not a reward. A paycheck is simply the employer’s responsibility in the employee/employer relationship. A paycheck will never be the key motivator to drive innovation. Offering employees something over and above their required compensation can spur them to greater performance and achievement. Working in the food service industry, this tactic was used again and again: whoever has the highest sales over lunch will receive a free lunch/gift certificate/entry into a draw/etc. There was something to strive for, to try beyond the baseline of what our serving jobs were. It was through this that innovative sales techniques would be tried and honed, with key servers being top sellers time and time again. 
At a code camp I spoke at, I was amazed to see that all the employees from one company received $100 Visa gift cards as a thank you for taking time to speak. Again, offering something over and above that can give that extra push for employees. Rewards work. But what about the fairness angle? In the restaurant example I gave, there were servers that would never win the competition. They just weren’t good enough at selling and never seemed to get better. So should those that did work at performing better and produce more sales for the restaurant not get rewarded because those who weren’t working at performing better might get upset? Of course not! Organizations succeed because of their top performers and those that strive to join their ranks. The Expectation/Reward Graph While the Expectations + Rewards = Innovation formula may seem like a simple mathematics formula, there’s much more going on under the hood. In fact there are three different outcomes that could occur based on what you put in as values for Expectations and Rewards. Consider the graph below and the descriptions that follow: Disgruntled – High Expectation, Low Reward I worked at a company where the mantra was “Company First, Because We Pay You”. Even today I still hear stories of how this sentiment continues to be perpetuated: they provide you a paycheck and a means to live, therefore you should always put them as your top priority. Of course, this is a huge imbalance in the expectation/reward equation. Why would anyone willingly meet high expectations of availability, workload, deadlines, etc. when there is no reward other than a paycheck to show for it? Remember: paychecks are not rewards! Instead, you see employees become disgruntled, which not only affects the level of production but also the level of quality within an organization. It also means that you see higher turnover. Complacent – Low Expectation, Low Reward Complacency is a systemic problem that typically exists throughout all levels of an organization. With no real expectations or rewards, nobody needs to excel. In fact, those that do try to innovate, improve, or introduce new things into the organization might be shunned or pushed out by the rest of the staff who are just doing things the same way they’ve always done it. The bigger issue for the organization with low/low values is that at best they’ll never grow beyond their current size (and may actually shrink), and at worst will cease to exist. Entitled – Low Expectation, High Reward It’s one thing to say you have the best people and reward them as such, but it’s another thing to actually have the best people and reward them as such. Organizations with Entitled employees are the former: their organization provides them with all types of comforts, benefits, and perks. But there’s no requirement before the rewards are doled out, and there’s no short-list of who receives the rewards. Everyone in the company is treated the same and is given an equal share of the spoils. Entitlement is actually almost identical to Complacency, with one notable difference: just try to introduce higher expectations into an entitled organization! Entitled employees have been spoiled for so long that they can’t fathom having rewards taken from them, or having to achieve specific levels of performance before attaining them. Those running the organization also buy into the Entitled sentiment, feeling that they must maintain the same level of comforts to appease their staff…even though the quality of the employee pool may be suspect.
Innovative – High Expectation, High Reward Finally we have the Innovative organization, which places high expectations on its people but also provides high rewards. This organization gets it: if you truly want the best employees you need to apply equal doses of pressure and praise. Realize that I’m not suggesting crazy overtime or unrealistic working conditions. I do not agree with the “Glengarry Glen Ross” method of encouragement. But as anyone who follows sports can tell you, the teams that win are the ones where the coaches push their players to be their best; to achieve new levels of performance that they didn’t know they could reach. And the result for the players is more money, fame, and opportunity. It’s in this environment that organizations can focus on innovation – true innovation that builds the business and allows everyone involved to truly benefit. In Closing Organizations love to use the word “Innovation” and its derivatives, but very few actually do innovate. For many, the term has just become another marketing buzzword to lump in with all the other business terms that get overused. But for those organizations that truly get the value of innovation, they will be the ones surging forward while other companies simply fade into the background. And they will be the organizations that expect more from their employees, and give them their just rewards.

    Read the article

  • Scheduling thread tiles with C++ AMP

    - by Daniel Moth
    This post assumes you are totally comfortable with, what some of us call, the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one. Tiling the extent We know that when we pass a grid (which is just an extent under the covers) to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three- dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups). So here is a single-dimensional example: extent<1> e(20); // 20 units in a single dimension with indices from 0-19 grid<1> g(e);      // same as extent tiled_grid<4> tg = g.tile<4>(); …on the 3rd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_grid (a new class in amp.h). Let's move on swiftly to another example, in pictures, this time 2-dimensional: So we start on the left with a grid of a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48), remains the same, and every tile must have the same size. Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_grid, since the parallel_for_each function that we know expects a grid. Tiled parallel_for_each and tiled_index It turns out that we have additional overloads of parallel_for_each that accept a tiled_grid instead of a grid. However, those overloads, also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different to an index? A tiled_index object, can have only 1 or 2 or 3 dimensions (matching exactly the tiled_grid), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things: array_view<int, 2> data(8, 6, p_my_data); parallel_for_each(data.grid.tile<2,2>(), [=] (tiled_index<2,2> t_idx) restrict(direct3d) { /* todo */ }); Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variables exposes, when the lambda is executed by T (highlighted in the picture on the right)? If you can't work it out yourselves, the solution follows: t_idx.global       = index<2> (6,3) t_idx.local          = index<2> (0,1) t_idx.tile_origin = index<2> (6,2) t_idx.tile             = index<2> (3,1) Don't move on until you are comfortable with this… the picture really helps, so use it. 
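As a quick way to make the four index properties concrete, a short sketch using only the preview constructs shown above might look like the following (the two pre-sized std::vector<int> containers v_tile and v_local of 48 elements are assumptions for illustration; the 8*6 extent and 2*2 tiling match the picture):

    #include <amp.h>
    #include <vector>
    using namespace concurrency;

    void ShowTileCoordinates(std::vector<int>& v_tile, std::vector<int>& v_local)
    {
        array_view<int, 2> tiles(8, 6, v_tile);    // records which tile each thread belongs to
        array_view<int, 2> locals(8, 6, v_local);  // records each thread's position within its tile
        parallel_for_each(tiles.grid.tile<2, 2>(), [=](tiled_index<2, 2> t_idx) restrict(direct3d)
        {
            // encode the 2D coordinates as row*10+column so the results are easy to eyeball
            tiles[t_idx.global]  = t_idx.tile[0]  * 10 + t_idx.tile[1];
            locals[t_idx.global] = t_idx.local[0] * 10 + t_idx.local[1];
        });
    }

Running this and dumping the two vectors back out should reproduce exactly the values worked through for thread T above.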
Tiled Matrix Multiplication Example – part 1 Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?) 01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W) 02: { 03: 04: array_view<const float,2> a(M, W, vA); 05: array_view<const float,2> b(W, N, vB); 06: array_view<writeonly<float>,2> c(M, N, vC); 07: parallel_for_each(c.grid, 08: [=](index<2> idx) restrict(direct3d) { 09: 10: int row = idx[0]; int col = idx[1]; 11: float sum = 0.0f; 12: for(int i = 0; i < W; i++) 13: sum += a(row, i) * b(i, col); 14: c[idx] = sum; 15: }); 16: } To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, and that c.grid.extent.size() is divisible by 256, and moreover that c.grid.extent[0] and c.grid.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile time constant). 03: static const int TS = 16; ...then we need to tile the grid to have tiles where each one has 16*16 threads, so we change line 07 to be as follows 07: parallel_for_each(c.grid.tile<TS,TS>(), ...that means that our index now has to be a tiled_index with the same characteristics as the tiled_grid, so we change line 08 08: [=](tiled_index<TS, TS> t_idx) restrict(direct3d) { ...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows 09: index<2> idx = t_idx.global; ...and now this code just works and it is tiled! Closing thoughts on part 1 The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled cannot be divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-tiled model. But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance. In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits… Comments about this post by Daniel Moth welcome at the original blog.
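To save flipping back and forth, here is a sketch of the assembled result of the four edits just described – identical to the original listing except for lines 03, 07, 08 and 09, and assuming the same includes and using directives as that listing:

    01: void MatrixMultiplyTiled_Part1(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
    02: {
    03:   static const int TS = 16;                          // tile size, a compile time constant
    04:   array_view<const float,2> a(M, W, vA);
    05:   array_view<const float,2> b(W, N, vB);
    06:   array_view<writeonly<float>,2> c(M, N, vC);
    07:   parallel_for_each(c.grid.tile<TS,TS>(),            // schedule tiles of 16*16 threads
    08:     [=](tiled_index<TS, TS> t_idx) restrict(direct3d) {
    09:       index<2> idx = t_idx.global;                   // recover the global thread ID
    10:       int row = idx[0]; int col = idx[1];
    11:       float sum = 0.0f;
    12:       for(int i = 0; i < W; i++)
    13:         sum += a(row, i) * b(i, col);
    14:       c[idx] = sum;
    15:   });
    16: }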

    Read the article

  • Using IIS Logs for Performance Testing with Visual Studio

    - by Tarun Arora
    In this blog post I’ll show you how you can play back IIS logs in Visual Studio to automatically generate web performance tests. You can also download the sample solution I am demo-ing in the blog post. Introduction Performance testing is as important for new websites as it is for evolving websites. If you already have your website running in production, you could mine the information available in the IIS logs to analyse the dense zones (most used pages) and performance test those pages rather than wasting time testing & tuning the least used pages in your application. What are IIS Logs To help with server use and analysis, IIS is integrated with several types of log files. These log file formats provide information on a range of websites and specific statistics, including Internet Protocol (IP) addresses, user information and site visits as well as dates, times and queries. If you are using IIS 7 and above you will find the log files in the following directory: C:\inetpub\logs\LogFiles\ Walkthrough 1. Download and install Log Parser from the Microsoft Download Centre. You should see LogParser.dll in the install folder; the default install location is C:\Program Files (x86)\Log Parser 2.2. LogParser.dll gives us a library to query the IIS log files programmatically. By the way, if you haven’t used Log Parser in the past, it is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows operating system such as the Event Log, the Registry, the file system, and Active Directory. More details… 2. Create a new test project in Visual Studio. Let’s call it IISLogsToWebPerfTestDemo.   3.  Delete the UnitTest1.cs class that gets created by default. Right click the solution and add a project of type class library; name it IISLogsToWebPerfTestEngine. Delete the default class Program.cs that gets created with the project. 4. Under the IISLogsToWebPerfTestEngine project add a reference to Microsoft.VisualStudio.QualityTools.WebTestFramework – c:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.WebTestFramework.dll and to LogParser, also called MSUtil - c:\users\tarora\documents\visual studio 2010\Projects\IisLogsToWebPerfTest\IisLogsToWebPerfTestEngine\obj\Debug\Interop.MSUtil.dll 5. Right click the IISLogsToWebPerfTestEngine project and add a new class – IISLogReader.cs. The IISLogReader class queries the IIS logs using the log parser.
using System; using System.Collections.Generic; using System.Text; using MSUtil; using LogQuery = MSUtil.LogQueryClassClass; using IISLogInputFormat = MSUtil.COMIISW3CInputContextClassClass; using LogRecordSet = MSUtil.ILogRecordset; using Microsoft.VisualStudio.TestTools.WebTesting; using System.Diagnostics; namespace IisLogsToWebPerfTestEngine { // By making use of log parser it is possible to query the iis log using select queries public class IISLogReader { private string _iisLogPath; public IISLogReader(string iisLogPath) { _iisLogPath = iisLogPath; } public IEnumerable<WebTestRequest> GetRequests() { LogQuery logQuery = new LogQuery(); IISLogInputFormat iisInputFormat = new IISLogInputFormat(); // currently these columns give us suffient information to construct the web test requests string query = @"SELECT s-ip, s-port, cs-method, cs-uri-stem, cs-uri-query FROM " + _iisLogPath; LogRecordSet recordSet = logQuery.Execute(query, iisInputFormat); // Apply a bit of transformation while (!recordSet.atEnd()) { ILogRecord record = recordSet.getRecord(); if (record.getValueEx("cs-method").ToString() == "GET") { string server = record.getValueEx("s-ip").ToString(); string path = record.getValueEx("cs-uri-stem").ToString(); string querystring = record.getValueEx("cs-uri-query").ToString(); StringBuilder urlBuilder = new StringBuilder(); urlBuilder.Append("http://"); urlBuilder.Append(server); urlBuilder.Append(path); if (!String.IsNullOrEmpty(querystring)) { urlBuilder.Append("?"); urlBuilder.Append(querystring); } // You could make substitutions by introducing parameterized web tests. WebTestRequest request = new WebTestRequest(urlBuilder.ToString()); Debug.WriteLine(request.UrlWithQueryString); yield return request; } recordSet.moveNext(); } Console.WriteLine(" That's it! Closing the reader"); recordSet.close(); } } }   6. Connect the dots by adding the project reference ‘IisLogsToWebPerfTestEngine’ to ‘IisLogsToWebPerfTest’. Right click the ‘IisLogsToWebPerfTest’ project and add a new class ‘WebTest1Coded.cs’ The WebTest1Coded.cs inherits from the WebTest class. By overriding the GetRequestMethod we can inject the log files to the IISLogReader class which uses Log parser to query the log file and extract the web requests to generate the web test request which is yielded back for play back when the test is run. namespace IisLogsToWebPerfTest { using System; using System.Collections.Generic; using System.Text; using Microsoft.VisualStudio.TestTools.WebTesting; using Microsoft.VisualStudio.TestTools.WebTesting.Rules; using IisLogsToWebPerfTestEngine; // This class is a coded web performance test implementation, that simply passes // the path of the iis logs to the IisLogReader class which does the heavy // lifting of reading the contents of the log file and converting them to tests. // You could have multiple such classes that inherit from WebTest and implement // GetRequestEnumerator Method and pass differnt log files for different tests. public class WebTest1Coded : WebTest { public WebTest1Coded() { this.PreAuthenticate = true; } public override IEnumerator<WebTestRequest> GetRequestEnumerator() { // substitute the highlighted path with the path of the iis log file IISLogReader reader = new IISLogReader(@"C:\Demo\iisLog1.log"); foreach (WebTestRequest request in reader.GetRequests()) { yield return request; } } } }   7. Its time to fire the test off and see the iis log playback as a web performance test. 
From the Test menu choose Test View; in the Test View window you should be able to see the WebTest1Coded test show up. Highlight the test and press Run Selection (you can also debug the test in case you face any failures during test execution). 8. Optionally you can create a Load Test by keeping ‘WebTest1Coded’ as the base test. Conclusion You have just helped your testing team, and you have now become the coolest developer in your organization! Jokes apart, Log Parser and web performance tests together allow you to save a lot of time by not having to worry about what to test or even about how to record the test. If you haven’t already, download the solution from here. You can take this to the next level by using Log Parser to extract the log files to a database as part of an end-of-day batch, use this solution over a longer term to see usage trends by user, and have your tests consume the web requests now stored in the database to generate the web performance tests. If you like the post, don’t forget to share … Keep RocKiNg!

    Read the article

  • OpenGL loading functions error [on hold]

    - by Ghilliedrone
    I'm new to OpenGL, and I bought a book on it for beginners. I finished writing the sample code for making a context/window. I get an error on this line at the part PFNWGLCREATECONTEXTATTRIBSARBPROC, saying "Error: expected a ')'": typedef HGLRC(APIENTRYP PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*); Replacing it or adding a ")" makes it error, but the error disappears when I use the OpenGL headers included in the books CD, which are OpenGL 3.0. I would like a way to make this work with the newest gl.h/wglext.h and without libraries. Here's the rest of the class if it's needed: #include <ctime> #include <windows.h> #include <iostream> #include <gl\GL.h> #include <gl\wglext.h> #include "Example.h" #include "GLWindow.h" typedef HGLRC(APIENTRYP PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*); PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL; bool GLWindow::create(int width, int height, int bpp, bool fullscreen) { DWORD dwExStyle; //Window Extended Style DWORD dwStyle; //Window Style m_isFullscreen = fullscreen;//Store the fullscreen flag m_windowRect.left = 0L; m_windowRect.right = (long)width; m_windowRect.top = 0L; m_windowRect.bottom = (long)height;//Set bottom to height // fill out the window class structure m_windowClass.cbSize = sizeof(WNDCLASSEX); m_windowClass.style = CS_HREDRAW | CS_VREDRAW; m_windowClass.lpfnWndProc = GLWindow::StaticWndProc; //We set our static method as the event handler m_windowClass.cbClsExtra = 0; m_windowClass.cbWndExtra = 0; m_windowClass.hInstance = m_hinstance; m_windowClass.hIcon = LoadIcon(NULL, IDI_APPLICATION); // default icon m_windowClass.hCursor = LoadCursor(NULL, IDC_ARROW); // default arrow m_windowClass.hbrBackground = NULL; // don't need background m_windowClass.lpszMenuName = NULL; // no menu m_windowClass.lpszClassName = (LPCWSTR)"GLClass"; m_windowClass.hIconSm = LoadIcon(NULL, IDI_WINLOGO); // windows logo small icon if (!RegisterClassEx(&m_windowClass)) { MessageBox(NULL, (LPCWSTR)"Failed to register window class", NULL, MB_OK); return false; } if (m_isFullscreen)//If we are fullscreen, we need to change the display { DEVMODE dmScreenSettings; //Device mode memset(&dmScreenSettings, 0, sizeof(dmScreenSettings)); dmScreenSettings.dmSize = sizeof(dmScreenSettings); dmScreenSettings.dmPelsWidth = width; //Screen width dmScreenSettings.dmPelsHeight = height; //Screen height dmScreenSettings.dmBitsPerPel = bpp; //Bits per pixel dmScreenSettings.dmFields = DM_BITSPERPEL | DM_PELSWIDTH | DM_PELSHEIGHT; if (ChangeDisplaySettings(&dmScreenSettings, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL) { MessageBox(NULL, (LPCWSTR)"Display mode failed", NULL, MB_OK); m_isFullscreen = false; } } if (m_isFullscreen) //Is it fullscreen? 
{ dwExStyle = WS_EX_APPWINDOW; //Window Extended Style dwStyle = WS_POPUP; //Windows Style ShowCursor(false); //Hide mouse pointer } else { dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE; //Window Exteneded Style dwStyle = WS_OVERLAPPEDWINDOW; //Windows Style } AdjustWindowRectEx(&m_windowRect, dwStyle, false, dwExStyle); //Adjust window to true requested size //Class registered, so now create window m_hwnd = CreateWindowEx(NULL, //Extended Style (LPCWSTR)"GLClass", //Class name (LPCWSTR)"Chapter 2", //App name dwStyle | WS_CLIPCHILDREN | WS_CLIPSIBLINGS, 0, 0, //x, y coordinates m_windowRect.right - m_windowRect.left, m_windowRect.bottom - m_windowRect.top, //Width and height NULL, //Handle to parent NULL, //Handle to menu m_hinstance, //Application instance this); //Pass a pointer to the GLWindow here //Check if window creation failed, hwnd would equal NULL if (!m_hwnd) { return 0; } m_hdc = GetDC(m_hwnd); ShowWindow(m_hwnd, SW_SHOW); UpdateWindow(m_hwnd); m_lastTime = GetTickCount() / 1000.0f; return true; } LRESULT CALLBACK GLWindow::StaticWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { GLWindow* window = nullptr; //If this is the create message if (uMsg == WM_CREATE) { //Get the pointer we stored during create window = (GLWindow*)((LPCREATESTRUCT)lParam)->lpCreateParams; //Associate the window pointer with the hwnd for the other events to access SetWindowLongPtr(hWnd, GWL_USERDATA, (LONG_PTR)window); } else { //If this is not a creation event, then we should have stored a pointer to the window window = (GLWindow*)GetWindowLongPtr(hWnd, GWL_USERDATA); if (!window) { //Do the default event handling return DefWindowProc(hWnd, uMsg, wParam, lParam); } } //Call our window's member WndProc(allows us to access member variables) return window->WndProc(hWnd, uMsg, wParam, lParam); } LRESULT GLWindow::WndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam) { switch (uMsg) { case WM_CREATE: { m_hdc = GetDC(hWnd); setupPixelFormat(); //Set the version that we want, in this case 3.0 int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 3, WGL_CONTEXT_MINOR_VERSION_ARB, 0, 0}; //Create temporary context so we can get a pointer to the function HGLRC tmpContext = wglCreateContext(m_hdc); //Make the context current wglMakeCurrent(m_hdc, tmpContext); //Get the function pointer wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB"); //If this is NULL then OpenGl 3.0 is not supported if (!wglCreateContextAttribsARB) { MessageBox(NULL, (LPCWSTR)"OpenGL 3.0 is not supported", (LPCWSTR)"An error occured", MB_ICONERROR | MB_OK); DestroyWindow(hWnd); return 0; } //Create an OpenGL 3.0 context using the new function m_hglrc = wglCreateContextAttribsARB(m_hdc, 0, attribs); //Delete the temporary context wglDeleteContext(tmpContext); //Make the GL3 context current wglMakeCurrent(m_hdc, m_hglrc); m_isRunning = true; } break; case WM_DESTROY: //Window destroy case WM_CLOSE: //Windows is closing wglMakeCurrent(m_hdc, NULL); wglDeleteContext(m_hglrc); m_isRunning = false; //Stop the main loop PostQuitMessage(0); break; case WM_SIZE: { int height = HIWORD(lParam); //Get height and width int width = LOWORD(lParam); getAttachedExample()->onResize(width, height); //Call the example's resize method } break; case WM_KEYDOWN: if (wParam == VK_ESCAPE) //If the escape key was pressed { DestroyWindow(m_hwnd); } break; default: break; } return DefWindowProc(hWnd, uMsg, wParam, lParam); } void GLWindow::processEvents() { MSG msg; //While there are messages in 
the queue, store them in msg while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { //Process the messages TranslateMessage(&msg); DispatchMessage(&msg); } } Here is the header: #pragma once #include <ctime> #include <windows.h> class Example;//Declare our example class class GLWindow { public: GLWindow(HINSTANCE hInstance); //default constructor bool create(int width, int height, int bpp, bool fullscreen); void destroy(); void processEvents(); void attachExample(Example* example); bool isRunning(); //Is the window running? void swapBuffers() { SwapBuffers(m_hdc); } static LRESULT CALLBACK StaticWndProc(HWND wnd, UINT msg, WPARAM wParam, LPARAM lParam); LRESULT CALLBACK WndProc(HWND wnd, UINT msg, WPARAM wParam, LPARAM lParam); float getElapsedSeconds(); private: Example* m_example; //A link to the example program bool m_isRunning; //Is the window still running? bool m_isFullscreen; HWND m_hwnd; //Window handle HGLRC m_hglrc; //Rendering context HDC m_hdc; //Device context RECT m_windowRect; //Window bounds HINSTANCE m_hinstance; //Application instance WNDCLASSEX m_windowClass; void setupPixelFormat(void); Example* getAttachedExample() { return m_example; } float m_lastTime; };
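For what it's worth, here is a minimal sketch of one possible fix, on the assumption that the "expected a ')'" error comes from APIENTRYP not being defined by the headers in use. The current Khronos wglext.h already declares PFNWGLCREATECONTEXTATTRIBSARBPROC, so the hand-written typedef can simply be dropped, and a guarded fallback for APIENTRYP covers headers that predate the macro (windows.h must still come before gl.h/wglext.h so that APIENTRY and WINGDIAPI exist):

    #include <windows.h>           // must come first: provides APIENTRY/WINGDIAPI for the GL headers
    #include <gl\GL.h>
    #ifndef APIENTRYP
    #define APIENTRYP APIENTRY *   // fallback for headers that do not define this macro
    #endif
    #include <gl\wglext.h>         // already declares PFNWGLCREATECONTEXTATTRIBSARBPROC

    // no local typedef needed any more; just keep the function pointer itself
    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = NULL;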

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
You should then multiply the additional work for the typical enterprise project to try to understand what the productivity trade-offs are. This is reason alone for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF. Thanks to JSF's focus on components from the ground-up JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEFaces and of course ADF Faces/Mobile. These component libraries taken together constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and think about the fact that you can readily use the widgets on that showcase by just using some simple markup and knowing near to nothing about AJAX, JavaScript or CSS. JSF has the fair and legitimate advantage of being an open vendor neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness is virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in or even create your own compatible implementation! As you might gather from the quote at the top of the post, I am not a fan of crystal ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to just JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC... Please note that any views expressed here are my own only and certainly does not reflect the position of Oracle as a company.

    Read the article

  • Oracle EBS?????(Order->AR)

    - by Pan.Tian
    ???? ??:Order Management > Orders,Returns > Sales Orders ???????,??,????,???? ???????,????,??... ??Book Order,??Book??,????????Status??????“Booked”,???????"Awaiting Shipping",?????????,??????????????? ??:??Book??,????????????,????Shipping Transactions Form,????,?????????Line Status?Ready to Release,Next Step?Pick Release Pick Release ??:Order Management > Shipping > Release Sales Orders > Release Sales Orders Pick Release????(?????????).?Order  Number?????????? Auto Pick Confirm???No Auto Allocate???N Auto Allocate?Auto Pick Confirm??????Yes,???????????,??????No,???Yes??,?????Allocate?Pick Confirm??,??????????? ??????????Pick  Release,”Concurrent“??Pick Release?????Concurrent Request???,"Execute Now"????????Pick Release,??????????????User,??????Concurrent??? Pick Release?????????Pick Release?????Pick Wave??Move Order,??Move Order????????????????????(Staging),????INV??????????? INV_MOVE_ORDER_PUB.CREATE_MOVE_ORDER_HEADER???Move Order??(??Pick Release?????????????:Pick Release Process) ????????,?Pick Release??,?????????????Reservation(??),?????????Soft Reservations,?????????????,????Org?????????? ??:????,Shipping Transaction?Line Status?"Released to Warehouse",Next Step?"Transact Move Order";????????Booked,?????”Awaiting Shipping“? Pick Confirm Pick Confirm(????)????????Transact Move Order????,?Allocate????,?Transact Move Order. ??:Inventory > Move Orders > Transact Move Orders ????,Pick Wave??Tab,????? ??TMO????,??Allocate,Allocate?????????Picking Rule?????,??????Suggestion????,Suggestion?????? MTL_MATERIAL_TRANSACTIONS_TEMP?(?Pending Transactions)? ????Allocate??,??????Allocation????Single,Multiple??None???,Single??, ??????????Suggestion?Transaction??,Multiple???????;None??????Suggestion? ?(????????????????) ????????Transact??Move Order ?Transact??,Inventory Transaction Manager ???Suggestion Transactions(MMTT),???????????????,??????Subinventory??????(Staging)??? Transction???Material Transaction?Form????? ????Reservation??,?Transact??,???????,Reservation????????,????Sub,locator???? ??:????,Shipping Transaction?Line Status?"Staged/Pick Confirmed",Next Step?"Ship Confirm/Close Trip Stop";????????Booked,??????”Picked“? Ship Confirm Deliveries ??:Order Management > Shipping > Transactions ???Delivery??,??Ship Confirm(????),????Pick Release???,????Autocreate Delivery,???????Define Shipping Parameters????????,??shipping parameters???????,?????????Ship Confirm?????Action->Auto-create Deliveries. Delivery????????????????,????????.... Delivery??,??Ship Confirm???,???????,"Defer Interface"?????,?????????Interface Trip Stop SRS,????Defer Interface,?OK? Delivery was successfully confirmed!!! Ship Confirm????????????MTL_TRANSACTIONS_INTERFACE??,??MTI??????Sales Order Issue,??????????Interface Trip Stop???,???MTI??MMT??? ??:????,Shipping Transaction?Line Status?"Shipped",Next Step?"Run Interfaces";????????Booked,??????”Shipped“? Interface Trip Stop - SRS ?????Ship Confirm??????Defer Interface,??????????????Interface Trip Stop - SRS? ??:Order Management > Shipping > Interface > Run > Request:Interface Trip Stop - SRS Interface Trip Stop????????:Inventory Interface  SRS(????????)? Order Management Interface  SRS(?????????????AR??)? Inventory Interface  SRS???Shipping Transaction??????MTI,??INV Manager????MTI????MMT??,??Sales Order Issue?transaction??????,???????????Reservation????Inventory Interface  SRS?????,???WSH_DELIVERY_DETAILS??INV_INTERFACED_FLAG???Y? 
Order Management Interface - SRS??Inventory Interface  SRS?????,??Request?????????????AR??,OM Interface????????WSH_DELIVERY_DETAILS??OE_INTERFACED_FLAG?Y? ??:????,Shipping Transaction?Line Status?"Interfaced",Next Step?"Not Applicable";????????Booked,??????”Shipped“? Workflow background Process ??:Inventory > Workflow Background Engine Item Type:OM Order Line Process Deferred:Yes Process Timeout:No ??program????Deffered???workflow,Workflow Background Process???,???????Order????RA Interface???(RA_INTERFACE_LINES_ALL,RA_INTERFACE_SALESCREDITS_ALL,RA_Interface_distribution) ????????SQL???RA Interface??: 1.SELECT * FROM RA_INTERFACE_LINES_ALL WHERE sales_order = '65961'; 2.SELECT * FROM RA_INTERFACE_SALESCREDITS_ALL WHERE INTERFACE_LINE_ID IN (SELECT INTERFACE_LINE_ID FROM RA_INTERFACE_LINES_ALL WHERE sales_order = '65961' ); 3.SELECT * FROM RA_INTERFACE_DISTRIBUTIONS_ALL WHERE INTERFACE_LINE_ID IN (SELECT INTERFACE_LINE_ID FROM RA_INTERFACE_LINES_ALL WHERE sales_order = '65961' ); ?????RA Interface??,??OE_ORDER_LINES_ALL?INVOICE_INTERFACE_STATUS_CODE????? Yes,INVOICED_QUANTITY?????????????????????????Closed,????????Booked? AutoInvoice ????AR?? ??:Account Receivable > Interface > AutoInvoice Name:Autoinvoice Master Program Invoice Source:Order Entry Default Day:???? ???,?request????”Autoinvoice Import Program“???? ???,????Auto Invoice Program????RA?interface?,?????????????,???????AR???? (RA_CUSTOMER_TRX_ALL,RA_CUSTOMER_TRX_LINES,AR_PAYMENT_SCHEDULES). ?????? Order > Action > Additional Information > Invoices/Credit Memos????????,???????SQL?????AR??, SELECT ooha.order_number , oola.line_number so_line_number , oola.ordered_item , oola.ordered_quantity * oola.unit_selling_price so_extended_price , rcta.trx_number invoice_number , rcta.trx_date , rctla.line_number inv_line_number , rctla.unit_selling_price inv_unit_selling_price FROM oe_order_headers_all ooha , oe_order_lines_all oola , ra_customer_trx_all rcta , ra_customer_trx_lines_all rctla WHERE ooha.header_id = oola.header_id AND rcta.customer_trx_id = rctla.customer_trx_id AND rctla.interface_line_attribute6 = TO_CHAR (oola.line_id) AND rctla.interface_line_attribute1 = TO_CHAR (ooha.order_number) AND order_number = :p_order_number; ??Autoinvoice Import Program???error???,?????RA_INTERFACE_ERRORS_ALL?Message_text??,???????? Closing the Order ?????????,?????????(Close??Cancel)?0.5?,??????Workflow Background Process??????? ????????:you can wait until month-end and the “Order Flow – Generic” workflow will close it for you. Order&Shipping Transactions Status Summary Step Order Header Status Order Line Status Order Flow Workflow Status (Order Header) Line Flow Workflow Status (Order Line) Shipping Transaction  Status(RELEASED_STATUS in WDD) 1. Enter an Order Entered Entered Book Order Manual Enter – Line                              N/A 2. Book the Order Booked Awaiting Shipping Close Order Schedule ->Create Supply ->Ship – Line                       Ready to Release(R) 3. Pick the Order Booked Picked Close Order Ship – Line 1.Released to Warehouse(S)(Pick Release but not pick confirm) 2.Staged/Pick Confirmed(Y)(After pick confirm) 4. Ship the Order Booked Shipped Close Order Fulfill – Deferred 1.Shipped(After ship confirm) 2.Interfaced(C)(After ITS) Booked Closed Close Order Fulfill ->Invoice Interface ->Close Line -> End 5. Close the Order Closed Closed End End ????,shipping txn???,??????????:http://blog.csdn.net/pan_tian/article/details/7696528 ======EOF======

    Read the article

  • Hyper-V File Server Clustering - at my wit’s end

    - by René Kåbis
    I am at my wit’s end with File Server clustering under Hyper-V. I am hoping that someone might be able to help me figure out this Gordian Knot of a technology that seems to have dead ends (like forcing cluster VMs to use iSCSI drives where normally-attached VHDX drives could suffice) where logic and reason would normally provide a logical solution. My hardware: I will be running three servers (in the end), but right now everything is taking place on one server. One of the secondary servers will exist purely as a witness/quorum, and another slightly more powerful one will be acting as an emergency backup (with additional storage, just not redundant) to hold the secondary AD VM and the other halves of a set of clustered VMs: the SQL VM and the file system VM. Please note, these each are the depreciated nodes of a cluster, the main nodes will be on the most powerful first machine. My heavy lifter is a machine that also contains all of the truly redundant storage on the network. If this gives anyone the heebie-geebies, too bad. It has a 6TB (usable) RAID-10 array, and will (in the end) hold the primary nodes of both aforementioned clusters, but is right now holding all VMs. This is, right now: DC01, DC02, SQL01, SQL02, FS01 & FS02. Eventually, I will be adding additional VMs to handle Exchange, Sharepoint and Lync, but only to this main server (the secondary server won't be able to handle more than three or four VMs, so why burden it? The AD, SQL & FS VMs are the most critical for the business). If anyone is now saying, “wait, what about a SAN or a NAS for the file servers?”, well too bad. What exists on the main machine is what I have to deal with. I followed these instructions, but I seem to be unable to get things to work. In order to make the file server truly redundant, I cannot trust any one machine to hold the only data store on the network. Therefore, I have created a set of iSCSI drives on the VM-host of the main machine, and attached one to each file server VM. The end result is that I want my FS01 to sit on the heavy lifter, along with its iSCSI “drive”, and FS02 will sit on the secondary machine with its own iSCSI “drive” there as well. That is, neither iSCSI drive will end up sitting on the same machine as the other. As such, the clustered FS will utterly duplicate the contents of the iSCSI drives between each other, so that if one physical machine (or the FS VM) goes toes-up, the other has got a full copy of the data on its own iSCSI drive. My problem occurs when I try to apply the file server role within the failover cluster manager. Actually, it is even before that -- it occurs when adding the disks. Since I have added each disk preferentially to a specific VM (by limiting the initiator by DNS hostname, and by adding two-way CHAP authentication), this forces each VM to be in control of its own iSCSI disk. However, when I try to add the disks to the Disks section of Storage within Failover Cluster Manager, the entire process fails for a random disk of the pair. That is, one will get online, but the other will remain offline because it does not have the correct “owner node”. I mean, really -- WTF? Of course it doesn’t have the right owner node, both drives are showing the same node name!! I cannot seem to have one drive show up with one node name as owner, and the other drive show up with the other node name as owner. And because both drives are not “online”, I cannot create a pool to apply to a cluster role. Talk about getting stuck between a rock and a hard place! 
I’ve got more to add, but my work is closing for the day and I have to wrap things up. I will try to add more tomorrow morning when I get in. My main objective is to have a file server VM on each machine, the storage on each machine, but a transparent failover in case one physical machine fails. Essentially, a failover FS that doesn’t care which machine fails -- the storage contents are replicated equally on each machine. Am I even heading in the right direction?

    Read the article

  • Ubuntu 11.10, using wget/curl fails with ssl

    - by Greg Spiers
    Note: See edit 3 for solution On a completely new install of Ubuntu I'm getting the following errors when using wget: wget https://test.sagepay.com --2012-03-27 12:55:12-- https://test.sagepay.com/ Resolving test.sagepay.com... 195.170.169.8 Connecting to test.sagepay.com|195.170.169.8|:443... connected. ERROR: cannot verify test.sagepay.com's certificate, issued by `/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)06/CN=VeriSign Class 3 Extended Validation SSL SGC CA': Unable to locally verify the issuer's authority. To connect to test.sagepay.com insecurely, use `--no-check-certificate'. I've tried installing ca-certificates and configuring the ca-certs and they appear to all be setup in /etc/ssl/certs. The same issue exists for cURL: curl https://test.sagepay.com curl: (60) SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed Which leads me to believe it's something wrong with openssl server wide. wget and curl both work correctly locally on OSX and I have confirmed with a few people that it's working on their servers so I suspect it's nothing to do with the server I'm attempting to connect to. Any ideas or suggestions on things to try to narrow it down? Thank you Edit As requested verbose output from curl curl -Iv https://test.sagepay.com * About to connect() to test.sagepay.com port 443 (#0) * Trying 195.170.169.8... connected * Connected to test.sagepay.com (195.170.169.8) port 443 (#0) * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS alert, Server hello (2): * SSL certificate problem, verify that the CA cert is OK. Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed * Closing connection #0 curl: (60) SSL certificate problem, verify that the CA cert is OK. 
Details: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed More details here: http://curl.haxx.se/docs/sslcerts.html Edit 2 Using the hash from your comment I see this: ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al 7651b327.0 lrwxrwxrwx 1 root root 59 2012-03-27 12:48 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority.pem ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al Verisign_Class_3_Public_Primary_Certification_Authority.pem lrwxrwxrwx 1 root root 94 2012-01-18 07:21 Verisign_Class_3_Public_Primary_Certification_Authority.pem -> /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt ubuntu@srv-tf6sq:/etc/ssl/certs$ ls -al /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt -rw-r--r-- 1 root root 834 2011-09-28 14:53 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt ubuntu@srv-tf6sq:/etc/ssl/certs$ more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt -----BEGIN CERTIFICATE----- MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i 2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ 2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ -----END CERTIFICATE----- But doing the steps myself I end up with a different hash: strace -o /tmp/foo.out curl -Iv https://test.sagepay.com and grep ssl /tmp/foo.out open("/lib/x86_64-linux-gnu/libssl.so.1.0.0", O_RDONLY) = 3 stat("/etc/ssl/certs/415660c1.0", {st_mode=S_IFREG|0644, st_size=834, ...}) = 0 open("/etc/ssl/certs/415660c1.0", O_RDONLY) = 4 stat("/etc/ssl/certs/415660c1.1", 0x7fff7dab07b0) = -1 ENOENT (No such file or directory) readlink -f /etc/ssl/certs/415660c1.0 /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt more /usr/share/ca-certificates/mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt -----BEGIN CERTIFICATE----- MIICPDCCAaUCEDyRMcsf9tAbDpq40ES/Er4wDQYJKoZIhvcNAQEFBQAwXzELMAkG A1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFz cyAzIFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2 MDEyOTAwMDAwMFoXDTI4MDgwMjIzNTk1OVowXzELMAkGA1UEBhMCVVMxFzAVBgNV BAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmlt YXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUAA4GN ADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhE BarsAx94f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/is I19wKTakyYbnsZogy1Olhec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0G CSqGSIb3DQEBBQUAA4GBABByUqkFFBkyCEHwxWsKzH4PIRnN5GfcX6kb5sroc50i 2JhucwNhkcV8sEVAbkSdjbCxlnRhLQ2pRdKkkirWmnWXbj9T/UWZYB2oK0z5XqcJ 2HUw19JlYD1n1khVdWk/kfVIC0dpImmClr7JyDiGSnoscxlIaU5rfGW/D/xwzoiQ -----END CERTIFICATE----- Any other ideas? Thank you for the help so far :) Edit 3 So it turns out that installing the ca-certificates package didn't install the one that I needed. 
I found this post about certificates being presented out of order. This seems to be the case with my request to sagepay. The solution ended up being to install another CA certificate from Verisign. I'm not sure why this fixes the issue with it being out of order but it does, but I suspect the out of order issue really isn't a problem at all and it was infact because I was missing a certificate all along. The additional certificate is available in that post but I didn't want to blindly trust it. I've looked at the list of CA certificates from cURL's site and it is listed there so I do trust it. The certificate: Verisign Class 3 Public Primary Certification Authority ======================================================= -----BEGIN CERTIFICATE----- MIICPDCCAaUCEHC65B0Q2Sk0tjjKewPMur8wDQYJKoZIhvcNAQECBQAwXzELMAkGA1UEBhMCVVMx FzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAzIFB1YmxpYyBQcmltYXJ5 IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTk2MDEyOTAwMDAwMFoXDTI4MDgwMTIzNTk1OVow XzELMAkGA1UEBhMCVVMxFzAVBgNVBAoTDlZlcmlTaWduLCBJbmMuMTcwNQYDVQQLEy5DbGFzcyAz IFB1YmxpYyBQcmltYXJ5IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MIGfMA0GCSqGSIb3DQEBAQUA A4GNADCBiQKBgQDJXFme8huKARS0EN8EQNvjV69qRUCPhAwL0TPZ2RHP7gJYHyX3KqhEBarsAx94 f56TuZoAqiN91qyFomNFx3InzPRMxnVx0jnvT0Lwdd8KkMaOIG+YD/isI19wKTakyYbnsZogy1Ol hec9vn2a/iRFM9x2Fe0PonFkTGUugWhFpwIDAQABMA0GCSqGSIb3DQEBAgUAA4GBALtMEivPLCYA TxQT3ab7/AoRhIzzKBxnki98tsX63/Dolbwdj2wsqFHMc9ikwFPwTtYmwHYBV4GSXiHx0bH/59Ah WM1pF+NEHJwZRDmJXNycAA9WjQKZ7aKQRUzkuxCkPfAyAw7xzvjoyVGM5mKf5p/AfbdynMk2Omuf Tqj/ZA1k -----END CERTIFICATE----- I put this in a file in: /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt I then modified the /etc/ca-certificates.conf and added the following line at the end: curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt After that I ran the command: sudo update-ca-certificates Looking into the /etc/ssl/certs directory I see it correctly linked: ls -al | grep cURL lrwxrwxrwx 1 root root 69 2012-03-27 16:03 415660c1.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem lrwxrwxrwx 1 root root 69 2012-03-27 16:03 7651b327.0 -> Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem lrwxrwxrwx 1 root root 101 2012-03-27 16:03 Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.pem -> /usr/share/ca-certificates/curl/Verisign_Class_3_Public_Primary_Certification_Authority-from_cURL.crt And everything works! curl -I https://test.sagepay.com HTTP/1.1 200 OK...

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78  | Next Page >