Search Results

Search found 2824 results on 113 pages for 'supply chain'.


  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database and getting it into an Excel spreadsheet for analysis, reporting, transformation, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database and then import it into Excel; for example, you can use MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data, and then specify the worksheet you want to put the data into. Another way is to dump a comma-delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) and then open the file in Excel using the Text Import Wizard to attempt to correctly split the data into columns. These methods are fine, but involve some degree of technical knowledge to make the magic happen and require repeating several steps each time data needs to be imported from a MySQL table to an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel. Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010 and MySQL for Excel installed.

    1. Opening MySQL for Excel
    Being an Excel Add-In, MySQL for Excel is opened from within Excel, so to use it open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon.

    2. Creating a MySQL Connection (may be optional)
    If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running (if different than localhost), the Port to connect to and the Username for the login. If you wish to test whether your setup is good to go, click Test Connection and an information dialog will pop up stating whether the connection is successful or errors were found.

    3. Opening a connection to a MySQL Server
    To open a pre-configured connection to a MySQL Server you just need to double-click it, so the Connection Password dialog is displayed where you enter the password for the login.

    4. Selecting a MySQL Schema
    After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, you just need to either double-click the desired Schema or select it and click Next >.

    5. Importing data…
    All previous steps were really the basic minimum needed to drill down to the DB Object Selection Panel, where you can see the Database Objects (grouped by type: Tables, Views and Procedures in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them.
    a. From a MySQL Table
    To import from a Table you just need to select it from the list in the Database Objects' Tables group; after selecting it you will notice the actions below the list become available; then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count and a 10-row preview of the data meant for the user to see the columns that the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 – 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning will be displayed in the dialog, meaning the imported number of rows will be limited by that maximum number (65,535 rows if the worksheet is in Compatibility Mode). In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported since the first row is used for the column names if the Include Column Names as Headers checkbox is checked.

    b. From a MySQL View
    Similar to the way of importing from a Table, to import from a View you just need to select it from the list in the Database Objects' Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a Table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is actually as if we are extracting data from a Table, so the Import Data dialog is identical for those 2 Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. Also the Compatibility Mode warning will be displayed if data exceeds the maximum number of rows explained before.

    c. From a MySQL Procedure
    To import from a Procedure you just need to select it from the list in the Database Objects' Procedures group (note you can see Procedures here but not Functions since these return a single value, so by design they are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time you can see it looks different from the one used for Tables and Views. Given the nature of Stored Procedures, they first require that values are supplied for their Parameters, and Procedures can also return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values click Call.
After calling the Procedure, the Result Sets returned by it are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. You can see each Result Set is displayed as a tab so you can see a preview of the returned data.  You can specify if you want to import the Selected Result Set (default), All Result Sets – Arranged Horizontally or All Result Sets – Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot.  Note in this example all Result Sets were imported and arranged vertically. As you can see using MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!
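    For comparison with the Connector/ODBC route mentioned at the start of this guide, here is a minimal C# sketch of pulling a MySQL table into memory over ODBC; the DSN name, credentials and table name are placeholder assumptions, not values from the guide, and it assumes MySQL Connector/ODBC is installed.

        using System;
        using System.Data;
        using System.Data.Odbc;

        class OdbcImportSketch
        {
            static void Main()
            {
                // Hypothetical connection string; "MySqlDsn" is a placeholder DSN
                // created with the MySQL Connector/ODBC driver.
                const string connectionString = "DSN=MySqlDsn;Uid=exampleUser;Pwd=examplePassword;";

                using (var connection = new OdbcConnection(connectionString))
                using (var adapter = new OdbcDataAdapter("SELECT * FROM employees", connection))
                {
                    // "employees" is an illustrative table name.
                    var table = new DataTable("employees");
                    adapter.Fill(table);   // pulls the result set into memory

                    // From here the rows could be written to a worksheet, a CSV file, etc.
                    Console.WriteLine("Fetched " + table.Rows.Count + " rows with " +
                                      table.Columns.Count + " columns.");
                }
            }
        }

    MySQL for Excel removes exactly this kind of plumbing, which is the point of the guide.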


  • C#: Optional Parameters - Pros and Pitfalls

    - by James Michael Hare
    When Microsoft rolled out Visual Studio 2010 with C# 4, I was very excited to learn how I could apply all the new features and enhancements to help make me and my team more productive developers. Default parameters have been around forever in C++, and were intentionally omitted in Java in favor of using overloading to satisfy that need as it was though that having too many default parameters could introduce code safety issues.  To some extent I can understand that move, as I’ve been bitten by default parameter pitfalls before, but at the same time I feel like Java threw out the baby with the bathwater in that move and I’m glad to see C# now has them. This post briefly discusses the pros and pitfalls of using default parameters.  I’m avoiding saying cons, because I really don’t believe using default parameters is a negative thing, I just think there are things you must watch for and guard against to avoid abuses that can cause code safety issues. Pro: Default Parameters Can Simplify Code Let’s start out with positives.  Consider how much cleaner it is to reduce all the overloads in methods or constructors that simply exist to give the semblance of optional parameters.  For example, we could have a Message class defined which allows for all possible initializations of a Message: 1: public class Message 2: { 3: // can either cascade these like this or duplicate the defaults (which can introduce risk) 4: public Message() 5: : this(string.Empty) 6: { 7: } 8:  9: public Message(string text) 10: : this(text, null) 11: { 12: } 13:  14: public Message(string text, IDictionary<string, string> properties) 15: : this(text, properties, -1) 16: { 17: } 18:  19: public Message(string text, IDictionary<string, string> properties, long timeToLive) 20: { 21: // ... 22: } 23: }   Now consider the same code with default parameters: 1: public class Message 2: { 3: // can either cascade these like this or duplicate the defaults (which can introduce risk) 4: public Message(string text = "", IDictionary<string, string> properties = null, long timeToLive = -1) 5: { 6: // ... 7: } 8: }   Much more clean and concise and no repetitive coding!  In addition, in the past if you wanted to be able to cleanly supply timeToLive and accept the default on text and properties above, you would need to either create another overload, or pass in the defaults explicitly.  With named parameters, though, we can do this easily: 1: var msg = new Message(timeToLive: 100);   Pro: Named Parameters can Improve Readability I must say one of my favorite things with the default parameters addition in C# is the named parameters.  It lets code be a lot easier to understand visually with no comments.  Think how many times you’ve run across a TimeSpan declaration with 4 arguments and wondered if they were passing in days/hours/minutes/seconds or hours/minutes/seconds/milliseconds.  A novice running through your code may wonder what it is.  Named arguments can help resolve the visual ambiguity: 1: // is this days/hours/minutes/seconds (no) or hours/minutes/seconds/milliseconds (yes) 2: var ts = new TimeSpan(1, 2, 3, 4); 3:  4: // this however is visually very explicit 5: var ts = new TimeSpan(days: 1, hours: 2, minutes: 3, seconds: 4);   Or think of the times you’ve run across something passing a Boolean literal and wondered what it was: 1: // what is false here? 2: var sub = CreateSubscriber(hostname, port, false); 3:  4: // aha! 
Much more visibly clear 5: var sub = CreateSubscriber(hostname, port, isBuffered: false);   Pitfall: Don't Insert new Default Parameters In Between Existing Defaults Now let's consider two potential pitfalls.  The first is really an abuse.  It's not really a fault of the default parameters themselves, but a fault in the use of them.  Let's consider that Message constructor again with defaults.  Let's say you want to add a messagePriority to the message and you think this is more important than a timeToLive value, so you decide to put messagePriority before it in the defaults, which gives you: 1: public class Message 2: { 3: public Message(string text = "", IDictionary<string, string> properties = null, int priority = 5, long timeToLive = -1) 4: { 5: // ... 6: } 7: }   Oh boy have we set ourselves up for failure!  Why?  Think of all the code out there that could already be using the library and already specifying the timeToLive, such as this possible call: 1: var msg = new Message("An error occurred", myProperties, 1000);   Before, this specified a message with a TTL of 1000; now it specifies a message with a priority of 1000 and a time to live of -1 (infinite).  All of this with NO compiler errors or warnings. So the rule to take away is: if you are adding new default parameters to a method that's currently in use, make sure you add them to the end of the list or create a brand new method or overload. Pitfall: Beware of Default Parameters in Inheritance and Interface Implementation Now, the second potential pitfall has to do with inheritance and interface implementation.  I'll illustrate with a puzzle: 1: public interface ITag 2: { 3: void WriteTag(string tagName = "ITag"); 4: } 5:  6: public class BaseTag : ITag 7: { 8: public virtual void WriteTag(string tagName = "BaseTag") { Console.WriteLine(tagName); } 9: } 10:  11: public class SubTag : BaseTag 12: { 13: public override void WriteTag(string tagName = "SubTag") { Console.WriteLine(tagName); } 14: } 15:  16: public static class Program 17: { 18: public static void Main() 19: { 20: SubTag subTag = new SubTag(); 21: BaseTag subByBaseTag = subTag; 22: ITag subByInterfaceTag = subTag; 23:  24: // what happens here? 25: subTag.WriteTag(); 26: subByBaseTag.WriteTag(); 27: subByInterfaceTag.WriteTag(); 28: } 29: }   What happens?  Well, even though the object in each case is SubTag whose tag is "SubTag", you will get: 1: SubTag 2: BaseTag 3: ITag   Why?  Because default parameters are resolved at compile time, not runtime!  This means that the default does not belong to the object being called, but to the reference type it's being called through.  Since the SubTag instance is being called through an ITag reference, it will use the default specified in ITag. So the moral of the story here is to be very careful how you specify defaults in interfaces or inheritance hierarchies.  I would suggest avoiding repeating them, and instead concentrating on the layer of classes or interfaces you most likely expect your caller to be calling from. For example, if you have a messaging factory that returns an IMessage which can be either an MsmqMessage or JmsMessage, it only makes sense to put the defaults at the IMessage level since chances are your user will be using the interface only. So let's sum up.  In general, I really love default and named parameters in C# 4.0.  I think they're a great tool to help make your code easier to read and maintain when used correctly. 
On the plus side, default parameters:
- Reduce redundant overloading for the sake of providing optional calling structures.
- Improve readability by being able to name an ambiguous argument.
But remember to make sure you:
- Do not insert new default parameters in the middle of an existing set of default parameters; this may cause unpredictable behavior that may not necessarily throw a syntax error – add them to the end of the list or create a new method.
- Be extremely careful how you use default parameters in inheritance hierarchies and interfaces – choose the most appropriate level to add the defaults based on expected usage.
Technorati Tags: C#,.NET,Software,Default Parameters
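    To make the remedy for the first pitfall concrete, here is a hedged sketch (the Message shape and the priority default are illustrative, not lifted from the original post) of the two safer options: appending the new parameter at the end, or leaving the original signature alone and exposing the new setting separately.

        using System.Collections.Generic;

        public class Message
        {
            // Safe evolution: the new priority parameter is appended at the END, so an
            // existing positional call such as new Message("An error occurred", props, 1000)
            // still binds 1000 to timeToLive rather than silently becoming a priority.
            public Message(string text = "",
                           IDictionary<string, string> properties = null,
                           long timeToLive = -1,
                           int priority = 5)
            {
                // ...
            }

            // Alternative: leave the original parameter list untouched and expose the new
            // setting through a clearly named factory method (or a new overload) instead.
            public static Message WithPriority(int priority,
                                               string text = "",
                                               IDictionary<string, string> properties = null,
                                               long timeToLive = -1)
            {
                var message = new Message(text, properties, timeToLive);
                // ... apply the priority to the message here
                return message;
            }
        }

    Callers that want the new behavior can opt in with named arguments (new Message(priority: 1)) or the factory method, while older positional calls keep the meaning they had before.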


  • Of C# Iterators and Performance

    - by James Michael Hare
    Some of you reading this will be wondering, "what is an iterator" and think I'm locked in the world of C++.  Nope, I'm talking C# iterators.  No, not enumerators, iterators.  So, for those of you who do not know what iterators are in C#, I will explain it in summary, and for those of you who know what iterators are but are curious of the performance impacts, I will explore that as well.  Iterators have been around for a bit now, and there are still a bunch of people who don't know what they are or what they do.  I don't know how many times at work I've had a code review on my code and have someone ask me, "what's that yield word do?"  Basically, this post came to me as I was writing some extension methods to extend IEnumerable<T> -- I'll post some of the fun ones in a later post.  Since I was filtering the resulting list down, I was using the standard C# iterator concept; but that got me wondering: what are the performance implications of using an iterator versus returning a new enumeration?

    So, to begin, let's look at a couple of methods.  This is a new (albeit contrived) method called Every(...).  The goal of this method is to accept an enumeration and return every nth item in the enumeration (including the first).  So Every(2) would return items 0, 2, 4, 6, etc.  Now, if you wanted to write this in the traditional way, you may come up with something like this:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            List<T> newList = new List<T>();
            int count = 0;

            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    newList.Add(i);
                }
            }

            return newList;
        }

    So basically this method takes any IEnumerable<T> and returns a new IEnumerable<T> that contains every nth item.  Pretty straightforward.  The problem?  Well, Every<T>(...) will construct a list containing every nth item whether or not you care.  What happens if you were searching this result for a certain item and find that item after five tries?  You would have generated the rest of the list for nothing.

    Enter iterators.  This C# construct uses the yield keyword to effectively defer evaluation of the next item until it is asked for.  This can be very handy if the evaluation itself is expensive or if there's a fair chance you'll never want to fully evaluate a list.  We see this all the time in Linq, where many expressions are chained together to do complex processing on a list.  This would be very expensive if each of these expressions evaluated their entire possible result set on call.  Let's look at the same example function, this time using an iterator:

        public static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
        {
            int count = 0;
            foreach (var i in list)
            {
                if ((count++ % interval) == 0)
                {
                    yield return i;
                }
            }
        }

    Notice it does not create a new return value explicitly; the only evidence of a return is the "yield return" statement.  What this means is that when an item is requested from the enumeration, it will enter this method and evaluate until it either hits a yield return (in which case that item is returned) or until it exits the method or hits a yield break (in which case the iteration ends).
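    To make the deferred evaluation concrete before looking at how this works under the hood, here is a small sketch (the Trace helper is purely illustrative, not part of the original post) that logs each item as it is produced; running it prints "yielding" only twice, because First() stops pulling items as soon as it has what it needs:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class DeferredDemo
        {
            // Illustrative iterator that announces each item it yields.
            static IEnumerable<int> Trace(IEnumerable<int> source)
            {
                foreach (var item in source)
                {
                    Console.WriteLine("yielding " + item);
                    yield return item;
                }
            }

            static void Main()
            {
                var numbers = Enumerable.Range(1, 1000);

                // Nothing is printed here: building the query does no work yet.
                var query = Trace(numbers).Where(n => n % 2 == 0);

                // Only enough items to satisfy First() are pulled through the iterator,
                // so "yielding" is printed just twice (for 1 and 2).
                Console.WriteLine(query.First());
            }
        }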
    Behind the scenes, this is all done with a class that the compiler generates to keep track of the state of the iteration, so that every time the next item is asked for, it finds that item and then updates the current position so it knows where to start next time.  It doesn't seem like a big deal, does it?  But keep in mind the key point here: it only returns items as they are requested. Thus if there's a good chance you will only process a portion of the return list and/or if the evaluation of each item is expensive, an iterator may be of benefit.  This is especially true if you intend your methods to be chainable similar to the way Linq methods can be chained.  For example, perhaps you have a List<int> and you want to take every tenth one until you find one greater than 10.  We could write that as:

        List<int> someList = new List<int>();

        // fill list here

        someList.Every(10).TakeWhile(i => i <= 10);

    Now is the difference more apparent?  If we use the first form of Every, the one that makes a copy of the list, it's going to copy the entire list whether we will need those items or not, and that can be costly!  With the iterator version, however, it will only take items from the list until it finds one that is > 10, at which point no further items in the list are evaluated.

    So, sounds neat, eh?  But what's the cost?  That's what you're probably wondering.  So I ran some tests using the two forms of Every above on lists varying from 5 to 500,000 integers and tried various things.  Now, iteration isn't free.  If you are more likely than not to iterate the entire collection every time, the iterator has some very slight overhead:

    Copy vs Iterator on 100% of Collection (10,000 iterations)

        Collection Size   Num Iterated   Type       Total ms
        5                 5              Copy       5
        5                 5              Iterator   5
        50                50             Copy       28
        50                50             Iterator   27
        500               500            Copy       227
        500               500            Iterator   247
        5000              5000           Copy       2266
        5000              5000           Iterator   2444
        50,000            50,000         Copy       24,443
        50,000            50,000         Iterator   24,719
        500,000           500,000        Copy       250,024
        500,000           500,000        Iterator   251,521

    Notice that when iterating over the entire produced list, the times for the iterator are a little better for smaller lists, then get just a slight bit worse for larger lists.  In reality, given the number of items and iterations, the result is near negligible, but just to show that iterators come at a price.  However, it should also be noted that the form of Every that returns a copy will have a left-over collection to garbage collect.

    However, if we only partially evaluate the list, the savings start to show and make it well worth the overhead.  Let's look at what happens if you stop looking after 80% of the list:

    Copy vs Iterator on 80% of Collection (10,000 iterations)

        Collection Size   Num Iterated   Type       Total ms
        5                 4              Copy       5
        5                 4              Iterator   5
        50                40             Copy       27
        50                40             Iterator   23
        500               400            Copy       215
        500               400            Iterator   200
        5000              4000           Copy       2099
        5000              4000           Iterator   1962
        50,000            40,000         Copy       22,385
        50,000            40,000         Iterator   19,599
        500,000           400,000        Copy       236,427
        500,000           400,000        Iterator   196,010

    Notice that the iterator form is now operating quite a bit faster.
    But the savings really add up if you stop on average at 50% (which most searches would typically do):

    Copy vs Iterator on 50% of Collection (10,000 iterations)

        Collection Size   Num Iterated   Type       Total ms
        5                 2              Copy       5
        5                 2              Iterator   4
        50                25             Copy       25
        50                25             Iterator   16
        500               250            Copy       188
        500               250            Iterator   126
        5000              2500           Copy       1854
        5000              2500           Iterator   1226
        50,000            25,000         Copy       19,839
        50,000            25,000         Iterator   12,233
        500,000           250,000        Copy       208,667
        500,000           250,000        Iterator   122,336

    Now we see that if we only expect to go on average 50% into the results, we tend to shave off around 40% of the time.  And this is only for one level deep.  If we are using this in a chain of query expressions it only adds to the savings.  So my recommendation?  If you have a reasonable expectation that someone may only want to partially consume your enumerable result, I would always tend to favor an iterator.  The cost if they iterate the whole thing does not add much at all -- and if they consume only partially, you reap some really good performance gains.  Next time I'll discuss some of my favorite extensions I've created to make development life a little easier and maintainability a little better.
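    For readers who want to reproduce numbers along the lines of the tables above, here is a hedged sketch of how such a timing comparison might be set up; it is not the author's original benchmark harness, and the figures it produces will vary by machine.

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.Linq;

        static class EveryBenchmarkSketch
        {
            // The iterator form of Every() from the article, repeated so the sketch
            // compiles on its own.
            static IEnumerable<T> Every<T>(this IEnumerable<T> list, int interval)
            {
                int count = 0;
                foreach (var i in list)
                {
                    if ((count++ % interval) == 0)
                    {
                        yield return i;
                    }
                }
            }

            static void Main()
            {
                const int iterations = 10000;
                List<int> data = Enumerable.Range(0, 50000).ToList();

                var watch = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    // Taking 2,500 of the yielded items walks 25,000 source items,
                    // mirroring the "50% of Collection" scenario in the last table.
                    foreach (var item in data.Every(10).Take(2500)) { }
                }
                watch.Stop();

                Console.WriteLine("Iterator form, 50% consumption: " +
                                  watch.ElapsedMilliseconds + " ms");
            }
        }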


  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen linux machines that handles over a petabyte of data. The system right now is very ad-hoc; processing automation and data management are handled by a collection of large perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, evenly distribute load and data with replication, and grow in disk space and compute power. The system needs to be able to handle millions of files, varying in size from 50 megabytes to 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here.

    Here is a quick list of things I am looking for:
    - Reliability: These data must be accessible over HTTP about 99% of the time so I need something that does data replication across the cluster.
    - Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    - Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above.
    - Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes I would like the job scheduler to schedule jobs on or close to the node that the data is actually on to cut down on network traffic.

    Here is what I've looked at so far:

    Storage Management:
    - GlusterFS: Looks really nice and easy to use but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    - GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    - Ceph: Seems way too immature right now.

    Distributed processing:
    - Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly that is). But Oracle got its icy grip around it and it no longer seems very desirable.

    Both:
    - Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation. Distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure if the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    - OpenStack: I've done some reading on this but I'm having trouble deciding if it fits well with my problem or not.

    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!


  • ESXI guests not using available CPU resources

    - by Alan M
    I have a VMWare ESXI 5.0.0 (it's a bit old, I know) host with three guest VMs on it. For reasons unknown, the guests will not use much of the availble CPU resources. I have all three guests in a single pool, with the hosts all configured to use the same amount of resource shares, so they're basically 33% each. The three guests are basically identically configured as far as their VM resources go. So the problem is, even when the guests are performing what should be very 'busy' activitites, such as at bootup, the actual host CPU consumed is something tiny, like 33mhz, when seen via vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings; cranking up the reservation, etc. No matter. The guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions? Update after reading various comments below Per the suggestions below, I did remove the guests from the application pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to do a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (guest is w2k8r2 server). Host graphs for CPU, Mem, Disk are basically flatlining; very little demand. Same is true for the guest stats; while the guest itself seems to be crawling, the guest resource graphing shows very little activity across CPU, Mem, Disk. Host is a Dell PowerEdge 2900, has 2 physical CPU,20gb RAM. (it's a test/dev environment using surplus gear) Guest1 has: VM ver. 7, 2vCPU, 4gb RAM, 140gb storage which lives on a RAID-5 array on the host. Guest2 has: VM ver. 7, 2vCPU, 4gb RAM, 140gb storage which lives on a RAID-5 array on the host. Guest3 has: VM ver. 7, 1vCPU, 2gb RAM, 2tb storage which lives on a RAID-5 ISCSI NAS box Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host would supply the guest with more CPU (mem, disk) on demand. Another Update After checking the stats, it would appear that the host is indeed not busy at all, neither is the guest. I believe I have a good idea on the issue, though; a messed-up VMWare Tools install. The guest has VMware Tools on it, but the host says it does not. VMWare Tools refuses to uninstall, refuses to be upgraded, refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigation. I do not know the origin of the guest itself, nor the specifics on the original VMWare Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to do so since so many folks answered. My suspicion right now is; the problem truly is the guest; the guest is not making a demand on the host, and as a natural result, the host is treating the guest accordingly. My Final Update I am 99% certain the guest VM had something fundamentally wrong with it re VMWare Tools. I created a clone of a different VM with a near-identical OS config, but a properly working install of VMWare tools. The guest runs just great, and takes up it's allotment of resources when it needs to; e.g. 
it eats up about 850mhz CPU during startup, then ticks down to idle once the guest OS is stable.


  • How to diagnose computer lockup/freezing problem

    - by Scott Mitchell
    I built a desktop computer a couple years back with the following specs:

    CPU: Intel Core 2 Quad Q9300 Yorkfield 2.5GHz 6MB L2 Cache LGA 775 95W Quad-Core Processor BX80580Q9300
    Motherboard: EVGA 122-CK-NF68-T1 LGA 775 NVIDIA nForce 680i SLI ATX Intel Motherboard
    Video Card: Two EVGA 256-P2-N758-TR GeForce 8600GT SCC 256MB 128-bit GDDR3 PCI Express x16 SLI Supported Video Card
    PSU: SeaSonic S12 Energy Plus SS-550HT 550W ATX12V V2.3 / EPS12V V2.91 SLI Certified CrossFire Ready 80 PLUS Certified Active PFC Power Supply
    Memory: Two G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual Channel Kit Desktop Memory Model F2-6400CL5D-4GBPQ

    Since its inception, the machine has periodically locked up, the regularity having varied over the years from once a day to once a month. Typically, lockups happen once every few days. By "lockup" I mean my computer just freezes. The screen locks up, I can't move the mouse. Hitting keys on my keyboard that normally turn LEDs on or off on the keyboard (such as Caps Lock) no longer turns the LEDs on or off. If there was music playing at the time of the lockup, noise keeps coming out of the speakers, but it's just the current frequency/note that plays indefinitely. There is no BSOD. When such a lockup occurs I have to do a hard reboot by either turning off the computer or hitting the reset button. I have the most recent version of the NVIDIA hardware drivers, and update them semi-regularly, but that hasn't seemed to help. I am currently using Windows 7 x64, but was previously using Windows Server 2003 x64 and having the same lockup issues. My guess is that it's somehow video driver or motherboard related, but I don't know how to go about diagnosing this problem to narrow down which of the two is the culprit.

    Additional information re: cooling
    Regarding cooling... I've not installed any after-market cooling systems aside from two regular fans I scavenged from an older computer. The fan atop the CPU is the one that shipped with it. One of the two scavenged fans I added is located at the bottom corner of the tower, in an attempt to create some airflow from front to back. The second fan is pointed directly at the two video cards.

    SpeedFan installation and readings
    Per studiohack's suggestion, I installed SpeedFan, which provided the following temperature readings: GPU: 63C GPU: 65C System: 76C CPU: 64C AUX: 36C Core 0: 78C Core 1: 76C Core 2: 79C Core 3: 79C

    Update #3: Another Lockup :-(
    Well, I had another lockup last night. :-( SpeedFan reported the CPU temp at 38 C when it happened, and there was no spike in temperature leading up to the freeze. One thing I notice is that the freeze seems more likely to happen if I am watching a video. In fact, of the last 5 freezes over the past month, 4 of them have been while watching a video on Flickr. Not necessarily the same video, but a video nevertheless. I don't know if this is just coincidence or if it means anything. (As an aside, each night before bedtime my 2 year old daughter sits on my lap and watches some home videos on Flickr and, in the last month, has learned the phrase, "Uh oh, computer broke.")

    Update #4: MemTest86 and 3DMark06 Test Results:
    Per suggestions in the comments, I ran MemTest86 overnight and it cycled through the 8 GB of memory 5 times without error. I also ran the 3DMark06 test without a problem (see my scores at http://3dmark.com/3dm06/15163549). So... what now? :-) Any further suggestions on what to check?
Is there some way to get a stack trace or something when the computer locks like that? Thanks


  • What do these messages in the Qmail maillog indicate?

    - by Griffo
    There seems to be an endless supply of messages in the Qmail maillog for a single address. Can anyone shed some light on why this might be and whether it is a problem? To me it looks like either spam or some sort of unhandled problem. It strikes me as unusual that the 'from=' field is blank. This is on a VPS using Plesk in case that's important. Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23593]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23586]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23585]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23584]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23583]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23600]: [email protected] Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: from= Jun 30 15:10:17 vps-1001108-595 qmail-remote-handlers[23599]: [email protected] EDIT Here's a sample of one of the emails: Received: (qmail 5603 invoked for bounce); 29 Jun 2011 07:46:31 +0100 Date: 29 Jun 2011 07:46:31 +0100 From: [email protected] To: [email protected] Subject: failure notice Hi. This is the qmail-send program at vps-1001108-595.cp.blacknight.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 200.147.36.13 does not like recipient. Remote host said: 450 4.7.1 Client host rejected: cannot find your hostname, [78.153.208.195] Giving up on 200.147.36.13. I'm not going to try again; this message has been in the queue too long. --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15585 invoked by uid 48); 22 Jun 2011 07:38:26 +0100 Date: 22 Jun 2011 07:38:26 +0100 Message-ID: <[email protected]> To: [email protected] Subject: Cadastre-se e Concorra ? um Carro! MIME-Version: 1.0 Content-type: text/html; charset=iso-8859-1 From: Cielo Fidelidade <[email protected]> <!DOCTYPE HTML> <html> ... <body text removed> <html> If I understand this correctly, this is saying that an email sent by my server, from address [email protected], could not be delivered. However, [email protected] is not a valid email address on my server, so how can email be sent from this address on my server? I have tested whether my server is acting as an open relay, and it isn't. So how else could this be happening? I am getting thousands of these every day. What can I do to prevent it?


  • Is it possible to repair a Cisco 3500 XL (3548) switch with POST Error messages?

    - by Alex
    I've got an old Cisco 3500 XL, and it seems to have hardware issues. I've loaded the latest IOS and cleared all config. Does anyone have any experience fixing the switch core? I'm a reasonably competent SMD solderer, can I replace/reflow some chips? I've checked the power supply voltages and it's all within tolerance, and no visible signs of any component damage. Some chips are hot to the touch. I understand that these were EOL as of 2007, but should have a lifetime warranty for the electronics. I don't have a Cisco support contract, so I can't file a ticket. What should I do? Console output: switch: dir flash: Directory of flash:/ 2 -rwx 1811584 <date> c3500xl-c3h2s-mz.120-5.WC17.bin 1799680 bytes available (1812992 bytes used) switch: boot Loading "flash:c3500xl-c3h2s-mz.120-5.WC17.bin"...################################################################################################################################################################################### File "flash:c3500xl-c3h2s-mz.120-5.WC17.bin" uncompressed and installed, entry point: 0x3000 executing... Restricted Rights Legend Use, duplication, or disclosure by the Government is subject to restrictions as set forth in subparagraph (c) of the Commercial Computer Software - Restricted Rights clause at FAR sec. 52.227-19 and subparagraph (c) (1) (ii) of the Rights in Technical Data and Computer Software clause at DFARS sec. 252.227-7013. cisco Systems, Inc. 170 West Tasman Drive San Jose, California 95134-1706 Cisco Internetwork Operating System Software IOS (tm) C3500XL Software (C3500XL-C3H2S-M), Version 12.0(5)WC17, RELEASE SOFTWARE (fc1) Copyright (c) 1986-2007 by cisco Systems, Inc. Compiled Tue 13-Feb-07 15:04 by antonino Image text-base: 0x00003000, data-base: 0x00352924 Initializing C3500XL flash... flashfs[1]: 1 files, 1 directories flashfs[1]: 0 orphaned files, 0 orphaned directories flashfs[1]: Total bytes: 3612672 flashfs[1]: Bytes used: 1812992 flashfs[1]: Bytes available: 1799680 flashfs[1]: flashfs fsck took 3 seconds. flashfs[1]: Initialization complete. ...done Initializing C3500XL flash. C3500XL POST: System Board Test: Passed C3500XL POST: Daughter Card Test: Passed C3500XL POST: CPU Buffer Test: Passed C3500XL POST: CPU Notify RAM Test: Passed C3500XL POST: CPU Interface Test: Passed C3500XL POST: Testing Switch Core: Passed Error with Switch Core BIST test Phase 0. Returns: Test Complete Low : 0x0FFFFFFF, Test Complete High : 0xFFFFFFFE Test Phase Low : 0x00000040, Test Phase High : 0x00000000 Test Phase Third : 0x00000000, Test Complete Third : 0x000001F8 C3500XL POST FAILURE: Testing Switch Core: Failed C3500XL POST FAILURE: Testing Buffer Table: Failed C3500XL POST FAILURE: Data Buffer Test: Failed C3500XL POST FAILURE: Configuring Switch Parameters: Failed C3500XL POST FAILURE: Switch Core BIST failed. C3500XL POST FAILURE: Cannot test Modules due to failure of Switch Core POST Del Mar Failure (0th Del Mar): req system failed to init C3500XL POST FAILURE: C3500XL POST FAILURE: ATM: required system failed to init C3500XL POST: Ethernet Controller Test: Passed C3500XL POST FAILURE: MII Test: Failed C3500XL POST FAILURE: Error waiting for Ethernet Controller and SW_PARAMS C3500XL POST FAILURE: Initialization/POST failed C3500XL POST FAILURE: AT: Failing because system POST failed Exception (8192)! 
Debug Exception (Could be NULL pointer dereference) CPU Register Context: Vector = 0x00002000 PC = 0x000F36F4 MSR = 0x00029200 CR = 0x22000024 LR = 0x000F6964 CTR = 0x001DE46C XER = 0x00000000 R0 = 0x00000000 R1 = 0x004E2580 R2 = 0x00000000 R3 = 0x00000000 R4 = 0x00000001 R5 = 0x00000000 R6 = 0x004E2718 R7 = 0x004E2718 R8 = 0x00000008 R9 = 0x00000000 R10 = 0x0000FFFF R11 = 0x00480000 R12 = 0x42000024 R13 = 0x00000000 R14 = 0x00000000 R15 = 0x00000000 R16 = 0x00000000 R17 = 0x00000000 R18 = 0x00000000 R19 = 0x00000000 R20 = 0x00000000 R21 = 0x00000000 R22 = 0x00000000 R23 = 0x00000000 R24 = 0x00000000 R25 = 0x00000020 R26 = 0x004E2718 R27 = 0x004E2718 R28 = 0x00000020 R29 = 0x00002513 R30 = 0x00000001 R31 = 0x00000000 Stack trace: PC = 0x000F36F4, SP = 0x004E2580 Frame 00: SP = 0x004E25A0 PC = 0x40000016 Frame 01: SP = 0x004E2618 PC = 0x000F6964 Frame 02: SP = 0x004E26A8 PC = 0x000F76DC Frame 03: SP = 0x004E26C8 PC = 0x000E8114 Frame 04: SP = 0x004E26F0 PC = 0x001F5BF8 Frame 05: SP = 0x004E2710 PC = 0x001F5CF4 Frame 06: SP = 0x004E2748 PC = 0x0023F4DC Frame 07: SP = 0x004E2750 PC = 0x0023E650 Frame 08: SP = 0x004E27C8 PC = 0x0023E89C Frame 09: SP = 0x004E27E0 PC = 0x0028AF34 Frame 10: SP = 0x004E27E8 PC = 0x001E38F8 Frame 11: SP = 0x004E2808 PC = 0x001E39A8 Frame 12: SP = 0x004E2820 PC = 0x0014E220 Frame 13: SP = 0x004E28C8 PC = 0x0014E39C Frame 14: SP = 0x00000000 PC = 0x001EB510


  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
    If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t often need to be altered at all. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition. The Database developers will find it easier if these tests are done before their re-wiring job to implement the new interface.

    If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests.

    Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
There is no natural process to ‘version’ databases.  Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments looks to IT management like work. They see activity, and it looks good. Work means progress.  Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.
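    As a minimal illustration of the application-interface idea described in this article, here is a hedged C# sketch in which the application layer talks only to an interface routine, never to base tables; the procedure name, parameters and gateway class are hypothetical, not taken from any system mentioned in the article.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        // The application's only knowledge of the database is the contract of the
        // interface routine (its name, parameters and result shape). The tables behind
        // it can be refactored freely as long as that contract keeps passing its tests.
        public sealed class CustomerGateway
        {
            private readonly string _connectionString;

            public CustomerGateway(string connectionString)   // e.g. read from configuration
            {
                _connectionString = connectionString;
            }

            public void RecordOrder(int customerId, decimal amount)
            {
                using (var connection = new SqlConnection(_connectionString))
                using (var command = new SqlCommand("api.RecordCustomerOrder", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;   // no ad-hoc SQL, no base tables
                    command.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
                    command.Parameters.Add("@Amount", SqlDbType.Decimal).Value = amount;

                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }

    The application source would version this contract (routine names, parameters, result shapes) alongside its own code, while the database developers remain free to refactor whatever sits behind the procedure.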


  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
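    The article compares a composite type to a class in a higher-level programming language; purely as an illustration of that analogy (this is plain C#, not expressor code), the MaritalStatus rule above could be captured like this:

        using System;

        // Analogy to a shared semantic type: the attribute's name, representation and
        // allowed values (S, M, D) are captured in one reusable definition.
        public sealed class MaritalStatus
        {
            public char Value { get; private set; }

            private MaritalStatus(char value)
            {
                Value = value;
            }

            public static MaritalStatus Parse(char value)
            {
                if (value != 'S' && value != 'M' && value != 'D')
                {
                    // Comparable to the constraint violation an expressor dataflow would
                    // raise, which the surrounding application then decides how to handle.
                    throw new ArgumentOutOfRangeException("value",
                        "Marital status must be S, M or D.");
                }
                return new MaritalStatus(value);
            }
        }

    As with the shared semantic type, the name, representation and allowed values live in one reusable definition, so no other part of the application can construct an invalid marital status.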
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • PowerShell Script to Enumerate SharePoint 2010 or 2013 Permissions and Active Directory Group Membership

    - by Brian T. Jackett
Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/07/01/powershell-script-to-enumerate-sharepoint-2010-or-2013-permissions-and.aspx   In this post I will present a script to enumerate SharePoint 2010 or 2013 permissions across the entire farm down to the site (SPWeb) level.  As a bonus, this script also recursively expands the membership of any Active Directory (AD) group, including nested groups, which you wouldn't be able to find through the SharePoint UI.   History     Back in 2009 (over 4 years ago now) I published one of my most-read blog posts about enumerating SharePoint 2007 permissions.  I finally got around to updating that script to remove deprecated APIs, support the SharePoint 2010 cmdlets, and fix a few bugs.  There are two things the old script did that I had to remove due to major architectural or procedural changes: indenting the XML output and the ability to search for a specific user.    I plan to add back the ability to search for a specific user but wanted to get this version published first.  As for indenting the XML, that could be added but would take some effort.  If there is user demand for it (let me know in the comments or email me using the contact button at the top of the blog) I'll move it up in priorities.    As a side note, you may also notice that I'm not using the Active Directory cmdlets.  This was a conscious decision since not all environments have them available.  Instead I'm relying on the older [ADSI] type accelerator and APIs.  It does add a significant amount of code to the script, but it is necessary for compatibility.  Hopefully in a few years if I need to update again I can remove that legacy code.   Solution    Below is the script to enumerate SharePoint 2010 and 2013 permissions down to site level.  You can also download it from my SkyDrive account or my posting on the TechNet Script Center Repository. SkyDrive TechNet Script Center Repository http://gallery.technet.microsoft.com/scriptcenter/Enumerate-SharePoint-2010-35976bdb   ########################################################### #DisplaySPWebApp8.ps1 # #Author: Brian T. Jackett #Last Modified Date: 2013-07-01 # #Traverse the entire web app site by site to display # hierarchy and users with permissions to site. 
########################################################### function Expand-ADGroupMembership {     Param     (         [Parameter(Mandatory=$true,                    Position=0)]         [string]         $ADGroupName,         [Parameter(Position=1)]         [string]         $RoleBinding     )     Process     {         $roleBindingText = ""         if(-not [string]::IsNullOrEmpty($RoleBinding))         {             $roleBindingText = " RoleBindings=`"$roleBindings`""         }         Write-Output "<ADGroup Name=`"$($ADGroupName)`"$roleBindingText>"         $domain = $ADGroupName.substring(0, $ADGroupName.IndexOf("\") + 1)         $groupName = $ADGroupName.Remove(0, $ADGroupName.IndexOf("\") + 1)                                     #BEGIN - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY         #http://www.microsoft.com/technet/scriptcenter/scripts/powershell/search/users/srch106.mspx         #GET AD GROUP FROM DIRECTORY SERVICES SEARCH         $strFilter = "(&(objectCategory=Group)(name="+($groupName)+"))"         $objDomain = New-Object System.DirectoryServices.DirectoryEntry         $objSearcher = New-Object System.DirectoryServices.DirectorySearcher         $objSearcher.SearchRoot = $objDomain         $objSearcher.Filter = $strFilter         # specify properties to be returned         $colProplist = ("name","member","objectclass")         foreach ($i in $colPropList)         {             $catcher = $objSearcher.PropertiesToLoad.Add($i)         }         $colResults = $objSearcher.FindAll()         #END - CODE ADAPTED FROM SCRIPT CENTER SAMPLE CODE REPOSITORY         foreach ($objResult in $colResults)         {             if($objResult.Properties["Member"] -ne $null)             {                 foreach ($member in $objResult.Properties["Member"])                 {                     $indMember = [adsi] "LDAP://$member"                     $fullMemberName = $domain + ($indMember.Name)                                         #if($indMember["objectclass"]                         # if child AD group continue down chain                         if(($indMember | Select-Object -ExpandProperty objectclass) -contains "group")                         {                             Expand-ADGroupMembership -ADGroupName $fullMemberName                         }                         elseif(($indMember | Select-Object -ExpandProperty objectclass) -contains "user")                         {                             Write-Output "<ADUser>$fullMemberName</ADUser>"                         }                 }             }         }                 Write-Output "</ADGroup>"     } } #end Expand-ADGroupMembership # main portion of script if((Get-PSSnapin -Name microsoft.sharepoint.powershell) -eq $null) {     Add-PSSnapin Microsoft.SharePoint.PowerShell } $farm = Get-SPFarm Write-Output "<Farm Guid=`"$($farm.Id)`">" $webApps = Get-SPWebApplication foreach($webApp in $webApps) {     Write-Output "<WebApplication URL=`"$($webApp.URL)`" Name=`"$($webApp.Name)`">"     foreach($site in $webApp.Sites)     {         Write-Output "<SiteCollection URL=`"$($site.URL)`">"                 foreach($web in $site.AllWebs)         {             Write-Output "<Site URL=`"$($web.URL)`">"             # if site inherits permissions from parent then stop processing             if($web.HasUniqueRoleAssignments -eq $false)             {                 Write-Output "<!-- Inherits role assignments from parent -->"             }             # else site has unique permissions             else             {         
        foreach($assignment in $web.RoleAssignments)                 {                     if(-not [string]::IsNullOrEmpty($assignment.Member.Xml))                     {                         $roleBindings = ($assignment.RoleDefinitionBindings | Select-Object -ExpandProperty name) -join ","                         # check if assignment is SharePoint Group                         if($assignment.Member.XML.StartsWith('<Group') -eq "True")                         {                             Write-Output "<SPGroup Name=`"$($assignment.Member.Name)`" RoleBindings=`"$roleBindings`">"                             foreach($SPGroupMember in $assignment.Member.Users)                             {                                 # if SharePoint group member is an AD Group                                 if($SPGroupMember.IsDomainGroup)                                 {                                     Expand-ADGroupMembership -ADGroupName $SPGroupMember.Name                                 }                                 # else SharePoint group member is an AD User                                 else                                 {                                     # remove claim portion of user login                                     #Write-Output "<ADUser>$($SPGroupMember.UserLogin.Remove(0,$SPGroupMember.UserLogin.IndexOf("|") + 1))</ADUser>"                                     Write-Output "<ADUser>$($SPGroupMember.UserLogin)</ADUser>"                                 }                             }                             Write-Output "</SPGroup>"                         }                         # else an indivdually listed AD group or user                         else                         {                             if($assignment.Member.IsDomainGroup)                             {                                 Expand-ADGroupMembership -ADGroupName $assignment.Member.Name -RoleBinding $roleBindings                             }                             else                             {                                 # remove claim portion of user login                                 #Write-Output "<ADUser>$($assignment.Member.UserLogin.Remove(0,$assignment.Member.UserLogin.IndexOf("|") + 1))</ADUser>"                                                                 Write-Output "<ADUser RoleBindings=`"$roleBindings`">$($assignment.Member.UserLogin)</ADUser>"                             }                         }                     }                 }             }             Write-Output "</Site>"             $web.Dispose()         }         Write-Output "</SiteCollection>"         $site.Dispose()     }     Write-Output "</WebApplication>" } Write-Output "</Farm>"      The output from the script can be sent to an XML which you can then explore using the [XML] type accelerator.  This lets you explore the XML structure however you see fit.  See the screenshot below for an example.      If you do view the XML output through a text editor (Notepad++ for me) notice the format.  Below we see a SharePoint site that has a SharePoint group Demo Members with Edit permissions assigned.  Demo Members has an AD group corp\developers as a member.  corp\developers has a child AD group called corp\DevelopersSub with 1 AD user in that sub group.  As you can see the script recursively expands the AD hierarchy.   Conclusion    It took me 4 years to finally update this script but I‘m happy to get this published.  
I was able to fix a number of errors and smooth out some rough edges.  I plan to develop this into a more full-fledged tool over the next year with more features and flexibility (copy permissions, search for an individual user or group, optionally enumerate lists/items, etc.).  If you have any feedback, feature requests, or issues running it, please let me know.  Enjoy the script!         -Frog Out
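As a usage sketch (the file name Get-SPPermissions.ps1 and the output path below are placeholders, not part of the script itself), the output can be captured to a file and then explored with the [XML] type accelerator roughly like this:

    # Run from a PowerShell session on a farm server with farm-admin rights
    .\Get-SPPermissions.ps1 | Out-File .\FarmPermissions.xml -Encoding UTF8

    # Load the captured output as XML and walk the hierarchy
    [xml]$farm = Get-Content .\FarmPermissions.xml
    $farm.Farm.WebApplication | Select-Object Name, URL
    $farm.Farm.WebApplication.SiteCollection.Site |
        Where-Object { $_.SPGroup } |
        Select-Object URL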

    Read the article

  • Incorrect gzipping of http requests, can't find who's doing it

    - by Ned Batchelder
    We're seeing some very strange mangling of HTTP responses, and we can't figure out what is doing it. We have an app server handling JSON requests. Occasionally, the response is returned gzipped, but with incorrect headers that prevent the browser from interpreting it correctly. The problem is intermittent, and changes behavior over time. Yesterday morning it seemed to fail 50% of the time, and in fact, seemed tied to one of our two load-balanced servers. Later in the afternoon, it was failing only 20 times out of 1000, and didn't correlate with an app server. The two app servers are running Apache 2.2 with mod_wsgi and a Django app stack. They have identical Apache configs and source trees, and even identical packages installed on Red Hat. There's a hardware load balancer in front, I don't know the make or model. Akamai is also part of the food chain, though we removed Akamai and still had the problem. Here's a good request and response: * Connected to example.com (97.7.79.129) port 80 (#0) > POST /claim/ HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > Referer: http://example.com/apps/ > Accept-Encoding: gzip,deflate > Content-Length: 29 > Content-Type: application/x-www-form-urlencoded > } [data not shown] < HTTP/1.1 200 OK < Server: Apache/2 < Content-Language: en-us < Content-Encoding: identity < Content-Length: 47 < Content-Type: application/x-javascript < Connection: keep-alive < Vary: Accept-Encoding < { [data not shown] * Connection #0 to host example.com left intact * Closing connection #0 {"msg": "", "status": "OK", "printer_name": ""} And here's a bad one: * Connected to example.com (97.7.79.129) port 80 (#0) > POST /claim/ HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15 > Host: example.com > Accept: */* > Referer: http://example.com/apps/ > Accept-Encoding: gzip,deflate > Content-Length: 29 > Content-Type: application/x-www-form-urlencoded > } [data not shown] < HTTP/1.1 200 OK < Server: Apache/2 < Content-Language: en-us < Content-Encoding: identity < Content-Type: application/x-javascript < Content-Encoding: gzip < Content-Length: 59 < Connection: keep-alive < Vary: Accept-Encoding < X-N: S < { [data not shown] * Connection #0 to host example.com left intact * Closing connection #0 ?V?-NW?RPR?QP*.I,)-???A??????????T??Z? ??/ There are two things to notice about the bad response: It has two Content-Encoding headers, and the browsers seem to use the first. So they see an identity encoding header, and gzipped content, so they can't interpret the response. The bad response has an extra "X-N: S" header. Perhaps if I could find out what intermediary adds "X-N: S" headers to responses, I could track down the culprit...
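Since the stack behind this is Django on Apache, one way to chase the intermittency is to hammer the endpoint from a script and log only the responses whose headers look wrong. A rough Python sketch (host, path and payload are placeholders; http.client preserves duplicate headers, unlike most higher-level libraries that merge them):

    # Hedged sketch: repeatedly POST and flag responses with a duplicate
    # Content-Encoding header or the mystery "X-N: S" header.
    import http.client

    HOST, PATH = "example.com", "/claim/"          # placeholders
    BODY = "printer_name=&code=ABC123"             # placeholder payload
    HEADERS = {
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept-Encoding": "gzip,deflate",
    }

    bad = 0
    for i in range(1000):
        conn = http.client.HTTPConnection(HOST, 80, timeout=10)
        conn.request("POST", PATH, BODY, HEADERS)
        resp = conn.getresponse()
        pairs = resp.getheaders()                  # list of (name, value); duplicates kept
        resp.read()
        conn.close()
        encodings = [v for k, v in pairs if k.lower() == "content-encoding"]
        if len(encodings) > 1 or any(k.lower() == "x-n" for k, v in pairs):
            bad += 1
            print(i, pairs)
    print("mangled responses:", bad, "out of 1000")

Correlating the flagged responses with time of day and with whichever backend served them should narrow down whether the load balancer, Akamai, or one Apache instance is rewriting the headers.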

    Read the article

  • Using Sitecore RenderingContext Parameters as MVC controller action arguments

    - by Kyle Burns
I have been working with the Technical Preview of Sitecore 6.6 on a project and have been for the most part happy with the way that Sitecore (which truly is an MVC implementation unto itself) has been expanded to support ASP.NET MVC. That said, getting up to speed with the combined platform has not been entirely without stumbles and today I want to share one area where Sitecore could have really made things shine from the "it just works" perspective. A couple of days ago I was asked by a colleague about the usage of the "Parameters" field that is defined on Sitecore's Controller Rendering data template. Based on the standard way that Sitecore handles a field named Parameters, I was able to deduce that the field expected key/value pairs separated by the "&" character, but beyond that I wasn't sure and didn't see anything from a documentation perspective to guide me, so it was time to dig and find out where the data in the field was made available. My first thought was that it would be really nice if Sitecore handled the parameters in this field consistently with the way that ASP.NET MVC handles the various parameter collections on the HttpRequest object and automatically maps them to parameters of the action method executing. Being the hopeful sort, I configured a name/value pair on one of my renderings, added a parameter with a matching name to the controller action and fired up the debugger to see... that the parameter was not populated. Having established that the field's value was not going to be presented to me the way that I had hoped it would, the next assumption that I would work on was that Sitecore would handle this field similarly to how it handles other similar data and would plug it into some ambient object that I could reference from within the controller method. After a considerable amount of guessing, testing, and cracking code open with Redgate's Reflector (a must-have companion to Sitecore documentation), I found that the most direct way to access the parameter was through the ambient RenderingContext object using code similar to: string myArgument = string.Empty; var rc = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull; if (rc != null) {     var parms = rc.Rendering.Parameters;     myArgument = parms["myArgument"]; } At this point, we know how this field is used out of the box from Sitecore and can provide information from Sitecore's Content Editor that will be available when the controller action is executing, but it feels a little dirty. In order to properly test the action method I would have to do a lot of setup work and possibly use an isolation framework such as Pex and Moles to get at a value that my action method is dependent upon. Notice I said that my method is dependent upon the value but in order to meet that dependency I've accepted another dependency upon Sitecore's RenderingContext.  I'm a big believer in, when possible, ensuring that any piece of code explicitly advertises dependencies using the method signature, so I found myself still wanting this to work the same as if the parameters were in the request route, querystring, or form by being able to add a myArgument parameter to the action method and have this parameter populated by the framework. Lucky for us, the ASP.NET MVC framework is extremely flexible and provides some extensibility points that are easy to grok and use. 
ASP.NET MVC is able to provide information from the request as input parameters to controller actions because it uses objects which implement an interface called IValueProvider and have been registered to service the application. The most basic statement of responsibility for an IValueProvider implementation is "I know about some data which is indexed by key. If you hand me the key for a piece of data that I know about I give you that data". When preparing to invoke a controller action, the framework queries registered IValueProvider implementations with the name of each method argument to see if the ValueProvider can supply a value for the parameter. (the rest of this post will assume you're working along and make a lot more sense if you do) Let's pull Sitecore out of the equation for a second to simplify things and create an extremely simple IValueProvider implementation. For this example, I first create a new ASP.NET MVC3 project in Visual Studio, selecting "Internet Application" and otherwise taking defaults (I'm assuming that anyone reading this far in the post either already knows how to do this or will need to take a quick run through one of the many available basic MVC tutorials such as the MVC Music Store). Once the new project is created, go to the Index action of HomeController.  This action sets a Message property on the ViewBag to "Welcome to ASP.NET MVC!" and invokes the View, which has been coded to display the Message. For our example, we will remove the hard coded message from this controller (although we'll leave it just as hard coded somewhere else - this is sample code). For the first step in our exercise, add a string parameter to the Index action method called welcomeMessage and use the value of this argument to set the ViewBag.Message property. The updated Index action should look like: public ActionResult Index(string welcomeMessage) {     ViewBag.Message = welcomeMessage;     return View(); } This represents the entirety of the change that you will make to either the controller or view.  If you run the application now, the home page will display and no message will be presented to the user because no value was supplied to the Action method. Let's now write a ValueProvider to ensure this parameter gets populated. We'll start by creating a new class called StaticValueProvider. When the class is created, we'll update the using statements to ensure that they include the following: using System.Collections.Specialized; using System.Globalization; using System.Web.Mvc; With the appropriate using statements in place, we'll update the StaticValueProvider class to implement the IValueProvider interface. The System.Web.Mvc library already contains a pretty flexible dictionary-like implementation called NameValueCollectionValueProvider, so we'll just wrap that and let it do most of the real work for us. 
The completed class looks like: public class StaticValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider;     public StaticValueProvider(ControllerContext controllerContext)     {         var parameters = new NameValueCollection();         parameters.Add("welcomeMessage", "Hello from the value provider!");         _wrappedProvider = new NameValueCollectionValueProvider(parameters, CultureInfo.InvariantCulture);     }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } Notice that the only entry in the collection matches the name of the argument to our HomeController's Index action.  This is the important "secret sauce" that will make things work. We've got our new value provider now, but that's not quite enough to be finished. Mvc obtains IValueProvider instances using factories that are registered when the application starts up. These factories extend the abstract ValueProviderFactory class by initializing and returning the appropriate implementation of IValueProvider from the GetValueProvider method. While I wouldn't do so in production code, for the sake of this example, I'm going to add the following class definition within the StaticValueProvider.cs source file: public class StaticValueProviderFactory : ValueProviderFactory {     public override IValueProvider GetValueProvider(ControllerContext controllerContext)     {         return new StaticValueProvider(controllerContext);     } } Now that we have a factory, we can register it by adding the following line to the end of the Application_Start method in Global.asax.cs: ValueProviderFactories.Factories.Add(new StaticValueProviderFactory()); If you've done everything right to this point, you should be able to run the application and be presented with the home page reading "Hello from the value provider!". Now that you have the basics of the IValueProvider down, you have everything you need to enhance your Sitecore MVC implementation by adding an IValueProvider that exposes values from the ambient RenderingContext's Parameters property. I'll provide the code for the IValueProvider implementation (which should look VERY familiar) and you can use the work we've already done as a reference to create and register the factory: public class RenderingContextValueProvider : IValueProvider {     private NameValueCollectionValueProvider _wrappedProvider = null;     public RenderingContextValueProvider(ControllerContext controllerContext)     {         var collection = new NameValueCollection();         var rc = RenderingContext.CurrentOrNull;         if (rc != null && rc.Rendering != null)         {             foreach(var parameter in rc.Rendering.Parameters)             {                 collection.Add(parameter.Key, parameter.Value);             }         }         _wrappedProvider = new NameValueCollectionValueProvider(collection, CultureInfo.InvariantCulture);         }     public bool ContainsPrefix(string prefix)     {         return _wrappedProvider.ContainsPrefix(prefix);     }     public ValueProviderResult GetValue(string key)     {         return _wrappedProvider.GetValue(key);     } } In this post I've discussed the MVC IValueProvider used to map data to controller action method arguments and how this can be integrated into your Sitecore 6.6 MVC solution.
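To wire that provider up, the matching factory and registration follow the same pattern as the StaticValueProviderFactory shown earlier. A sketch (the factory class name is mine, not a Sitecore type):

    public class RenderingContextValueProviderFactory : ValueProviderFactory
    {
        public override IValueProvider GetValueProvider(ControllerContext controllerContext)
        {
            // Hand back the Sitecore-aware provider defined above
            return new RenderingContextValueProvider(controllerContext);
        }
    }

    // Registered once at startup, at the end of Application_Start in Global.asax.cs:
    // ValueProviderFactories.Factories.Add(new RenderingContextValueProviderFactory());

With that in place, a rendering parameter such as myArgument is bound to a controller action argument of the same name, just as route, querystring, and form values are.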

    Read the article

  • Trouble with site-to-site OpenVPN & pfSense not passing traffic

    - by JohnCC
I'm trying to get an OpenVPN tunnel going on pfSense 1.2.3-RELEASE running on embedded routers. I have a local LAN 10.34.43.0/24. The remote LAN is 10.200.1.0/24. The local pfSense is configured as the client, and the remote is configured as the server. My OpenVPN tunnel is using the IP range 10.99.89.0/24 internally. There are also some additional LANs on the remote side routed through the tunnel, but the issue is not with those since my connectivity fails before that point in the chain. The tunnel comes up fine and the logs look healthy. What I find is this:- I can ping and telnet to the remote LAN and the additional remote LANs from the local pfSense box's shell. I cannot ping or telnet to any remote LANs from the local network. I cannot ping or telnet to the local network from the remote LAN or the remote pfSense box's shell. If I tcpdump the tun interfaces on both sides and ping from the local LAN, I see the packets hit the tunnel locally, but they do not appear on the remote side (nor do they appear on the remote LAN interface if I tcpdump that). If I tcpdump the tun interfaces on both sides and ping from the local pfSense shell, I see the packets hit the tunnel locally, and exit the remote side. I can also tcpdump the remote LAN interface and see them pass there too. If I tcpdump the tun interfaces on both sides and ping from the remote pfSense shell, I see the packets hit the remote tun but they do not emerge from the local one. Here is the config file the remote side is using:-

#user nobody
#group nobody
daemon
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
dev tun
proto udp
cipher BF-CBC
up /etc/rc.filter_configure
down /etc/rc.filter_configure
server 10.99.89.0 255.255.255.0
client-config-dir /var/etc/openvpn_csc
push "route 10.200.1.0 255.255.255.0"
lport <port>
route 10.34.43.0 255.255.255.0
ca /var/etc/openvpn_server0.ca
cert /var/etc/openvpn_server0.cert
key /var/etc/openvpn_server0.key
dh /var/etc/openvpn_server0.dh
comp-lzo
push "route 205.217.5.128 255.255.255.224"
push "route 205.217.5.64 255.255.255.224"
push "route 165.193.147.128 255.255.255.224"
push "route 165.193.147.32 255.255.255.240"
push "route 192.168.1.16 255.255.255.240"
push "route 192.168.2.16 255.255.255.240"

Here is the local config:-

writepid /var/run/openvpn_client0.pid
#user nobody
#group nobody
daemon
keepalive 10 60
ping-timer-rem
persist-tun
persist-key
dev tun
proto udp
cipher BF-CBC
up /etc/rc.filter_configure
down /etc/rc.filter_configure
remote <host> <port>
client
lport 1194
ifconfig 10.99.89.2 10.99.89.1
ca /var/etc/openvpn_client0.ca
cert /var/etc/openvpn_client0.cert
key /var/etc/openvpn_client0.key
comp-lzo

You can see the relevant parts of the routing tables extracted from pfSense here: http://pastie.org/5365800 The local firewall permits all ICMP from the LAN, and my PC is allowed everything to anywhere. The remote firewall treats its LAN as trusted and permits all traffic on that interface. Can anyone suggest why this is not working, and what I could try next?

    Read the article

  • Exception Handling Differences Between 32/64 Bit

    - by Alois Kraus
    I do quite a bit of debugging .NET applications but from time to time I see things that are impossible (at a first look). I may ask you dear reader what your mental exception handling model is. Exception handling is easy after all right? Lets suppose the following code:         private void F1(object sender, EventArgs e)         {             try             {                 F2();             }             catch (Exception ex)             {                 throw new Exception("even worse Exception");             }           }           private void F2()         {             try             {                 F3();             }             finally             {                 throw new Exception("other exception");             }         }           private void F3()         {             throw new NotImplementedException();         }   What will the call stack look like when you break into the catch(Exception) clause in Windbg (32 and 64 bit on .NET 3.5 SP1)? The mental model I have is that when an exception is thrown the stack frames are unwound until the catch handler can execute. An exception does propagate the call chain upwards.   So when F3 does throw an exception the control flow will resume at the finally handler in F2 which does throw another exception hiding the original one (that is nasty) and then the new Exception will be catched in F1 where the catch handler is executed. So we should see in the catch handler in F1 as call stack only the F1 stack frame right? Well lets try it out in Windbg. For this I created a simple Windows Forms application with one button which does execute the F1 method in its click handler. When you compile the application for 64 bit and the catch handler is reached you will find with the following commands in Windbg   Load sos extension from the same path where mscorwks was loaded in the current process .loadby sos mscorwks   Beak on clr exceptions sxe clr   Continue execution g   Dump mixed call stack container C++  and .NET Stacks interleaved 0:000> !DumpStack OS Thread Id: 0x1d8 (0) Child-SP         RetAddr          Call Site 00000000002c88c0 000007fefa68f0bd KERNELBASE!RaiseException+0x39 00000000002c8990 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295 00000000002c8a60 000007ff005dd7f4 mscorwks!JIT_Throw+0x130 00000000002c8c10 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0xb4 00000000002c8c60 000007fefa661012 mscorwks!ExceptionTracker::CallHandler+0x145 00000000002c8d60 000007fefa711a72 mscorwks!ExceptionTracker::CallCatchHandler+0x9e 00000000002c8df0 0000000077b055cd mscorwks!ProcessCLRException+0x25e 00000000002c8e90 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd 00000000002c8ec0 000007fefa637c1a ntdll!RtlUnwindEx+0x539 00000000002c9560 000007fefa711a21 mscorwks!ClrUnwindEx+0x36 00000000002c9a70 0000000077b0554d mscorwks!ProcessCLRException+0x20d 00000000002c9b10 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd 00000000002c9b40 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb 00000000002ca220 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e 00000000002ca7e0 000007fefa68f0bd KERNELBASE!RaiseException+0x39 00000000002ca8b0 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295 00000000002ca980 000007ff005dd8df mscorwks!JIT_Throw+0x130 00000000002cab30 000007fefa6942e1 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x9f 00000000002cab80 000007fefa71b5b3 mscorwks!ExceptionTracker::CallHandler+0x145 00000000002cac80 
000007fefa70dcd0 mscorwks!ExceptionTracker::ProcessManagedCallFrame+0x683 00000000002caed0 000007fefa7119af mscorwks!ExceptionTracker::ProcessOSExceptionNotification+0x430 00000000002cbd90 0000000077b055cd mscorwks!ProcessCLRException+0x19b 00000000002cbe30 0000000077ae55f8 ntdll!RtlpExecuteHandlerForUnwind+0xd 00000000002cbe60 000007fefa637c1a ntdll!RtlUnwindEx+0x539 00000000002cc500 000007fefa711a21 mscorwks!ClrUnwindEx+0x36 00000000002cca10 0000000077b0554d mscorwks!ProcessCLRException+0x20d 00000000002ccab0 0000000077ae5d1c ntdll!RtlpExecuteHandlerForException+0xd 00000000002ccae0 0000000077b1fe48 ntdll!RtlDispatchException+0x3cb 00000000002cd1c0 000007fefdaeaa7d ntdll!KiUserExceptionDispatcher+0x2e 00000000002cd780 000007fefa68f0bd KERNELBASE!RaiseException+0x39 00000000002cd850 000007fefac42ed0 mscorwks!RaiseTheExceptionInternalOnly+0x295 00000000002cd920 000007ff005dd968 mscorwks!JIT_Throw+0x130 00000000002cdad0 000007ff005dd875 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F3()+0x48 00000000002cdb10 000007ff005dd786 WindowsFormsApplication1!WindowsFormsApplication1.Form1.F2()+0x35 00000000002cdb60 000007ff005dbe6a WindowsFormsApplication1!WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)+0x46 00000000002cdbc0 000007ff005dd452 System_Windows_Forms!System.Windows.Forms.Control.OnClick(System.EventArgs)+0x5a   Hm okaaay. I see my method F1 two times in this call stack. Looks like we did get some recursion bug. But that can´t be given the obvious code above. Let´s try the same thing in a 32 bit process.  0:000> !DumpStack OS Thread Id: 0x33e4 (0) Current frame: KERNELBASE!RaiseException+0x58 ChildEBP RetAddr  Caller,Callee 0028ed38 767db727 KERNELBASE!RaiseException+0x58, calling ntdll!RtlRaiseException 0028ed4c 68b9008c mscorwks!Binder::RawGetClass+0x20, calling mscorwks!Module::LookupTypeDef 0028ed5c 68b904ff mscorwks!Binder::IsClass+0x23, calling mscorwks!Binder::RawGetClass 0028ed68 68bfb96f mscorwks!Binder::IsException+0x14, calling mscorwks!Binder::IsClass 0028ed78 68bfb996 mscorwks!IsExceptionOfType+0x23, calling mscorwks!Binder::IsException 0028ed80 68bfbb1c mscorwks!RaiseTheExceptionInternalOnly+0x2a8, calling KERNEL32!RaiseExceptionStub 0028eda8 68ba0713 mscorwks!Module::ResolveStringRef+0xe0, calling mscorwks!BaseDomain::GetStringObjRefPtrFromUnicodeString 0028edc8 68b91e8d mscorwks!SetObjectReferenceUnchecked+0x19 0028ede0 68c8e910 mscorwks!JIT_Throw+0xfc, calling mscorwks!RaiseTheExceptionInternalOnly 0028ee44 68c8e734 mscorwks!JIT_StrCns+0x22, calling mscorwks!LazyMachStateCaptureState 0028ee54 68c8e865 mscorwks!JIT_Throw+0x1e, calling mscorwks!LazyMachStateCaptureState 0028eea4 02ffaecd (MethodDesc 0x7af08c +0x7d WindowsFormsApplication1.Form1.F1(System.Object, System.EventArgs)), calling mscorwks!JIT_Throw 0028eeec 02ffaf19 (MethodDesc 0x7af098 +0x29 WindowsFormsApplication1.Form1.F2()), calling 06370634 0028ef58 02ffae37 (MethodDesc 0x7a7bb0 +0x4f System.Windows.Forms.Control.OnClick(System.EventArgs))   That does look more familar. The call stack has been unwound and we do see only some frames into the history where the debugger was smart enough to find out that we have called F2 from F1. 
The exception handling on 64 bit systems does work quite differently which seems to have the nice property to remember the called methods not only during the first pass of exception filter clauses (during first pass all catch handler are called if they are going to catch the exception which is about to be thrown)  but also when the actual stack unwind has taken place. This makes it possible to follow not only the call stack right at the moment but also to look into the “history” of the catch/finally clauses. In a 64 bit process you only need to look at the ExceptionTracker to find out if a catch or finally handler was called. The two frames ProcessManagedCallFrame/CallHandler does indicate a finally clause whereas CallCatchHandler/CallHandler indicates a catch clause. That was a interesting one. Oh and by the way if you manage to load the Microsoft symbols you can also find out the hidden exception which. When you encounter in the call stack a line 0016eb34 75b79617 KERNELBASE!RaiseException+0x58 ====> Exception Code e0434f4d cxr@16e850 exr@16e838 Then it is a good idea to execute .exr 16e838 !analyze –v to find out more. In the managed world it is even easier since we can dump the objects allocated on the stack which have not yet been garbage collected to look at former method parameters. The command !dso which is the abbreviation for dump stack objects will give you 0:000> !dso OS Thread Id: 0x46c (0) ESP/REG  Object   Name 0016dd4c 020737f0 System.Exception 0016dd98 020737f0 System.Exception 0016dda8 01f5c6cc System.Windows.Forms.Button 0016ddac 01f5d2b8 System.EventHandler 0016ddb0 02071744 System.Windows.Forms.MouseEventArgs 0016ddc0 01f5d2b8 System.EventHandler 0016ddcc 01f5c6cc System.Windows.Forms.Button 0016dddc 020737f0 System.Exception 0016dde4 01f5d2b8 System.EventHandler 0016ddec 02071744 System.Windows.Forms.MouseEventArgs 0016de40 020737f0 System.Exception 0016de80 02071744 System.Windows.Forms.MouseEventArgs 0016de8c 01f5d2b8 System.EventHandler 0016de90 01f5c6cc System.Windows.Forms.Button 0016df10 02073784 System.SByte[] 0016df5c 02073684 System.NotImplementedException 0016e2a0 02073684 System.NotImplementedException 0016e2e8 01ed69f4 System.Resources.ResourceManager From there it is easy to do 0:000> !pe 02073684 Exception object: 02073684 Exception type: System.NotImplementedException Message: Die Methode oder der Vorgang sind nicht implementiert. InnerException: <none> StackTrace (generated):     SP       IP       Function     0016ECB0 006904AD WindowsFormsApplication2!WindowsFormsApplication2.Form1.F3()+0x35     0016ECC0 00690411 WindowsFormsApplication2!WindowsFormsApplication2.Form1.F2()+0x29     0016ECF0 0069038F WindowsFormsApplication2!WindowsFormsApplication2.Form1.F1(System.Object, System.EventArgs)+0x3f StackTraceString: <none> HResult: 80004001 to see the former exception. That´s all for today.

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: Last year we bought an IBM x3500 server with 2 Xeon E5410's, 9GB RAM, 6 HDDs. The original intent for this server was to replace the old exchange e-mail server. It was brought in, set up, and then shortly after we switched to gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle. This year I have budget again for one server, and of course I want to put this other server to work. I'm thinking about the best use for the two server, and I think I finally have a plan for what I want to do with them. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per host. I've done some load testing to pick out what servers to convert to virtual, and chose them so that each host will be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this. Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, and so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally, (if you're not bored already), we come to my questions: Am I missing anything big or obvious in this plan? I'm a little worried about network performance since the VM hosts will only have 4 nics total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it even too complicated? IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333mhz front side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase. This only brings a marginal increase in performance. The alternative to stay in budget is to take a hit on the fsb down to 800mhz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800mhz fsb dual core xeon I can configure and then buying the E5410's separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this - will it work or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want? I don't really intend to do this, but one option to save some money back is to omit the redundant power supply. Since my redundancy plan for these system is to switch over to a completely different host, the extra power isn't fully necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority. The role of self-signed certificates within a known community You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. An example would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required -- the major point simply that people will trust and import your certificate. How to distribute self-signed certificates for a known community There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are: Creating a public/private key pair for signing. Exporting your public certificate for others Importing your certificate onto machines that should trust you Verify work on a different machine Creating a public/private key pair for signing Having a public/private key pair will give you the ability both to sign items yourself and issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs.Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here: Generate the key pair.keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your name of something like "mykey." The sigalg of EC (Elliptical Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire. Two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores. Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.What is your first and last name?  [Unknown]:  First LastWhat is the name of your organizational unit?  
[Unknown]:  Line of BusinessWhat is the name of your organization?  [Unknown]:  MyCompanyWhat is the name of your City or Locality?  [Unknown]:  City NameWhat is the name of your State or Province?  [Unknown]:  CAWhat is the two-letter country code for this unit?  [Unknown]:  USIs CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yesEnter key password for <erikcostlow>        (RETURN if same as keystore password): Verify your work:keytool -list -keystore javakeystore_keepsecret.jksYou should see your new key pair. Exporting your public certificate for others Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you. keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file on an area that people within the known community should trust, such as an intranet page. Import the certificate onto machines that should trust you In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any band: email, phone, in-person, etc. Known networks can usually do this Determine the right keystore: For an individual user looking to trust another, the correct file is within that user’s directory.e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs For system-wide installations, Java’s Certificate Authorities are in JAVA_HOMEe.g. C:\Program Files\Java\jre8\lib\security\cacerts File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore. keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer In this case, I am still using my name for the alias because it’s easy for me to remember. You may also use an alias of your company name. Scaling distribution of the import The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines. Trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since last update. CACerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes. Verify work on a different machine Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set by: Create and sign the deployment rule set on the computer that holds the private key. 
Copy the deployment rule set on to the different machine where you have imported the signing certificate. Verify that the Java Control Panel’s security tab shows your deployment rule set. Verifying an individual JAR file or multiple JAR files You can test a certificate chain by using the jarsigner command. jarsigner -verify filename.jar If the output does not say "jar verified" then run the following command to see why: jarsigner -verify -verbose -certs filename.jar Check the output for the term “CertPath not validated.”
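If you want to double-check the import from code rather than from keytool, a small Java sketch along these lines can confirm that the certificate is present under the expected alias (the keystore path, password handling, and alias below are placeholders based on the examples above):

    // Hedged sketch: verify that a self-signed certificate was imported into a keystore.
    // Point the first argument at trusted.certs or cacerts as described above.
    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.Certificate;

    public class CheckImportedCert {
        public static void main(String[] args) throws Exception {
            String keystorePath = args[0];                                       // e.g. path to trusted.certs or cacerts
            char[] password = args.length > 1 ? args[1].toCharArray() : null;    // null is fine for read-only access

            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(keystorePath)) {
                ks.load(in, password);
            }

            Certificate cert = ks.getCertificate("erikcostlow");                 // alias used in the examples above
            System.out.println(cert != null
                    ? "Alias found:\n" + cert
                    : "Alias not found -- re-run the keytool -importcert step");
        }
    }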

    Read the article

  • Windows 8 Launch&ndash;Why OEM and Retailers Should STFU

    - by D'Arcy Lussier
    Microsoft has gotten a lot of flack for the Surface from OEM/hardware partners who create Windows-based devices and I’m sure, to an extent, retailers who normally stock and sell Windows-based devices. I mean we all know how this is supposed to work – Microsoft makes the OS, partners make the hardware, retailers sell the hardware. Now Microsoft is breaking the rules by not only offering their own hardware but selling them via online and through their Microsoft branded stores! The thought has been that Microsoft is trying to set a standard for the other hardware companies to reach for. Maybe. I hope, at some level, Microsoft may be covertly responding to frustrations associated with trusting the OEMs and Retailers to deliver on their part of the supply chain. I know as a consumer, I’m very frustrated with the Windows 8 launch. Aside from the Surface sales, there’s nothing happening at the retail level. Let me back up and explain. Over the weekend I visited a number of stores in hopes of trying out various Windows 8 devices. Out of three retailers (Staples, Best Buy, and Future Shop), not *one* met my expectations. Let me be honest with you Staples, I never really have high expectations from your computer department. If I need paper or pens, whatever, but computers – you’re not the top of my list for price or selection. Still, considering you flaunted Win 8 devices in your flyer I expected *something* – some sign of effort that you took the Windows 8 launch seriously. As I entered the 1910 Pembina Highway location in Winnipeg, there was nothing – no signage, no banners – nothing that would suggest Windows 8 had even launched. I made my way to the laptops. I had to play with each machine to determine which ones were running Windows 8. There wasn’t anything on the placards that made it obvious which were Windows 8 machines and which ones were Windows 7. Likewise, there was no easy way to identify the touch screen laptop (the HP model) from the others without physically touching the screen to verify. Horrible experience. In the same mall as the Staples I mentioned above, there’s a Future Shop. Surely they would be more on the ball. I walked in to the 1910 Pembina Highway location and immediately realized I would not get a better experience. Except for the sign by the front door mentioning Windows 8, there was *nothing* in the computer department pointing you to the Windows 8 devices. Like in Staples, the Win 8 laptops were mixed in with the Win 7 ones and there was nothing notable calling out which ones were running Win 8. I happened to hit up the St. James Street location today, thinking since its a busier store they must have more options. To their credit, they did have two staff members decked out in Windows 8 shirts and who were helping a customer understand Windows 8. But otherwise, there was nothing highlighting the Windows 8 devices and they were again mixed in with the rest of the Win 7 machines. Finally, we have the St. James Street Best Buy location here in Winnipeg. I’m sure Best Buy will have their act together. Nope, not even close. Same story as the others: minimal signage (there was a sign as you walked in with a link to this schedule of demo days), Windows 8 hardware mixed with the rest of the PC offerings, and no visible call-outs identifying which were Win 8 based. This meant that, like Future Shop and Staples, if you wanted to know which machine had Windows 8 you had to go and scrutinize each machine. 
Also, there was nothing identifying which ones were touch based and which were not. Just Another Day… To these retailers, it seemed that the Windows 8 launch was just another day, with another product to add to the showroom floor. Meanwhile, Apple has their dedicated areas *in all three stores*. It was dead simple to find where the Apple products were compared to the Windows 8 products. No wonder Microsoft is starting to push their own retail stores. No wonder Microsoft is trying to funnel orders through them instead of relying on these bloated retail big box stores who obviously can’t manage a product launch. It’s Not Just The Retailers… Remember when the Acer CEO, Founder, and President of Computer Global Operations all weighed in on how Microsoft releasing the Surface would have a “huge negative impact for the ecosystem and other brands may take a negative reaction”? Also remember the CEO stating “[making hardware] is not something you are good at so please think twice”? Well the launch day has come and gone, and so far Microsoft is the only one that delivered on having hardware available on the October 26th date. Oh sure, there are laptops running Windows 8 – but all in one desktop PCs? I’ve only seen one or two! And tablets are *non existent*, with some showing an early to late November availability on Best Buy’s website! So while the retailers could be doing more to make it easier to find Windows 8 devices, the manufacturers could help by *getting devices into stores*! That’s supposedly something that these companies are good at, according to the Acer CEO. So Here’s What the Retailers and Manufacturers Need To Do… Get Product Out The pivotal timeframe will be now to the end of November. We need to start seeing all these fantastic pieces of hardware ship – including the Samsung ATIV Smart PC Pro, the Acer Iconia, the Asus TAICHI 21, and the sexy Samsung Series 7 27” desktop. It’s not enough to see product announcements, we need to see actual devices. Make It Easy For Customers To Find Win8 Devices You want to make it easy to sell these things? Make it easy for people to find them! Have staff on hand that really know how these devices run and what can be done with them. Don’t just have a single demo day, have people who can demo it every day! Make It Easy to See the Features There’s touch screen desktops, touch screen laptops, tablets, non-touch laptops, etc. People need to easily find the features for each machine. If I’m looking for a touch-laptop, I shouldn’t need to sift through all the non-touch laptops to find them – at the least, I need to quickly be able to see which ones are touch. I feel silly even typing this because this should be retail 101 and I have no retail background (but I do have an extensive background as a customer). In Summary… Microsoft launching the Surface and selling them through their own channels isn’t slapping its OEM and retail partners in the face; its slapping them to wake the hell up and stop coasting through Windows launch events like they don’t matter. Unless I see some improvements from vendors and retailers in November, I may just hold onto my money for a Surface Pro even if I have to wait until early 2013. Your move OEM/Retailers. *Update – While my experience has been in Winnipeg, similar experiences have been voiced from colleagues in Calgary and Edmonton.

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases. The Rise of Laptops It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops. Benefits to PC Building The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, it’s a definite benefit. Downsides to Building Your Own PC It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning, the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement. Should You Still Build Your Own PC? 
Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user who uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr

    Read the article

  • 3D picking lwjgl

    - by Wirde
    I have written some code to perform 3D picking that for some reason doesn't work entirely correctly! (I'm using LWJGL, just so you know.) I posted this at stackoverflow at first, but after researching some more into my problem I found this neat site and thought that you guys might be more qualified to answer this question. This is what the code looks like: if(Mouse.getEventButton() == 1) { if (!Mouse.getEventButtonState()) { Camera.get().generateViewMatrix(); float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio(); float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f)); float displacementRate = (float)Math.tan(Camera.get().getFovy()/2); screenSpaceX *= displacementRate; screenSpaceY *= displacementRate; Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1); Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1); Matrix4f tmpView = new Matrix4f(); Camera.get().getViewMatrix().transpose(tmpView); Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert(); Vector4f worldSpaceNear = new Vector4f(); Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear); Vector4f worldSpaceFar = new Vector4f(); Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar); Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z); Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z); rayDirection.normalise(); Ray clickRay = new Ray(rayPosition, rayDirection); Vector tMin = new Vector(), tMax = new Vector(), tempPoint; float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f; Drawable closestDrawableHit = null; for(Drawable d : this.worldModel.getDrawableThings()) { // Calculate AABB for each object... needs to be moved later... firstVertex = true; for(Surface surface : d.getSurfaces()) { for(Vertex v : surface.getVertices()) { worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x; worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y; worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z; worldPosition = worldPosition.rotate(d.getRotation()); if (firstVertex) { maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z; minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z; firstVertex = false; } else { if (worldPosition.x > maxX) { maxX = worldPosition.x; } if (worldPosition.x < minX) { minX = worldPosition.x; } if (worldPosition.y > maxY) { maxY = worldPosition.y; } if (worldPosition.y < minY) { minY = worldPosition.y; } if (worldPosition.z > maxZ) { maxZ = worldPosition.z; } if (worldPosition.z < minZ) { minZ = worldPosition.z; } } } } // ray/slabs intersection test... 
// clickRay.getOrigin().x + clickRay.getDirection().x * f = minX // clickRay.getOrigin().x - minX = -clickRay.getDirection().x * f // clickRay.getOrigin().x/-clickRay.getDirection().x - minX/-clickRay.getDirection().x = f // -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x = f largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x; temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y; if(largestEnteringValue < temp) { largestEnteringValue = temp; } temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z; if(largestEnteringValue < temp) { largestEnteringValue = temp; } smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x; temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y; if(smallestExitingValue > temp) { smallestExitingValue = temp; } temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z; if(smallestExitingValue < temp) { smallestExitingValue = temp; } if(largestEnteringValue > smallestExitingValue) { //System.out.println("Miss!"); } else { if (largestEnteringValue < closestEnteringValue) { closestEnteringValue = largestEnteringValue; closestDrawableHit = d; } } } if(closestDrawableHit != null) { System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z); this.worldModel.removeDrawableThing(closestDrawableHit); } } } I just don't understand what's wrong: the ray is shooting and I do hit stuff that gets removed, but the result of the ray is very strange. It sometimes removes the thing I'm clicking at, sometimes it removes things that aren't even close to what I'm clicking at, and sometimes it removes nothing at all. Edit: Okay, so I have continued searching for errors, and by debugging the ray (by painting small dots where it travels) I can now see that there is something obviously wrong with the ray that I'm sending out... it has its origin near the world center (nearer or further away depending on where on the screen I'm clicking) and always shoots to the same position no matter where I direct my camera... My initial thought is that there might be some error in the way I calculate my viewMatrix (since it's not possible to get the view matrix from the gluLookAt method in lwjgl, I have to build it myself, and I guess that's where the problem is)... 
Edit2: This is how I calculate it currently: private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}}; public Vector getCameraDirectionVector() { Vector actualEye = this.getActualEyePosition(); return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z); } public Vector getActualEyePosition() { return eye.rotate(this.getRotation()); } public void generateViewMatrix() { Vector cameraDirectionVector = getCameraDirectionVector().normalize(); Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize(); Vector up = Vector.cross(side, cameraDirectionVector); viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x; viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y; viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z; /* Vector actualEyePosition = this.getActualEyePosition(); Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize(); Vector xaxis = Vector.cross(upVector, zaxis).normalize(); Vector yaxis = Vector.cross(zaxis, xaxis); viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x; viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y; viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z; viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] =-Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition); */ viewMatrix = new Matrix4f(); viewMatrix.load(getViewMatrixAsFloatBuffer()); } I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it... I have read a lot of threads and documentation about this but I can't seem to wrap my head around it... Edit3: Okay, with the help of Byte56 (thanks a lot for the help) I have now concluded that it's not the viewMatrix that is the problem... I still get the same messed-up result; if anyone thinks they can find the error in my code, please do, because I certainly can't, and I have been working on this for 3 days now :(

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don't Wait Much Longer

    - by Kerrie Foy
    Depending on severity, product compliance issues can cause all sorts of problems from run-away budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration and competitive advantage. If you’ve been putting off a systematic approach to product compliance, it is time to reconsider that decision, or indecision. Why now? No matter what industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS) is a regulation that restricts the use of six dangerous materials used in the manufacture of electronic and electrical equipment. RoHS was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway and Turkey. In addition, the RoHS directive allowed for material exemptions used in Medical Devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. Further evolving regulations are coming from governing bodies like the Food and Drug Administration (FDA) and the International Organization for Standardization (ISO). Corporate sustainability initiatives are also gaining urgency and influencing product design. In a survey of 405 corporations in the Global 500 by the Carbon Disclosure Project, co-written by PwC (CDP Global 500 Climate Change Report 2012 entitled Business Resilience in an Uncertain, Resource-Constrained World), 48% of the respondents indicated they saw potential to create new products and business services as a response to climate change. Just 21% reported a dedicated budget for the research. However, the report goes on to explain that those few companies are winning over new customers and driving additional profits by exploiting their abilities to adapt to environmental needs. The article cites Dell as an example – Dell has invested in research to develop new products designed to reduce its customers’ emissions by more than 10 million metric tons of CO2e per year. This reduction in emissions should save Dell’s customers over $1 billion per year as a result! Over time we expect to see many additional companies prove that eco-design provides marketplace benefits through differentiation and direct customer value. How do you meet compliance requirements and also successfully invest in eco-friendly designs? No doubt companies struggle to answer this question. After all, the journey to get there may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies and compliance processes per the rapidly evolving global and regional directives. There may be limited executive focus on the initiative, inability to quantify noncompliance, or not enough resources to justify investment. To make things even more difficult to address, compliance responsibility can be a passionate topic within an organization, making the prospect of change on an enterprise scale problematic and time-consuming. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming to an organization. 
With all the overhead, certain markets or demographics become simply inaccessible. Therefore, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and even thrive in today’s highly regulated and transparent environment by implementing systematic approaches to product compliance that are not merely functional bandages but revenue-generating engines. Consider partnering with Oracle to help you address your compliance needs. Many of the world’s most innovative leaders and pioneers are leveraging Oracle’s Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster. In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C makes it possible to efficiently manage compliance per corporate green initiatives as well as regional and global directives. Options are critical, but so is ease of use. Anyone who’s grappled with compliance policy knows legal interpretation plays a major role in determining how an organization responds to regulation. Agile PG&C gives you the freedom to configure product compliance per your needs, while maintaining rigorous control over the product record in an easy-to-use interface that facilitates adoption efforts. It allows you to assign regulations as specifications for a part or BOM roll-up. Each specification has a threshold value that alerts you to a non-compliance issue if the threshold value is exceeded. Assign as many regulations as you need as specifications to make sure a product can be sold in your target countries. Another option is to do what one of our leading consumer electronics customers did and define your own “catch-all” specification to ensure compliance in all markets. You can give your suppliers secure access to enter their component data or integrate a third party’s data. With Agile PG&C you are able to design compliance earlier into your products to reduce cost and improve quality downstream when stakes are higher. Agile PG&C is a comprehensive solution that makes product compliance more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution. Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise, systematic approach to product compliance is a competitive investment. From the start, Agile Product Governance & Compliance enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation. Don’t wait any longer! To find out more about Agile Product Governance & Compliance, download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738. Many thanks to Shane Goodwin, Senior Manager, Oracle Agile PLM Product Management, for contributions to this article. 

    Read the article

  • SQL SERVER – Create a Very First Report with the Report Wizard

    - by Pinal Dave
    This example is from the book Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. What is the Report Wizard? In today’s world, automation is all around you. Henry Ford began building his Model T automobiles on a moving assembly line a century ago and changed the world. The moving assembly line allowed Ford to build identical cars quickly and cheaply. Henry Ford said in his autobiography “Any customer can have a car painted any color that he wants so long as it is black.” Today you can buy a car straight from the factory with your choice of several colors and with many options like backup cameras, built-in navigation systems and heated leather seats. The assembly lines now use robots to perform some tasks along with human workers. When you order your new car, if you want something special, not offered by the manufacturer, you will have to find a way to add it later. In computer software, we also have “assembly lines” called wizards. A wizard will ask you a series of questions, often branching to specific questions based on earlier answers, until you get to the end of the wizard. These wizards are used for many things, from something simple like setting up a rule in Outlook to performing administrative tasks on a server. Often, a wizard will get you part of the way to the end result, enough to get much of the tedious work out of the way. Once you get the product from the wizard, if the wizard is not capable of doing something you need, you can tweak the results. Create a Report with the Report Wizard Let’s get started with your first report!  Launch SQL Server Data Tools (SSDT) from the Start menu under SQL Server 2012. Once SSDT is running, click New Project to launch the New Project dialog box. On the left side of the screen expand Business Intelligence and select Reporting Services. Configure the project properties. Be sure to select Report Server Project Wizard as the type of report and to save the project in the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Project folder. Click OK and wait for the Report Wizard to launch. Click Next on the Welcome screen.  On the Select the Data Source screen, make sure that New data source is selected. Type JProCo as the data source name. Make sure that Microsoft SQL Server is selected in the Type dropdown. Click Edit to configure the connection string on the Connection Properties dialog box. If your SQL Server database server is installed on your local computer, type in localhost for the Server name and select the JProCo database from the Select or enter a database name dropdown. Click OK to dismiss the Connection Properties dialog box. Check Make this a shared data source and click Next. On the Design the Query screen, you can use the query builder to build a query if you wish. Since this post is not meant to teach you T-SQL queries, you will copy all queries from files that have been provided for you. In the C:\Joes2Pros\SSRSCompanionFiles\Chapter3\Resources folder open the sales by employee.sql file. Copy and paste the code from the file into the Query string Text Box. Click Next. On the Select the Report Type screen, choose Tabular and click Next. On the Design the Table screen, you have to figure out the groupings of the report. How do you do this? Well, you often need to know a bit about the data and report requirements. I often draw the report out on paper first to help me determine the groups. In the case of this report, I could group the data several ways. 
Do I want to see the data grouped by Year and Month? Do I want to see the data grouped by Employee or Category? The only thing I know for sure about this ahead of time is that the TotalSales goes in the Details section. Let’s assume that the CIO asked to see the data grouped first by Year and Month, then by Category. Let’s move the fields to the right-hand side. This is done by selecting Page >, Group >, or Details > for each field, and then clicking Next. On the Choose the Table Layout screen, select Stepped and check Include subtotals and Enable drilldown. On the Choose the Style screen, choose any color scheme you wish (unlike the Model T) and click Next. I chose the default, Slate. On the Choose the Deployment Location screen, change the Deployment folder to Chapter 3 and click Next. At the Completing the Wizard screen, name your report Employee Sales and click Finish. After clicking Finish, the report and a shared data source will appear in the Solution Explorer and the report will also be visible in Design view. Click the Preview tab at the top. This report expects the user to supply a year, which the report will then use as a filter. Type in a year between 2006 and 2013 and click View Report. Click the plus sign next to the Sales Year to expand the report to see the months, then expand again to see the categories and finally the details. You now have the assembly line report completed, and you probably already have some ideas on how to improve the report. Tomorrow’s Post Tomorrow’s blog post will show how to create your own data sources and data sets in SSRS. If you want to learn SSRS in simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS
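A note on the year prompt mentioned above: the Report Wizard turns any query parameter it finds in the dataset query into a report parameter automatically, which is why the preview asks for a year. The sales by employee.sql file itself is not reproduced in this excerpt, so the following is only a hypothetical sketch of a parameterized query with the same general shape (the table and column names are illustrative assumptions, not the actual JProCo schema):

    -- Hypothetical dataset query; SSRS turns @SalesYear into a report parameter prompt.
    SELECT YEAR(s.SalesDate)            AS SalesYear,
           DATENAME(MONTH, s.SalesDate) AS SalesMonth,
           p.Category,
           e.LastName                   AS Employee,
           SUM(s.SalesAmount)           AS TotalSales
    FROM   dbo.Sales AS s
           JOIN dbo.Employee AS e ON e.EmployeeID = s.EmployeeID
           JOIN dbo.Product  AS p ON p.ProductID  = s.ProductID
    WHERE  YEAR(s.SalesDate) = @SalesYear
    GROUP BY YEAR(s.SalesDate), DATENAME(MONTH, s.SalesDate), p.Category, e.LastName;

Any query with roughly that structure gives the wizard the SalesYear, SalesMonth, and Category fields to group on, and TotalSales for the Details section.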

    Read the article

  • Add a small RAID card? Will it help overall stability and performance of my nine hard drives?

    - by Ray
    Hi, Will I get any genuine added performance and RAID stability if I insert a basic RAID card into a PCI-E x1 slot? I am considering the Adaptec 1220SA - 2-port SATA, PCI-Express (1x), RAID 0/1. OK, it only supports two SATA drives. The purpose is to help support the eight internal hard drives (1TB each), a DVD drive and an external e-SATA connected 2TB hard drive - by dealing with two of the internal hard drives. My current configuration of eight internal 1TB Barracuda (7200.12) SATA hard drives, one external 2TB SATA Western Digital Green Drive (e-SATA) and one DVD drive can already be supported by the Intel P55 & JMicron controllers on the ASUS motherboard: the Intel P55 (controls six HDD; configured as three x RAID 1), and the JMicron (controls two HDD as one RAID 1, as well as the DVD drive and the external SATA drive via the motherboard's e-SATA port (controlled by the JMicron)). Bigger picture details: I have an ASUS motherboard designed for the LGA1156 type processor and it includes the Intel P55 Express Chipset and JMicron. I am using the Intel Core i7-870 processor, and have 8GB DDR3 (1333) memory (four x 2GB Corsair DIMMs). Enough overall power. The power supply is more than sufficient for the system. Corsair AX850. The system will never need the full 850 watts (future: second graphics card). The RAID card would provide hardware RAID 1 for two of the eight internal drives. It would either reduce the load on the Intel P55 firmware RAID support, or replace the JMicron controller's RAID 1 set. I am busy installing the above configuration using Windows 7 Ultimate 64-bit as the OS. The RAID card is a last minute addition to the plan. Is it worth spending the extra R700 - R900 on the Adaptec 1220SA, or equivalent RAID card? I cannot afford to spend yet another R2000 - R3000 on a RAID card that would support many SATA2 hard drives, with a better RAID level, for example RAID 5. My issue & assumption: I am trusting that the Intel P55 chipset can properly handle six drives, configured as three * RAID 1. I am assuming that the JMicron can handle, using its RED SATA ports, one RAID-1 (two HDDs). The DVD drive connects to the JMicron optical SATA port 1 (white port 1). White port 2 is not used. The e-SATA connection is from the JMicron straight to, and through the motherboard - to an on-board (rear panel) e-SATA port. Am I being a little hopeful in only using the on-board Intel P55 and the JMicron? Is it a waste of money to install a RAID card that handles two SATA2 drives? OR Is it wisdom to take the pressure a little off the Intel P55? Obviously I am interested in data security, hence RAID 1, not RAID Zero. RAID 5 would be nice. The CPU, the Intel Core i7-870, will provide the clout. Context for the nine drives: I am using virtualisation with Windows 7 Ultimate. Bootable VMs. The operating system gets a mirror. Loaded apps get a mirror. The current design data is kept in another mirror, and another mirror is for backup and/or VM territory. Then the external 2TB drive (via e-SATA) is the next layer of data security and then finally, I use off-site data security. Thanks.

    Read the article

  • SQL Sentry First Impressions

    - by AjarnMark
    After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server.  Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers.  The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession.  I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance.  Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?).  These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and more importantly had not degraded recently (i.e. historical comparisons).  I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools.  Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server.  Among other tools, I had been using Idera’s SQL Job Manager which is a free tool and did a nice job of showing me job schedules and histories in a nice calendar view.  This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat.  But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view.  Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager.  At the time, I just thought he worked there, but later found out that he founded the company.  When I took a quick look at the features & benefits, the one that really jumped out at me is Chaining and Queueing which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor.  And I must say that I really like what I see so far.  Here are a few highlights: Great Support.  I had two issues getting the trial setup and monitoring a handful of our servers.  
One of them was entirely my fault (missed a security setting in SQL 2008) and the other was mostly my fault (late change to some config settings that were apparently cached and did not get refreshed properly).  In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out what the cause and fix were for each of them.  This left me with a great impression of the company.  Kudos to them! Chaining and Queueing.  While I have not yet activated this feature, I am very excited about the possibilities.  We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that.  It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures, or worse, successful completion of completely invalid processing. Calendar View.  I really, really like the Event Manager calendar view where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity.  Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time. Performance Advisor Dashboard History View.  This view lets me quickly select a date and time range and it displays graphs of key SQL Server and Windows metrics.  This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post. Reporting Services Subscription Jobs with Report Name.  This was a big and VERY pleasant surprise.  If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job.  This is really ugly, and really annoying because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the Job itself, as to what Report that was for.  But with SQL Sentry Event Manager you do.  The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for.  And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information.  I did not expect this at all, but I sure do like it.  HOORAY! Those are just my first impressions from using the tools for a few days.  And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations.  I’ll share that lesson in another blog entry.  But I have to say it again, the combination of Event Manager and Performance Advisor working together has really made me a fan.
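As a point of comparison for that last highlight: without a tool that resolves the names for you, the usual workaround is to decode the GUID job names yourself by querying the Reporting Services catalog. Here is a minimal sketch, assuming the default ReportServer database name and the standard subscription tables (verify the schema against your own instance before relying on it):

    -- Map SSRS subscription SQL Agent jobs (named after ScheduleID GUIDs) back to report names.
    SELECT j.name         AS agent_job_name,
           c.[Name]       AS report_name,
           c.[Path]       AS report_path,
           su.Description AS subscription_description
    FROM   msdb.dbo.sysjobs AS j
           JOIN ReportServer.dbo.ReportSchedule AS rs ON j.name = CONVERT(nvarchar(128), rs.ScheduleID)
           JOIN ReportServer.dbo.Subscriptions  AS su ON su.SubscriptionID = rs.SubscriptionID
           JOIN ReportServer.dbo.[Catalog]      AS c  ON c.ItemID = su.Report_OID;

Having Event Manager surface the report name directly in the Jobs list saves exactly this kind of digging.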

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer and navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
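If you would like to see where the columns of a table such as dbo.Test actually ended up, the allocation unit types are visible through the index physical stats DMV. A quick sketch (run it in the database that holds the table; DETAILED mode reads every page, so keep it to small test tables):

    -- List in-row, row-overflow, and LOB allocation units for dbo.Test,
    -- with page and record counts for each.
    SELECT index_id,
           alloc_unit_type_desc,   -- IN_ROW_DATA, ROW_OVERFLOW_DATA, or LOB_DATA
           page_count,
           record_count
    FROM   sys.dm_db_index_physical_stats
           (DB_ID(), OBJECT_ID(N'dbo.Test'), NULL, NULL, 'DETAILED');

After the 450-row insert above, you would expect a ROW_OVERFLOW_DATA allocation unit to show up alongside IN_ROW_DATA, which is exactly where those surprise LOB logical reads come from.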
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi

    Read the article
