Search Results

Search found 1482 results on 60 pages for 'wild bill'.

Page 50/60

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6i integrated card. We're using Nagios at present and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of: /dev/megaraid_sas_ioctl_node This device node doesn't exist by default, you have to create it by hand using something like: mknod /dev/megaraid_sas_ioctl_node c 253 0 Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual, I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local/rules that contained: DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600" But this doesn't work - no device node after a reboot. Dmesg output indicates the megasas driver is loaded and functional: megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008 megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0 megasas: FW now in Ready state Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local... So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers Steve
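
    For reference, one possible shape for such a rule is sketched below. This is a hedged, untested outline, not a verified answer: module uevents show up with SUBSYSTEM=="module" and the module name as the kernel name (which may be "megaraid_sas" rather than "megasas"), and the hard-coded 253/0 major/minor should be checked against udevadm info and /proc/devices on the actual host, since udev's NAME= key only renames nodes for devices the kernel itself announces.

        # /etc/udev/rules.d/99-megaraid.rules  (hypothetical)
        # Fall back to an explicit mknod via RUN+= when the module appears,
        # because no kernel device node is announced for the ioctl interface.
        ACTION=="add", SUBSYSTEM=="module", KERNEL=="megaraid_sas", \
            RUN+="/bin/mknod /dev/megaraid_sas_ioctl_node c 253 0"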

    Read the article

  • Transferring domain from one registrar to another

    - by Macha
    I have a domain from my old web host, which was free with my hosting account. After a few years, I am moving to a VPS. Most of my other domains were registered with Namecheap, so it was just a matter of changing a few DNS records. However, given that my old host does not provide me with a DNS control panel, and I don't want to be paying a full hosting bill for just domains, I'm now looking into transferring it. My old host says there will be a charge of $15 to them. NameCheap's page seems to imply you don't need the current registrar to do anything, but it also seems to be based on sending an email to the one listed in whois. Of course, my old host have whoisguard on the domain so the only email on it is [email protected] (and not a unique [email protected], just [email protected]) which doesn't go to me. Again, there doesn't seem to be an option to disable this. So, is it a case of paying my old host's fee, and paying again for the domain from NameCheap, or is there some other way to transfer my domain? (I'm not really sure which of the trilogy sites this is best for.)

    Read the article

  • Linux file server for an inexperienced admin

    - by Pat
    A charity I volunteer for wants a file server for their mostly Windows machines (about five XP and 7 machines, with some Mac laptops every now and then). For the server, I have a PC with an Intel Core 2 Duo 3GHz proc, 4GB of DDR2 400MHz RAM, and a 500 GB HDD. (I should point out that they do not currently have any server - they are just using Windows to share a folder on one of the PCs.) What is a linux distro that is easy to configure for Windows file serving yet stable and secure enough to protect sensitive data without an expert sysadmin? I'm guessing that a Debian distro would probably fit the security bill, but I don't know of any tailored to novice sysadmins. Also, are there any killer apps for making this easy to administer and set up (as a Windows file server, in particular - this answer is a good example)? Would FreeNAS be sufficient? Once it's all set up, what are the minimum measures I need to take to keep the data secure? I found this somewhat helpful answer, but it's not specific to my question of just getting a secure file server up, running, and maintained.
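
    Purely as an illustration of how small the Windows-sharing side can be on any mainstream distro, a minimal Samba share definition might look like the sketch below; the paths, group name, and workgroup are placeholders, and a real deployment still needs users added with smbpasswd, backups, and regular patching.

        [global]
            workgroup = WORKGROUP
            server string = Charity file server
            security = user
            map to guest = Bad User

        [shared]
            path = /srv/share
            valid users = @staff
            read only = no
            create mask = 0660
            directory mask = 0770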

    Read the article

  • Recommendation for a simple no-frills Windows PDF printer driver?

    - by Scott Bussinger
    I'm looking for an extremely simple Windows PDF printer driver that I can recommend to clients. Ideally it would have these characteristics: When you print something, it should just create it as a temporary file and then display it in their default PDF viewer with no prompting. If they want to save it, they can save it manually from inside the viewer. This workflow should be with no special post-install configuration. Installation should be very simple. A double click the installation program and click "Finish" sort of thing. No complicated multi-step installation, no asking questions your grandmother wouldn't know the answer to (preferably no questions at all), no extra crap being installed. An option for a completely silent installation would be nice, but not necessary. Ideally it would be free to simplify their installing on a small network, but low cost is an option. I've tried a quite a few but none really fit the bill. Some can achieve the first goal but only after careful configuration, some try to install extra toolbars, some have other installation complexities that would make it hard for extremely novice users to succeed. Any suggestions? Thanks!

    Read the article

  • Custom PCI bracket with support, for custom PCB?

    - by newbiez
    I am considering to put a custom PCB card that I made, into my computer. It won't go on any PCI connector, it plugs in on a USB connector on the motherboard, via a ribbon cable. I need thou to plug a device to it; which means that either I leave the PCB outside the case, hanging by the ribbon (bad idea), or I could put it in a PCI slot, using a bracket. The issue is that the brackets that I have, do not have tabs, so I have no way to screw the PCB on them. I was hoping to find something that would allow me to put the PCB on it, and then just fit it in the PCI bracket opening, like this: http://www.idotpc.com/TheStore/pc/viewPrd.asp?idproduct=1203&idcategory=0 This one won't fit the bill since the holes are too close apart, compared to the one that I have already on the PCB (and can't make more holes). Do you know if there is a place where they make universal PCI bracket mounting systems for custom PCB? I just need one, so can't even order a custom one (they ask me 120 dollars for one). Thanks in advance!

    Read the article

  • Verizon overbilling me?

    - by bek
    I have a Verizon air card for my internet, which is called VZAccess Manager. I have had it for about a year and kept within my usage allowance the whole time until 3 months ago, when my bill came in and my 5,000 allowance had been run up to 18,000. It has continued to do so through the last 3 months. My usage reset itself yesterday for the new month. I got on Google for about 5 minutes and chatted online for about an hour yesterday. Mind you, I have never done anything any different than I do every day; I do not download music or watch videos on my computer, nothing like that. Yet since last night at 8 pm my usage already says 1109.525 GB, for being on the internet for an hour with no downloads. What could be causing this? Please throw me some ideas. Verizon is checking on the problem, but that usually doesn't get me the answer I want. Could someone be hacking my card and using the internet through it? I've been told that is not possible, but I think with the internet anything is possible.

    Read the article

  • Improved appointment rendering in RadScheduler for ASP.NET AJAX, Q1 2010

    Now that the Q1 2010 release is out in the wild, we can sit down and discuss some of the changes we decided to make in the new release. One of them is the new appointment rendering of RadScheduler - a potentially breaking change, but a much needed one. If you have problems with your old custom skins, include the old base stylesheet along with your RadScheduler and set EnableEmbeddedBaseStylesheet=false in your RadScheduler. You can find the said base stylesheet attached to this post.
    While trying to improve the performance of RadScheduler, I noticed that the number of resources slows down the rendering and overall performance considerably. This had to be expected - the images to support the appointment rounded corners (and the predefined resources) were quite large. However, I didn't take into account that all browsers, for performance reasons, keep their images uncompressed in memory and at the color depth of the current desktop. A simple calculation later I discovered that the appointment sprite itself takes 25MB of memory when loaded. Add 5 resources to the fray and you have 150MB of memory down with a single blow. As it turns out, a sprite image is not a panacea - if it gets too big, don't be afraid to break it in two. The loading time may suffer, but your browser suffers more while rendering a 25MB monster.
    First I thought of undertaking the aforementioned solution - breaking the appointment sprite in two and thus reducing the two appointment sprites to a mere 2MB uncompressed. Then I thought - the rounded corners are small - I can use borders and backgrounds to simulate rounded appointment borders while still keeping the same HTML structure. The gradients can be done with a single 10x50px image, plus we have a gain - border colors and backgrounds can be changed on the fly. I started with five rendering elements at first, then tried with four and finally settled on only three elements. Behold the new appointment rendering (quite simple really): on the left you can see that the first container has only top and bottom borders and a background. In fact, the background isn't even needed since it will be obscured by the elements on top of it. The whole first container is only needed for the four dots that reside in the four corners of the appointment. On top of this container is another one that holds the left and right borders and a slightly lighter background to create the illusion of a second lighter border beside the other two. At last, on top of all others is placed the text container that also holds the top and bottom borders and the gradient background. On the right you can see the final result - I'm quite happy with it and I hope you will be too.
    After creating the new rendering we took another step further - we decided to use alpha gradients for the resource rendering, thus supporting any color appointments with rounded corners and gradients. You can see some examples below. We plan on adding BorderColor and BackColor properties to the ResourceStyles definitions for Q1 SP1. However, with the new rendering in Q1 2010 we do support BackColor and BorderColor appointment properties - you only need to set AppointmentStyleMode=Default to keep RadScheduler from switching to Simple appointment rendering. Here is one screenshot of RadScheduler with appointments set to different colors. I hope that you will enjoy working with the new appointments in RadScheduler. RadScheduler base stylesheet
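
    To make the three-element trick above concrete, here is a rough, purely illustrative CSS sketch of stacked containers whose borders overlap to fake rounded corners and a doubled border; the class names and colors are made up and are not the actual RadScheduler markup. (The 25MB figure quoted above is roughly what a sprite on the order of 2500x2500 pixels costs at 4 bytes per pixel.)

        /* hypothetical class names, not Telerik's */
        .apt      { border-top: 1px solid #35496a; border-bottom: 1px solid #35496a;
                    background: #35496a; }     /* supplies the four corner "dots" */
        .aptInner { border-left: 1px solid #35496a; border-right: 1px solid #35496a;
                    background: #d8e3f3; }     /* lighter second-border illusion */
        .aptText  { border-top: 1px solid #ffffff; border-bottom: 1px solid #ffffff;
                    background: #eef3fa url(gradient-10x50.png) repeat-x;
                    padding: 2px 4px; }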

    Read the article

  • Month in Geek: December 2010 Edition

    - by Asian Angel
    As 2010 draws to a close, we have gathered together another great batch of article goodness for your reading enjoyment. Here are our ten hottest articles for December. Note: Articles are listed as #10 through #1. The 50 Best How-To Geek Windows Articles of 2010 Even though we cover plenty of other topics, Windows has always been a primary focus around here, and we’ve got one of the largest collections of Windows-related how-to articles anywhere. Here’s the fifty best Windows articles that we wrote in 2010. Read the article Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition] As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus size edition of Happy New Year wallpapers for your desktops. Read the article LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology With image technology progressing faster than ever, High-Def has become the standard, giving TV buyers more options at cheaper prices. But what’s different in all these confusing TVs, and what should you know before buying one? Read the article HTG Explains: Which Linux File System Should You Choose? File systems are one of the layers beneath your operating system that you don’t think about—unless you’re faced with the plethora of options in Linux. Here’s how to make an educated decision on which file system to use. Read the article Desktop Fun: Merry Christmas Fonts Christmas will soon be here and there are lots of cards, invitations, gift tags, photos, and more to prepare beforehand. To help you get ready we have gathered together a great collection of fun holiday fonts to help turn those ordinary looking holiday items into extraordinary looking ones. Read the article Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now. Microsoft’s Security Essentials has been our favorite anti-malware application for a while—it’s free, unobtrusive, and it doesn’t slow your PC down, but now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Read the article 20 OS X Keyboard Shortcuts You Might Not Know Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known OS X shortcuts to help you become a keyboard ninja. Read the article 20 Windows Keyboard Shortcuts You Might Not Know Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known Windows shortcuts to help you become a keyboard ninja. Read the article The 50 Best Registry Hacks that Make Windows Better We’re big fans of hacking the Windows Registry around here, and we’ve got one of the biggest collections of registry hacks you’ll find. Don’t believe us? Here’s a list of the top 50 registry hacks that we’ve covered. Read the article The Complete List of iPad Tips, Tricks, and Tutorials The Apple iPad is an amazing tablet, and to help you get the most out of it, we’ve put together a comprehensive list of every tip, trick, and tutorial for you. Read on for more. 
Read the article

    Read the article

  • Dynamic Class Inheritance For PHP

    - by VirtuosiMedia
    I have a situation where I think I might need dynamic class inheritance in PHP 5.3, but the idea doesn't sit well and I'm looking for a different design pattern to solve my problem if it's possible. Use Case I have a set of DB abstraction layer classes that dynamically compiles SQL queries, with one DAL class for each DB type (MySQL, MsSQL, Oracle, etc.). Each table in the database has its own class that extends the appropriate DAL class. The idea is that you interact with the table classes, but never directly use the DAL class. If you want to support a different DB type for your app, you don't need to rewrite any queries or even any code, you simply change a setting that swaps one DAL class out for another...and that's it. To give you a better idea of how this is used, you can take a look at the DAL class, the table classes, and how they are used on this StackExchange Code Review page. To really understand what I'm trying to do, please take a look at my implementation first before suggesting a solution. Issues The strategy that I had used previously was to have all of the DAL classes share the same class name. This eliminated autoloading, so I had to manually load the appropriate DAL class in a switch statement. However, this approach presents some problems for testing and documentation purposes, so I'd like to find a different way to solve the problem of loading the correct DAL class more elegantly. Update to clarify the issue The problem basically boils down to inconsistencies in the class name (pre-PHP 5.3) or class namespace (PHP 5.3) and its location in the directory structure. At this point, all of my DAL classes have the same name, DBObject, but reside in different folders, MySQL, Oracle, etc. My table classes all extend DBObject, but which DBObject they extend varies depending on which one has been loaded. Basically, I'm trying to have my cake and eat it too. The table classes act as a stable API and extend a dynamic backend, the DAL (DBObject) classes. It works great, but I outsmarted myself and because of the inconsistencies with the class names and their locations, I can't autoload the DBObject, which makes running unit tests and generating API docs impossible for the DBObject classes because the tests and docs rely on auto-loading. Just loading the appropriate DBObject into memory using a factory method won't work because there will be times when I need to load multiple DBObjects for testing. Because the classes currently share a name, this causes a class is already defined error. I can make exceptions for the DBObjects in my test code, obviously, but I'm looking for something a little less hacky as there may future instances where something similar would need to be done. Solutions? Worst case scenario, I can continue my current strategy, but I don't like it very much, especially as I'll soon be converting my code to PHP 5.3. I suspect that I can use some sort of dynamic inheritance via either namespaces (preferred) or a dynamic class extension, but I haven't been able to find good examples of this implemented in the wild. In your answers, please suggest either an alternate pattern that would work for this use case or an example of dynamic inheritance done right. Please assume PHP 5.3 with namespaced code. Any code examples are greatly encouraged. The preferred constraints for the solution are: DAL class can be autoloaded. DAL classes don't share the same exact same namespace, but share the same class name. 
As an example, I would prefer to use classes named DbObject that use namespaces like Vm\Db\MySql and Vm\Db\Oracle. Table classes don't have to be rewritten with a change in DB type. The appropriate DB type is determined via a single setting only. That setting is the only thing that should need to change to interchange DB types. Ideally, the setting check should occur only once per page load, but I'm flexible on that.
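
    One illustration of the namespace-based direction (a sketch only, not the author's implementation): PHP 5.3's class_alias() can bind a fixed name to whichever backend a single DB-type setting selects, so every table class extends the same name while autoloading still resolves the real, namespaced class. The setting and namespace names below are placeholders.

        <?php
        // Hypothetical bootstrap, run once per request.
        $dbType = $config['db_type'];                              // e.g. 'MySql' or 'Oracle'
        class_alias("Vm\\Db\\{$dbType}\\DbObject", 'Vm\\Db\\DbObject');

        // Table classes are then written once against the alias:
        // class UserTable extends \Vm\Db\DbObject { /* ... */ }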

    Read the article

  • Oracle Text query parser

    - by Roger Ford
    Oracle Text provides a rich query syntax which enables powerful text searches. However, this syntax isn't intended for use by inexperienced end-users. If you provide a simple search box in your application, you probably want users to be able to type "Google-like" searches into the box, and have your application convert that into something that Oracle Text understands. For example, if your user types "windows nt networking" then you probably want to convert this into something like "windows ACCUM nt ACCUM networking". But beware - "NT" is a reserved word, and needs to be escaped. So let's escape all words: "{windows} ACCUM {nt} ACCUM {networking}". That's fine - until you start introducing wild cards. Then you must escape only non-wildcarded searches: "win% ACCUM {nt} ACCUM {networking}". There are quite a few other "gotchas" that you might encounter along the way. Then there's the issue of scoring. Given a query for "oracle text query syntax", it would be nice if we could score a full phrase match higher than a hit where all four words are present but not in a phrase. And then perhaps lower than that would be a document where three of the four terms are present. Progressive relaxation helps you with this, but you need to code the "progression" yourself in most cases. To help with this, I've developed a query parser which will take queries in Google-like syntax, and convert them into Oracle Text queries. It's designed to be as flexible as possible, and will generate either simple queries or progressive relaxation queries. The input string will typically just be a string of words, such as "oracle text query syntax", but the grammar does allow for more complex expressions:
        word : score will be improved if word exists
        +word : word must exist
        -word : word CANNOT exist
        "phrase words" : words treated as phrase (may be preceded by + or -)
        field:(expression) : find expression (which allows +, - and phrase as above) within "field"
    So for example if I searched for +"oracle text" query +syntax -ctxcat then the results would have to contain the phrase "oracle text" and the word syntax. Any documents mentioning ctxcat would be excluded from the results. All the instructions are in the top of the file (see "Downloads" at the bottom of this blog entry). Please download the file, read the instructions, then try it out by running "parser.pls" in either SQL*Plus or SQL Developer. I am also uploading a test file "test.sql". You can run this and/or modify it to run your own tests or run against your own text index. test.sql is designed to be run from SQL*Plus and may not produce useful output in SQL Developer (or it may, I haven't tried it). I'm putting the code up here for testing and comments. I don't consider it "production ready" at this point, but would welcome feedback. I'm particularly interested in comments such as "The instructions are unclear - I couldn't figure out how to do XXX", "It didn't work in my environment" (please provide as many details as possible), "We can't use it in our application" (why not?), "It needs to support XXX feature", "It produced an invalid query output when I fed in XXXX". Downloads: parser.pls test.sql
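
    For anyone unfamiliar with where the generated string ends up, the parser's output is intended for a CONTAINS clause; a hypothetical call against a made-up table would look like the following.

        SELECT SCORE(1), doc_id
          FROM my_docs
         WHERE CONTAINS(doc_text, '{windows} ACCUM {nt} ACCUM {networking}', 1) > 0
         ORDER BY SCORE(1) DESC;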

    Read the article

  • Silverlight Cream for December 16, 2010 -- #1011

    - by Dave Campbell
    In this Issue: John Papa, Tim Heuer, Jeff Blankenburg(-2-, -3-), Jesse Liberty, Jay Kimble, Wei-Meng Lee, Paul Sheriff, Mike Snow(-2-, -3-), Samuel Jack, James Ashley, and Peter Kuhn. Above the Fold: Silverlight: "Animation Texture Creator" Peter Kuhn WP7: "dows Phone from Scratch #13 — Custom Behaviors Part II: ActionTrigger" Jesse Liberty Shoutouts: Awesome blog post by Jesse Liberty about writing in general: Ten Requirements For Tutorials, Videos, Demos and White Papers That Don’t Suck From SilverlightCream.com: 1000 Silverlight Cream Posts and Counting! John Papa has Silverlight TV number 55 up and it's an inverview he did with me the day before the Firestarter in December... thanks John... great job in making me not look stooopid :) Silverlight service release today - 4.0.51204 Tim Heuer announced a service release of Silverlight ... check out his blog for the updates and near the bottom is a link to the developer runtime. What I Learned In WP7 – Issue #3 Jeff Blankenburg has been pushing out tips ... number 3 consisted of 3 good pieces of info for WP7 devs including more info about fonts and a good site for free audio files What I Learned In WP7 – Issue #4 In number 4, Jeff Blankenburg talks about where to get some nice free WP7 icons, and a link to a cool article on getting all sorts of device info What I Learned In WP7 – Issue #5 Number 5 finds Jeff Blankenburg giving up the XAP for a CodeMash sessiondata app... or wait for it to appear in the Marketplace next week. Windows Phone from Scratch #13 — Custom Behaviors Part II: ActionTrigger Wow... Jesse Liberty is up to number 13 in his Windows Phone from scratch series... this time it's part 2 of his Custom Behaviors post, and ActionTriggers specifically. Solving the Storage Problem in WP7 (for CF Developers) Jay Kimble has released his WP7 dropbox client to the wild ... this is cool for loading files at run-time... opens up some ideas for me at least. Building Location Service Apps in Windows Phone 7 Wei-Meng Lee has a big informative post on location services in WP7... getting a Bing Maps API key, getting the data, navigating and manipulating the map, adding pushpins... good stuff Using Xml Files on Windows Phone Paul Sheriff is discussing XML files as a database for your WP7 apps via LINQ to XML. Sample code included. ABC–Win7 App Mike Snow has been busy with Tips of the Day ... he published a children's app for tracing their ABC's and discusses some of the code bits involved. Win7 Mobile Application Bar – AG_E_PARSER_BAD_PROPERTY_VALUE Mike Snow's next post is about the infamous AG_E_PARSER_BAD_PROPERTY_VALUE error or worse in WP7 ... how he got it, and how he fixed it... could save you some hair... Forward Navigation on the Windows Phone Mike Snow's latest post is about forward navigation on the WP7 ... oh wait... there isn't any... check out the post. Day 2 of my “3 days to Build a Windows Phone 7 Game” challenge Samuel Jack details about 9 hours in day 2 of his quest to build an XNA app for WP7 from a cold start. Windows Phone 7 Side Loading James Ashley has a really complete write-up on side-loading apps onto your WP7 device. Don't get excited... this isn't a hack... this is instructions for side-loading using the Microsoft-approved methos, which means a registered device. Animation Texture Creator Remember Peter Kuhn's post the other day about an Animation Texture Creator? ... well today he has some added tweaks and the source code! ... thanks Peter! Stay in the 'Light! 

    Read the article

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall-mounted monitor whose whole purpose in life is to display perfmon stats from our various servers. And on a fairly regular basis, I have folks walk by asking what the lines mean. After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?' This of course would not do. So I talked to my friends and our network admin about an option to show something more eye-catching and visual, with which we could get at a glance a feel for what was up with our site. He initially pointed me to a video showing GLTail and Chipmunk done in Ruby. Realizing this was both awesome, and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks. Here's a link to a video of the current prototype: http://www.youtube.com/watch?v=jM_PWZbtH2I
    Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text. (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx). As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right. I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated. For example, if I get a 500, a huge red bubble pops out. Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients. Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages. The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving. The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).
    The net result is that anyone will be able to look up at the wall monitor and instantly get a quick feel for how things are going on the floor. Website slow? You can get a feel for both volume and utilized modules with one glance. Website crashing? Look for a wall of giant red bubbles. No activity at all? Maybe the site is down. Now couple this with utilization within a farm, and cross-referenced with a second app showing the same kind of data from your SQL database...
    As for the app itself, it's a Windows XNA project with the code in C#. The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics), which is just pure goodness. The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap). My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module. For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.
    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob
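
    A stripped-down sketch of the tail-and-classify loop described above (the log path, the regexes, and the SpawnBubble stand-in are all placeholders, not the actual project code) might look like this:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Threading;

        class LogTailer
        {
            static readonly Regex Error500 = new Regex(@"\s500\s");
            static readonly Regex Claims = new Regex(@"/claims/", RegexOptions.IgnoreCase);

            static void Main()
            {
                // FileShare.ReadWrite lets the web server keep writing while we read the tail.
                using (var stream = new FileStream(@"C:\logs\u_ex.log", FileMode.Open,
                                                   FileAccess.Read, FileShare.ReadWrite))
                using (var reader = new StreamReader(stream))
                {
                    stream.Seek(0, SeekOrigin.End);   // start at the current end of the file
                    while (true)
                    {
                        string line = reader.ReadLine();
                        if (line == null) { Thread.Sleep(250); continue; }

                        if (Error500.IsMatch(line))    SpawnBubble("red", true);     // HTTP 500
                        else if (Claims.IsMatch(line)) SpawnBubble("green", false);  // claims subsystem
                        else                           SpawnBubble("grey", false);   // everything else
                    }
                }
            }

            static void SpawnBubble(string color, bool large)
            {
                // In the real app this would add a Farseer body; here we just print it.
                Console.WriteLine("bubble: {0} ({1})", color, large ? "large" : "small");
            }
        }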

    Read the article

  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User Some of my proudest, obscure Windows tricks are losing their relevance. I know I’m not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of the lowly, pedestrian Windows users. No windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard: Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C); and I harbor a brooding, slow=growing list of programs that fail to support this correctly (that means you, Paint.NET). Every time a new version of windows comes out, the support for some of these minor time-saving habits get pared out. Will I complain publicly? Nope, I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce un-intuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that’s stood the test of time. Remember Batch Files? Yes, it’s true, batch files are fading faster than the world of print. But they're not dead yet. I still run into some situations where I opt to use batch files. They are still relevant for build processes, or just various development workflow tools. Sure, there’s powershell, but there’s that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to A) hunt down that setting on all machines affected and/or B) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of the unintuitive aspects such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use “REM” for comment lines, because there’s a cleaner way to do them! Double Colon For Eye-Friendly Comments Here is a very simple batch file, with pretty much minimal content: @ECHO OFF SETLOCAL REM This is a comment ECHO This batch file doesn’t do much If you code on a daily basis, this may be more suitable to your eyes: @ECHO OFF SETLOCAL :: This is a comment ECHO This batch file doesn’t do much Works great! I imagine I find it preferable due to the similarity to comments in other situations: // or ;  or # I’ve often make visual pseudo-line breaks in my code, and this colon-based syntax works wonders: @ECHO OFF SETLOCAL :: Do stuff ECHO Doing Stuff :::::::::::::::::::::::::::: :: Do more stuff ECHO This batch file doesn’t do much Not only is it more readable, but there’s a slight performance benefit. The batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into heated nerd debate. Two Pitfalls to Avoid Be aware of that there are a couple situations where this hack will fail you. It most likely won’t be a problem unless you’re getting really sophisticated with your batch files. Pitfall #1: Inline comments @ECHO OFF SETLOCAL IF EXIST C:\SomeFile.txt GOTO END ::This will fail :END Unfortunately, this fails. You can only have whitespace to the left of your comments. 
Pitfall #2: Code Blocks @ECHO OFF SETLOCAL IF EXIST C:\SomeFile.txt (         :: This will fail         ECHO HELLO ) Code blocks, such as if statements and for loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude’s site. He goes into more depth about the behavior of the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!

    Read the article

  • T-SQL Tuesday #025 – CHECK Constraint Tricks

    - by Most Valuable Yak (Rob Volk)
    Allen White (blog | twitter), marathoner, SQL Server MVP and presenter, and all-around awesome author is hosting this month's T-SQL Tuesday on sharing SQL Server Tips and Tricks.  And for those of you who have attended my Revenge: The SQL presentation, you know that I have 1 or 2 of them.  You'll also know that I don't recommend using anything I talk about in a production system, and will continue that advice here…although you might be sorely tempted.  Suffice it to say I'm not using these examples myself, but I think they're worth sharing anyway. Some of you have seen or read about SQL Server constraints and have applied them to your table designs…unless you're a vendor ;)…and may even use CHECK constraints to limit numeric values, or length of strings, allowable characters and such.  CHECK constraints can, however, do more than that, and can even provide enhanced security and other restrictions. One tip or trick that I didn't cover very well in the presentation is using constraints to do unusual things; specifically, limiting or preventing inserts into tables.  The idea was to use a CHECK constraint in a way that didn't depend on the actual data: -- create a table that cannot accept data CREATE TABLE dbo.JustTryIt(a BIT NOT NULL PRIMARY KEY, CONSTRAINT chk_no_insert CHECK (GETDATE()=GETDATE()+1)) INSERT dbo.JustTryIt VALUES(1)   I'll let you run that yourself, but I'm sure you'll see that this is a pretty stupid table to have, since the CHECK condition will always be false, and therefore will prevent any data from ever being inserted.  I can't remember why I used this example but it was for some vague and esoteric purpose that applies to about, maybe, zero people.  I come up with a lot of examples like that. However, if you realize that these CHECKs are not limited to column references, and if you explore the SQL Server function list, you could come up with a few that might be useful.  I'll let the names describe what they do instead of explaining them all: CREATE TABLE NoSA(a int not null, CONSTRAINT CHK_No_sa CHECK (SUSER_SNAME()<>'sa')) CREATE TABLE NoSysAdmin(a int not null, CONSTRAINT CHK_No_sysadmin CHECK (IS_SRVROLEMEMBER('sysadmin')=0)) CREATE TABLE NoAdHoc(a int not null, CONSTRAINT CHK_No_AdHoc CHECK (OBJECT_NAME(@@PROCID) IS NOT NULL)) CREATE TABLE NoAdHoc2(a int not null, CONSTRAINT CHK_No_AdHoc2 CHECK (@@NESTLEVEL>0)) CREATE TABLE NoCursors(a int not null, CONSTRAINT CHK_No_Cursors CHECK (@@CURSOR_ROWS=0)) CREATE TABLE ANSI_PADDING_ON(a int not null, CONSTRAINT CHK_ANSI_PADDING_ON CHECK (@@OPTIONS & 16=16)) CREATE TABLE TimeOfDay(a int not null, CONSTRAINT CHK_TimeOfDay CHECK (DATEPART(hour,GETDATE()) BETWEEN 0 AND 1)) GO -- log in as sa or a sysadmin server role member, and try this: INSERT NoSA VALUES(1) INSERT NoSysAdmin VALUES(1) -- note the difference when using sa vs. non-sa -- then try it again with a non-sysadmin login -- see if this works: INSERT NoAdHoc VALUES(1) INSERT NoAdHoc2 VALUES(1) GO -- then try this: CREATE PROCEDURE NotAdHoc @val1 int, @val2 int AS SET NOCOUNT ON; INSERT NoAdHoc VALUES(@val1) INSERT NoAdHoc2 VALUES(@val2) GO EXEC NotAdHoc 2,2 -- which values got inserted? SELECT * FROM NoAdHoc SELECT * FROM NoAdHoc2   -- and this one just makes me happy :) INSERT NoCursors VALUES(1) DECLARE curs CURSOR FOR SELECT 1 OPEN curs INSERT NoCursors VALUES(2) CLOSE curs DEALLOCATE curs INSERT NoCursors VALUES(3) SELECT * FROM NoCursors   I'll leave the ANSI_PADDING_ON and TimeOfDay tables for you to test on your own, I think you get the idea.  
(Also take a look at the NoCursors example, notice anything interesting?)  The real eye-opener, for me anyway, is the ability to limit bad coding practices like cursors, ad-hoc SQL, and sa use/abuse by using declarative SQL objects.  I'm sure you can see how and why this would come up when discussing Revenge: The SQL.;) And the best part IMHO is that these work on pretty much any version of SQL Server, without needing Policy Based Management, DDL/login triggers, or similar tools to enforce best practices. All seriousness aside, I highly recommend that you spend some time letting your mind go wild with the possibilities and see how far you can take things.  There are no rules! (Hmmmm, what can I do with rules?) #TSQL2sDay

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new Licensing plans from Microsoft.  In both cases, everyone calculates several numbers.  First, will I pay more or less under this plan?  Second, will my competition pay more or less than now?  Third, will <insert interesting person/company here> pay more or less?  Not that items 2 and 3 are meaningful, that is just how people think. Much like tax plans, the devil is in the details, so lets see how this looks.  Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx First up is a switch from per-socket to per-core licensing.  Anyone who didn’t see something like this coming should rapidly search for a new line of work because you are not paying attention.  The explosion of multi-core processors has made SQL Server a bargain.  Microsoft is in business to make money and the old per-socket model was not going to do that going forward. Per-core licensing also simplifies virtualization licensing.  Physical Core = Virtual Core, at least for licensing.  Oversubscribe your processors, that’s your lookout.  You still pay for  what is exposed to the VM.  The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow.  The catch is you have to have Software Assurance to make the licenses mobile.  Nice touch there. Let’s have a moment of silence for the late, unlamented, largely ignored Workgroup Edition.  To quote the Microsoft  FAQ:  “Standard becomes our sole edition for basic database needs”.  Considering I haven’t encountered a singe instance of SQL Server Workgroup Edition in the wild, I don’t think this will be all that controversial. As for pricing, it looks like a wash with current per-socket pricing based on four core sockets.  Interestingly, that is the minimum core count Microsoft proposes to swap to transition per-socket to per-core if you are on Software Assurance.  Reading the fine print shows that if you are using more, you will get more core licenses: From the licensing FAQ. 15. How do I migrate from processor licenses to core licenses?  What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter). Looks like the folks who invested in the AMD 12-core chips will make out like bandits. Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only.  Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs).  No, you don’t get ColumnStore, Compression, or Partitioning in the BI Edition.  Those are Enterprise scale features, ThankYouVeryMuch.  Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8 core server). The only part of the message I am missing is if the current Failover Licensing Policy will change.  Do we need to fully or partially license failover servers?  That is a detail I definitely want to know.

    Read the article

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense. If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right. In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meet up in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone. There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability. But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

    Read the article

  • Construction Paper, Legos, and Architectural Modeling

    I can remember as a kid playing with construction paper and Legos to explore my imagination. Through my exploration I was able to build airplanes, footballs, guns, and more out of paper. Additionally, I could create entire cities, robots, or anything else I could imagine out of Legos. These toys, I now realize, were in fact tools that gave me an opportunity to explore my ideas in the physical world through the use of modeling. My imagination was allowed to run wild as I, unknowingly at the time, made design decisions that directly affected the models I was building from the raw materials. To prove my point further, I can remember building a paper airplane that seemed to go nowhere when I tried to throw it. So I decided to attach a paper clip to the plane before I threw it the next time, to test my concept that by adding more weight to the plane it would fly better and for longer distances. The paper airplane allowed me to model my design decision through the creation of an artifact, in that I created a paper airplane that was carrying extra weight through the incorporation of the paper clip into the design. Also, I remember using Legos to build all sorts of creations, and these creations became artifacts of my imagination. As I further and further defined my Lego creations through the process of playing, I was able to create elaborate artifacts of my imagination. These artifacts represented design decisions I had made in the evolution of my creation through my child-like design process.
    In some form or fashion, the artifacts I created as a kid are very similar to the artifacts that I create when I model a software architectural concept or a software design, in that the process of making decisions is directly translated into a tangible model in the form of an architectural model. Architectural models have been defined as artifacts that depict design decisions of a system's architecture. The act of creating architectural models is architectural modeling: the process of creating a physical model based on architectural concepts and documenting these design decisions. In the process of creating models, the standard notation used is architectural modeling notation. This notation is the primary method of capturing the essence of design decisions regarding architecture. Modeling notations can vary based on the need and intent of a project; typically they range from natural language to a diagram-based notation. Currently, the Unified Modeling Language (UML) is the industry standard architectural modeling notation, because it allows architectures to be defined through a series of boxes, lines, arrows and other basic symbols that encapsulate design decisions into virtual components, connectors, configurations and interfaces. Furthermore, UML allows for additional breakdown of models through the use of natural language to explain each section of the model in plain English. One of the major factors in architectural modeling is to define what is to be modeled. As a basic rule of thumb, I tend to model architecture based on the complexity of the systems or sub-systems of the architecture. Another key factor is the level of detail that is actually needed for a model. For example, if I am modeling a system for a CEO to view, then the low-level details will be omitted.
In comparison, if I was modeling a system for another engineer to actually implement I would include as much detailed information as I could to help the engineer implement my design.

    Read the article

  • Pointing non-www to a specific sub-directory

    - by Ben Sinclair
    I might be going about this all wrong, so let me know if I am. I am creating software that allows people to sign up and have their own sub-domain on my website. So say my website is ben.com; they could have their own sub-domain called juice.ben.com. When they type their sub-domain juice.ben.com in their address bar, it will load the contents of the root directory. I have also set up a .htaccess redirect to redirect www.ben.com to ben.com. Not sure if this matters to my question but I thought I'd mention it. OK, so basically what I think I need to do is put the software they've signed up for in the root directory. So when someone goes to juice.ben.com, they will be pointed to the root directory (I believe I can set up wildcard sub-domains with my host) and the software will then analyse their sub-domain and display their account. Now, if someone just types ben.com into their browser, I want it to show the contents of the ben.com/_website/ folder but still show in the address bar that they are in the root directory. Hopefully I am making sense :) Is this possible with htaccess? If so, what do I need to do?
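
    For what it's worth, the naked-domain part is usually handled with a host-conditioned rewrite in the same .htaccess; the sketch below is an untested outline (the domain and the _website folder come from the question, everything else is assumed) that leaves the sub-domains alone so they keep hitting the root.

        RewriteEngine On
        # Only rewrite the bare domain, not juice.ben.com or other sub-domains.
        RewriteCond %{HTTP_HOST} ^ben\.com$ [NC]
        RewriteCond %{REQUEST_URI} !^/_website/
        RewriteRule ^(.*)$ /_website/$1 [L]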

    Read the article

  • Using IsolatedStorage on a IIS server

    - by JoeBilly
    I'm a bit confused about the use of Isolated Storage on an IIS server. I understand the goal of Isolated Storage: it provides a safe place to store data with no worry about how and where that place is. Since Isolated Storage has a by-user and by-assembly approach, I'm not too wild about using it on an IIS server, where applications almost have their own identity. I haven't really seen the point of impersonating a web application, and have almost never seen impersonated web applications myself, but that is my point of view. Using Isolated Storage on a server means: using isolated stores under \Documents and Settings\<user>\; which means \Documents and Settings\Default User\ when the application pool runs as Local System or Network Service, I guess; which also means write rights on this folder for Local System or Network Service; and using impersonation. Regarding a web application (logically), these ideas are confusing me... Documents and Settings? Default User? Enable impersonation just for storage? No control over storage on the server? Uh? So I'm facing a dilemma: use System.IO.Packaging (with Isolated Storage inside) in web applications, or find an alternative? Am I wrong in my approach? Did I miss something? Any point of view is appreciated, and an explanation of the Isolated Storage philosophy with IIS could be an answer. Thanks!
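
    For comparison, the machine-scoped stores sidestep the per-user profile question entirely; the fragment below is only a sketch (the file name is arbitrary, and it still assumes the application pool identity is allowed to use the machine store).

        // Requires System.IO and System.IO.IsolatedStorage.
        using (IsolatedStorageFile store = IsolatedStorageFile.GetMachineStoreForAssembly())
        using (var stream = new IsolatedStorageFileStream("cache.txt", FileMode.Create, store))
        using (var writer = new StreamWriter(stream))
        {
            writer.WriteLine("stored outside any user profile");
        }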

    Read the article

  • Is there a Standard or Best Practice for Perl Programs, as opposed to Perl Modules?

    - by swestrup
    I've written any number of perl modules in the past, and more than a few stand-alone perl programs, but I've never released a multi-file perl program into the wild before. I have a perl program that is almost at the beta stage and is going to be released open source. It requires a number of data files, as well as some external perl modules -- some I've written myself, and some from CPAN -- that I'll have to bundle with it so as to ensure that someone can just download my program and install it without worrying about hunting for obscure modules. So, it sounds to me like I need to write an installer to copy all the files to standard locations so that a user can easily install everything. The trouble is, I have no idea what the standard practice would be for this. I have found lots of tutorials on perl module standards, but none on perl program standards. Does anyone have any pointers to standard paths, installation proceedures, etc, for perl programs? This is going to be complicated by the fact that the program is multi-platform. I've been testing it in Linux, but its designed to work equally well in Windows.
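
    One conventional route (not the only one) is to package the program the way a CPAN distribution is packaged, letting ExtUtils::MakeMaker install the script into the standard bin directory and declare the CPAN dependencies; the skeleton below is hypothetical, with placeholder names.

        # Makefile.PL -- hypothetical skeleton
        use strict;
        use warnings;
        use ExtUtils::MakeMaker;

        WriteMakefile(
            NAME         => 'My::App',
            VERSION_FROM => 'lib/My/App.pm',
            EXE_FILES    => [ 'bin/myapp' ],        # installed into the site bin directory
            PREREQ_PM    => {
                'Some::CPAN::Module' => 0,          # placeholder dependency
            },
        );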

    Read the article

  • Daylight Savings Handling in DateDiff() in MS Access?

    - by PowerUser
    I am fully aware of DateDiff()'s inability to handle daylight savings issues. Since I often use it to compare the number of hours or days between 2 datetimes several months apart, I need to write up a solution to handle DST. This is what I came up with: a function that first subtracts 60 minutes from a datetime value if it falls within the date ranges specified in a local table (LU_DST). Thus, the usage would be: datediff("n", Conv_DST_to_Local([date1]), Conv_DST_to_Local([date2])). My question is: is there a better way to handle this? I'm going to make a wild guess that I'm not the first person with this question. This seems like the kind of thing that should have been added to one of the core reference libraries. Is there a way for me to access my system clock to ask it if DST was in effect at a certain date & time?

        Function Conv_DST_to_Local(X As Date) As Date
            Dim rst As DAO.Recordset
            Set rst = CurrentDb.OpenRecordset("LU_DST")
            Conv_DST_to_Local = X
            While rst.EOF = False
                If X > rst.Fields(0) And X < rst.Fields(1) Then Conv_DST_to_Local = DateAdd("n", -60, X)
                rst.MoveNext
            Wend
        End Function

    Notes: I have visited and imported the BAS file from http://www.cpearson.com/excel/TimeZoneAndDaylightTime.aspx. I have spent at least an hour by now reading through it and, while it may do its job well, I can't figure out how to modify it to my needs. But if you have an answer using his data structures, I'll take a look. Timezones are not an issue since this is all local time.

    Read the article

  • Versions. "Is not a working copy"

    - by bartclaeys
    A little background first: I'm a designer/developer and decided to use subversion for a personal project. I'm the only one working on this project. I've setup a Beanstalk account and installed Versions on Mac. Locally I have MySQL and PHP running through MAMP. What I want to do is develop locally and push code into Beanstalk. I'm not planning on deploying from Beanstalk to my live server at this moment. In Beanstalk I created a repository and imported all my code. I then installed Versions and added a bookmark to the Beanstalk repository. So far so good. Next I guess (this is a wild guess) I need to add a so called 'working copy bookmark' so that Versions can watch my local copy for changes and commit it to my Beanstalk repository. Problem: When I click 'Create working copy bookmark' in Versions and I select a folder on my computer I get the error: '/Applications/MAMP/www_mydomain' is not a working copy' I have no clue what that means and now I'm stuck. How can I tell Versions to keep track of changes of a local folder?
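
    In Subversion terms, a "working copy" is a folder produced by svn checkout (it carries .svn metadata), which an ordinary MAMP folder is not; a command-line sketch of creating one, with the repository URL and paths as placeholders, might be:

        cd /Applications/MAMP
        # Replace the URL with the repository URL shown in Beanstalk.
        # If the target folder already contains files, check out elsewhere first
        # or use --force and then review what svn reports as unversioned.
        svn checkout https://account.svn.beanstalkapp.com/myproject/trunk www_mydomain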

    Read the article

  • How to Debug an exception: Type is not marked as serializable... when the type is marked as serializable

    - by rism
    I'm trying to check ((Request.Params["crmid"] != null)) in a web page, but it keeps throwing a serialization exception: Type 'QC.Security.SL.SiteUser' in assembly 'QC.Security, Version=1.0.0.1, Culture=neutral, PublicKeyToken=null' is not marked as serializable. The type is, however, marked as serializable as follows:

        [Serializable()]
        public class SiteUser : IIdentity
        {
            private long _userId;
            public long UserId
            {
                get { return _userId; }
                set { _userId = value; }
            }

            private string _name;
            public string Name
            {
                get { return _name; }
            }

            private bool _isAuthenticated;
            public bool IsAuthenticated
            {
                get { return _isAuthenticated; }
            }

            private string _authenticationType;
            public string AuthenticationType
            {
                get { return _authenticationType; }
            }

    I've no idea how to debug this, as I can't step into the serializer code to find out why it's falling over. The call stack is only one frame deep before it hits [External Code]. And the error message is next to useless given that the type is clearly marked as serializable. It was working fine, but now "all of a sudden" it doesn't, which typically means some dumb bug in Visual Studio, but rebooting doesn't help "this" time. So now I don't know if it's a stupid VS bug, a completely unrelated error for which I'm getting a serialization exception, or something I'm doing wrong. The truth is I just don't trust VS anymore, given the number of wild goose chases I've been on over the last several months which were "fixed" by rebooting VS 2008 or some other ridiculous workaround.
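
    One way to localize this kind of failure (a debugging sketch only; the siteUser variable is assumed) is to round-trip the suspect object through BinaryFormatter yourself - the resulting SerializationException names the exact member or type it choked on, which is often a non-serializable field hiding inside an otherwise [Serializable] graph, or a stale copy of the assembly being loaded.

        // Requires System.IO and System.Runtime.Serialization.Formatters.Binary.
        var formatter = new BinaryFormatter();
        using (var ms = new MemoryStream())
        {
            // Throws on the first non-serializable member it encounters.
            formatter.Serialize(ms, siteUser);   // siteUser: the SiteUser instance in question
        }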

    Read the article
