Search Results

Search found 2395 results on 96 pages for 'christopher chance'.

Page 94/96 | < Previous Page | 90 91 92 93 94 95 96  | Next Page >

  • Clean Code Development & Flexible work environment - MSCC 26.10.2013

    Finally, some spare time to summarize my impressions and experiences of the recent meetup of the Mauritius Software Craftsmanship Community. I already posted my comment on the event and on our social media networks: Professional - It's getting better with our meetups and I really appreciated that 'seniors' and 'juniors' were present today. Despite running a little bit out of time, it was really great to see more students coming to the gathering. This time we changed location for our Saturday meetup and it worked out very well. A big thank you to Ebene Accelerator, namely Mrs Poonum, for the ability to use their meeting rooms for our community get-together. Already some weeks ago I had a very pleasant conversation with her about the MSCC aims, 'mission' and how we organise things. Additionally, I think that an environment like the Ebene Accelerator is a good choice as it acts as an incubator for young developers and start-ups.

    Reactions from other craftsmen
    Before I put my thoughts about our recent meeting down, I'd like to mention and cross-link to some of the other craftsmen that were present: "MSCC meet up is a massive knowledge gaining strategies for students, future entrepreneurs, or for geeks all around. Knowledge sharing becomes a fun. For those who have not been able to made it do subscribe on our MSCC meet up group at meetup.com." -- Nitin on Learning is fun with #MSCC #Ebene Accelerator "We then talked about the IT industry in Mauritius, salary issues in various field like system administration, software development etc. We analysed the reasons why people tend to hop from one company to another. That was a fun debate." -- Ish on MSCC meetup - Gang of Geeks "Flexible Learning Environment was quite interesting since these lines struck cords : "You're not a secretary....9 to 5 shouldn't suit you"....This allowed reflection...deep reflection....especially regarding the local mindset...which should be changed in a way which would promote creativity rather than choking it till death..." -- Yannick on 2nd MSCC Monthly Meet-up And others on Facebook... ;-) Visual impressions are available on our Meetup event page.

    More first time attendees
    With great pleasure I noticed that we have once again more first time visitors. A quick look around showed that we had a majority of UoM students in first, second or last year. Some of them are already participating in the UoM Computer Club or are nominated as members of the Microsoft Student Partner (MSP) programme. Personally, I really appreciate the fact that the MSCC is able to gather such a broad audience. And as I wrote initially, the MSCC is technology-agnostic; we want IT people from any segment of this business. Of course, students who are about to delve into the 'real world' of working are highly welcome, and I hope that they might get one or another glimpse of experience or advice from employees.

    Sticking to the schedule? No, not really...
    And honestly, it was a good choice to go a little bit off the beaten track. I mean, yes, we have a 'rough' agenda of topics that we would like to talk about or have a presentation about. But we keep it 'agile'. Due to the high number of new faces, we initiated another quick round of introductions and I gave a really brief overview of the MSCC. Next, we started to reflect on the Clean Code Developer (CCD) - Red Grade which we introduced at the last meetup. Nirvan was the lucky one and he did a good job of summarizing the various abbreviations of the first level of being a CCD. Actually, more interestingly, we exchanged experiences about the principles and practices of Red Grade, and it was very informative to learn that Yann actually 'interviewed' a couple of friends, other students, local guys working in IT companies as well as some IT friends from India in order to cross-check what he had learned first-hand about Clean Code. Currently, he is reading Robert C. Martin's book on that topic and I'm looking forward to his review soon.

    More output generates more input
    What seems like a personal mantra has been working out pretty well for me since the beginning of this year. Being more active on social media networks, writing more articles on my blog, starting the Mauritius Software Craftsmanship Community, and contributing more to other online communities has helped me to receive more project requests, job offers and possibilities to expand my business at IOS Indian Ocean Software Ltd. Actually, it is not a coincidence that one of the questions new craftsmen should answer during registration asks about having a personal blog. Whether you are just curious about IT, right in the middle of your Computer Studies, or have already been working in software development or system administration for a while, you should consider advertising and marketing yourself online. The easiest way to do this is to have online profiles on professional social media networks like LinkedIn, Xing, Twitter, and Google+ (not Facebook, which should be considered private only), and to have a personal blog. Why? -- Be yourself, be proud of your work, and let other people know that you're passionate about your profession. Trust me, this is going to open up opportunities you might not have dreamt about... Exchanging ideas about having a professional online presence - MSCC meetup on the 26th October 2013 Furthermore, consider putting your Curriculum Vitae online, too. There are quite a number of service providers like 1ClickCV, Stack Overflow Careers 2.0, etc. which give you the ability to have an up-to-date CV online. At least put it on your site, next to your personal blog. Similar to what you would be able to see on my site here.

    Cyber Island Mauritius - are we there?
    A couple of weeks ago I got a 'cold' message on LinkedIn from someone living in the U.S. asking about the circumstances and conditions of the IT world of Mauritius. He has a great business idea, venture capital and is currently looking for a team of software developers (mainly mobile - iOS) for a new startup here in Mauritius. Since then we have exchanged quite a few details through private messages and Skype conversations, and I suggested that it might be a good chance for him to join our meetup through a conference call and see for himself about potential candidates. During approximately 30 to 40 minutes the brief idea of the new startup was presented - very promising state-of-the-art technology aspects and integration of various public APIs - and we had a good Q&A session about it. Also, thanks to the excellent bandwidth provided by the Ebene Accelerator, the video conference between the three parties went absolutely well.

    Clean Code Developer - Orange Grade
    Hahaha - nice one... Being at the Orange Tower at Ebene and then talking about an Orange Grade as CCD. Well, once again I provided an overview of the principles and practices in that rank of Clean Code, and similar to our last meetup we discussed the various aspects of each principle, whether someone had already come across it during studies or work, and how it could affect their future view on their source code. Following are the principles and practices of Clean Code Developer - Orange Grade:
    CCD Orange Grade - Principles: Single Level of Abstraction (SLA), Single Responsibility Principle (SRP), Separation of Concerns (SoC), Source Code conventions
    CCD Orange Grade - Practices: Issue Tracking, Automated Integration Tests, Reading, Reading, Reading, Reviews
    Especially the part on reading technical books got some extra attention. We quickly gathered our views on that and came up with a result that ranges between Zero (0) and Fifteen (15) book titles per year. Personally, I'm keeping my progress between Six (6) and Eight (8) titles per year, but at least One (1) per quarter of a year. This is also connected to the fact that I'm participating in the O'Reilly Reader Review Program and have the additional benefit of getting access to free books just by writing and publishing a review afterwards. We also had a good exchange on the extended topic of 'Reviews' - which in my opinion is abnormally difficult here in Mauritius for various reasons. As far as I can tell from my experience working with Mauritian software developers, either as colleagues, employees or during consulting services, there are unfortunately two dominant patterns on that topic: Keeping quiet, and Running away. Honestly, I have no evidence as to why these are the two 'solutions' to reviews, but that's the situation that I have had to face over the last couple of years. Sitting together and talking about problematic issues, tackling the root causes of de-motivational activities and working on general improvements doesn't seem to find any ground within the IT world of Mauritius. Are you a typist or a creative software craftsman? - MSCC meetup on the 26th October 2013 One very good example that we talked about was the phenomenon of 'job hoppers', as you can easily observe on someone's CV - those people change jobs every single year, for no obvious reason! Frankly speaking, I wouldn't even consider an IT person like that for an interview. As a company you're investing money and effort into the abilities of your employees. Hiring someone that won't stay for a longer period is out of the question. And sorry to say, these kinds of IT guys smell fishy regarding their capabilities and are more likely to cause problems than actually produce productive results. One of the reasons why there is a probation period on an employment contract is to give you the liberty to leave as early as possible in case you don't like your new position. Don't fool yourself or waste other people's time and money by hanging around a full year only to snatch the bonus payment...

    Future outlook: Developer's Conference
    Even though it is not official yet, I have already mentioned it several times during our weekly Code & Coffee sessions. The MSCC is looking forward to being able to organise or contribute to an upcoming IT event. Currently, the rough schedule is set for April 2014 but this mainly depends on the availability of location(s), a decent time frame for preparations, and the underlying procedures with public bodies to have it approved and so on. As soon as the information about the date and location has been fixed, there will be a 'Call for Papers' period in order to attract local IT enthusiasts to apply for a session slot and talk about their field of work and their passion in IT. More to come for sure...

    My resume of the day
    It was a great gathering and I am very pleased about the fact that we had another 15 craftsmen (plus 2 businessmen on a conference call and 2 young apprentices) in the same room, talking about IT-related topics and sharing their experience as employees and students. Personally, I really appreciated the feedback from the students about their current view on their future career, and I really hope that some of them are going to pursue their dreams. Start promoting yourself and it will happen... Looking forward to your blogs! And last but not least, our numbers on Meetup and Facebook have increased as a direct consequence of this meetup. Please spread the word about the MSCC and get your friends and colleagues to join our official site. The more craftsmen we have, the better our chances of achieving something great! Thanks!

    Read the article

  • CodePlex Daily Summary for Sunday, July 21, 2013

    CodePlex Daily Summary for Sunday, July 21, 2013Popular ReleasesMagick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6.MISAO: Ver. 5.33: Latest app and add-insC# Intellisense for Notepad++: Initial release: Members auto-complete Integration with native Notepad++ Auto-Completion Auto "open bracket" for methods Right-arrow to accept suggestions51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet This release introduces the 51Degrees.mobi IIS Vary Header Fix. When Compression and Caching is used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4Handlers now have a ‘Count’ property. This is an integer value that shows how many devices in the dataset that use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719 2013/7/19 Pattern Changes: * DapperContext pattern is added. * All patterns are updated to work with one-to-one relations. Changes: * One-to-one relation is supported. * Minor bug fixes.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1 Additional support for Windows 8.1 Preview New API (JS): addTextTrack New API (JS): msKeys New API (JS): msPlayToPreferredSourceUri New API (JS): msSetMediaKeys New API (JS): onmsneedkey New API (Xaml): SetMediaStreamSource method New API (Xaml): Stretch property New API (Xaml): StretchChanged event New API (Xaml): AreTransportControlsEnabled property New API (Xaml): IsFullWindow property New API (Xaml): PlayToPreferredSourceUri proper...Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: - Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. - Added fri...Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor Cut/copy/paste support Bug fixesDaRenamer: Renamer 2.1.0.5: Version 2.1.0.5 -fixed minor bugInstall Verify Tool: Install Verify Tool V 1.0: Win Service Web Service Win Service Client Web Service ClientOrchard Project: Orchard 1.7 RC: Planning releasedTerminals: Version 3.1 - Release: Changes since version 3.0:15992 Unified usage of icons in user interface Added context menu in Organize favorites grid Fixed:34219 34210 34223 33981 34209 Install notes:No changes in database (use database from release 3.0) No upgrade of configuration, passwords, credentials or favorites See also upgrade notes for release 3.0PMU Connection Tester: PMU Connection Tester v4.4.0: This is the current release build of the PMU Connection Tester, version 4.4.0 This version of the connection tester was released with openPDC 1.5 SP1 and openPDC 2.0 BETA. This application requires that .NET 4.0 already be installed on your system. Note this is the last release of the PMU Connection Tester that will built on .NET 4.0 using the TVA Code Library and the Time-series Framework. 
Future releases of the PMU Connection Tester will be built on .NET 4.5 (or later) using the Grid Sol...HiUpdateTools - easy publish and update your app: HiUpdateTools Add-in 1.0.0.5: - Generate ClientConfig.xml and adding to the project - Set ClientConfig.xml option "CopyToOutputDirectory"= Copy if newer - Fix client path not ending the backslash - Add Client assembly to VSX package - On first use, the tool is added to the reference to the client assembly - Fix client application - Multi-instance application - Run single instance of update applicationopen gaze and mouse analyzer: Ogama 4.4 BETA: This beta was published on 16.07.2013 and includes fixes and improvements since last 4.3 release, mainly in the recording section which solves problems with tobii and mirametrix devices, see the source code tab for details. Please test it, if you have one of this devices and give me feedback using the issue tracker or discussion tabs. Don´t forget to install .Net 4 framework and SQL Express before installing Ogama. When using Tobii tracking devices, you have to install apple bonjour also. On...SpaceFlight: SpaceFlight_v1.1: Added VCRedist.exe , run this first if you get the "MSVCP100.dll is missing" issueAdvanced Resource Tab for Blend: Advanced Resource Tab 2.0: Added filtering of (sub)-resource items and collapsing / expanding of all resource dictionaries.Media Companion: Media Companion MC3.573b: XBMC Link - Let MC update your XBMC Library Fixes in place, Enjoy the XBMC Link function Well, Phil's been busy in the background, and come up with a Great new feature for Media Companion. Currently only implemented for movies. Once we're happy that's working with no issues, we'll extend the functionality to include TV shows. All the help for this is build into the application. Go to General Preferences - XBMC Link for details. Help us make it better* Currently only tested on local and ...Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crash if 'ShowPendingUpdates' is start with wrong credentials. Fix a bug where WPP crash if ArrivalDateAfter and ArrivalDateBefore is equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request) Add an option, when right-clicking on a computer, you can ask for display the current logon user of the remote computer. Add an option in settings to choose if WPP ping remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...Lab Of Things: vBeta1: Initial release of LoTNew ProjectsBlindspot: This project aims to create a fully-functional windowless desktop application allowing blind/visually impaired music fans the chance to access Spotify.HelloReading: ???????MonteMediaCC: The Monte Media Library is a Java library for processing media data. Supported media formats include still images, video, audio and meta-data. NETDeob: Deobfuscator and Unpacker for .NET Files.NthCatalanNumber: Write a program to calculate the Nth Catalan number by given N. http://en.wikipedia.org/wiki/Catalan_numberPokemon Battle Online 0791: ???????project site: the-west minimapPSeG Server FIles: PSeG Server FilesSample VariableSizedWrapGrid: This is an example of the use of particular VariableSizedWrapGrid GridView control. Where we can set the size of each item as needed It can make an appearance oServices Monitoring Management Pack: La supervision des services automatiques est un élément qui est déficient dans Operations Manager. 
Ce « Management Pack » sert à surveiller les services automatSinaIsTestingHisNewProject: this project is only for testing this site and any copying without my permission will be sewed by me and will be tracked by CIA and FBI and will be HeadShotSum of a sequence: Write a program that, for a given two integer numbers N and X, calculates the sum S = 1 + 1!/X + 2!/X2 + … + N!/XN Synchrophasor Analytics: Synchrophasor Analytics is a front end data processing and conditioning for downstream phasor based applications and an extension for development and analysis.tcp-bridge: a tcp bridge service to redirect incoming connections to another machine by using another incoming or outgoing connectionTypeScript Class Library: The TypeScript Class Library WPDialog: Library for developing app dialogs for Windows phone similar to Monotouch.DialogWPF File Renamer: Simple file renaming application made to brush up on my WPF data binding and MVVM skills.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned - really, the footnote I should put at the end of all of my blog posts).

    To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager - you include references to these items and use them like so:

        // Instantiate an entity framework file reader and file writer
        MetadataLoader loader = new MetadataLoader(this);
        EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

        // Load the entity model metadata workspace
        MetadataWorkspace metadataWorkspace = null;
        bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
        EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

        // Create an IO class to contain the 'get' methods for all entities in the model
        fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity so we can generate classes and methods for each. The code for that is blissfully simple:

        // Iterate through each entity in the model
        foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
        {
            // Iterate through each primitive property of the entity
            foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
            {
                // TODO:  Create properties
            }
            // Iterate through each relationship of the entity
            foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
            {
                // TODO:  Create associations
            }
        }

    There really isn't anything more advanced than that going on in the text template - the only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #> like so:

        public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships. The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players) so it should be pretty intuitive to use on a front-end. This text template is available here - you can tweak the inputFiles array to load one or many different edmx models and generate the basic xml IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the enterprise library 5.0.

    The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me - I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

        return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
            select new Player
            {
                Id = int.Parse(element.Attribute("Id").Value)
                ,ParentName = element.Parent.Name.LocalName
                ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
                ,Name = element.Attribute("Name").Value
                ,PositionId = int.Parse(element.Attribute("PositionId").Value)
            };

    It is all done in one line of code, no looping needed. Even though GetXmlData loads the entire xml file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data - where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate xml at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this:

        <Team Id="0" Name="0">
          <Player Id="0" Name="0" PositionId="0">
            <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
          </Player>
        </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup xml yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data. Note that this is TWO xml files - Lookups.xml and Teams.xml:

        <Lookups Id="0">
          <Position Id="0" Name="Quarterback"/>
          <Position Id="1" Name="Runningback"/>
        </Lookups>

        <Teams Id="0">
          <Team Id="0" Name="Chicago">
            <Player Id="0" Name="QB Bears" PositionId="0">
              <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
              <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
            </Player>
            <Player Id="1" Name="RB Bears" PositionId="1">
              <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
              <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
              <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
            </Player>
          </Team>
        </Teams>

    Ok, so we have some data, we have a way to read/write that data and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: present the data to the user and give them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
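    As a quick way to experiment with the same LINQ to XML idea against the sample Teams.xml above, here is a minimal, self-contained sketch; the Player class, its fields and the file path are assumptions for illustration, not the generated code from the template:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        // Hypothetical Player class just for this sketch; the real generated class has more members.
        class Player
        {
            public int Id;
            public string Name;
            public int PositionId;
            public string ParentName;
        }

        class TeamsXmlDemo
        {
            static void Main()
            {
                // Load the sample data file (path is an assumption)
                XDocument doc = XDocument.Load("Teams.xml");

                // Same shape of query as the generated code: one Player per <Player> element
                var players = from element in doc.Descendants("Player")
                              select new Player
                              {
                                  Id = int.Parse(element.Attribute("Id").Value),
                                  Name = element.Attribute("Name").Value,
                                  PositionId = int.Parse(element.Attribute("PositionId").Value),
                                  ParentName = element.Parent.Name.LocalName
                              };

                foreach (Player p in players)
                    Console.WriteLine("{0} (parent: {1})", p.Name, p.ParentName);
            }
        }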

    Read the article

  • SQL Server 2005: Improving performance for thousands of Insert requests. Logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of a SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 logins/logouts per second (reported at the same time by Profiler). Then after 15ms I again have a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006, the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows, runs on an application server and runs 2 stored procedures on a remote SQL Server 2005, inserting a parent record, retrieving the Identity value and using it to call the second stored procedure 1, 2 or multiple times (sometimes several thousand times), inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them being covering non-clustered. There is a pretty complex Insert trigger that copies the inserted detail record to an archive table. All in all I only have 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which gives me the chance to insert only about 8 records per second. So my question is if there is some way to improve the inserting of records, since the company loads 100 thousand records and does daily planning and has an SLA to fulfil client requests coming in as flat file orders, and some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of the DataAdapter to send multiple stored procedure calls, SQL Bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe this has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, and the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update, changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to have staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user that is being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
for (int i = 1; i <= 10000; i++){ using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;")) {conn.Open(); using (SqlCommand cmd = conn.CreateCommand()){ cmd.CommandText = "use tempdb"; cmd.ExecuteNonQuery();}}} SQL Server Profiler trace: Audit Login master 2010-01-13 23:18:45.337 1 - Nonpooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.337 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.337 Audit Logout tempdb 2010-01-13 23:18:45.337 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled
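    One of the options mentioned above - batching the child inserts with SQL bulk copy from a DataTable into a staging table - could look roughly like the sketch below. The table and column names are made up for illustration; the point is that the whole batch reuses a single open connection (avoiding the per-call login/logout and sp_reset_connection churn visible in the trace) and reaches the server as one bulk operation:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsertChildren(string connectionString, DataTable childRows)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open(); // one connection for the whole batch, not one per row
                using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
                {
                    bulk.DestinationTableName = "dbo.ChildStaging"; // hypothetical staging table
                    bulk.BatchSize = 5000;                          // rows sent per round trip
                    bulk.BulkCopyTimeout = 0;                       // no timeout for large loads
                    bulk.WriteToServer(childRows);                  // single bulk operation
                }
            }
        }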

    Read the article

  • Problems with Google Maps API v3 + jQuery UI Tabs

    - by Bears will eat you
    There are a number of problems, which seem to be fairly well-known, when using the Google Maps API to render a map within a jQuery UI tab. I've seen SO questions posted about similar issues (here and here, for example) but the solutions there only seem to work for v2 of the Maps API. Other references I checked out are here and here, along with pretty much everything I could dig up through Googling. I've been trying to stuff a map (using v3 of the API) into a jQuery tab with mixed results. I'm using the latest versions of everything (currently jQuery 1.3.2, jQuery UI 1.7.2, don't know about Maps). This is the markup & javascript: <body> <div id="dashtabs"> <span class="logout"> <a href="go away">Log out</a> </span> <!-- tabs --> <ul class="dashtabNavigation"> <li><a href="#first_tab" >First</a></li> <li><a href="#second_tab" >Second</a></li> <li><a href="#map_tab" >Map</a></li> </ul> <!-- tab containers --> <div id="first_tab">This is my first tab</div> <div id="second_tab">This is my second tab</div> <div id="map_tab"> <div id="map_canvas"></div> </div> </div> </body> and $(document).ready(function() { var map = null; $('#dashtabs').tabs(); $('#dashtabs').bind('tabsshow', function(event, ui) { if (ui.panel.id == 'map_tab' && !map) { map = initializeMap(); google.maps.event.trigger(map, 'resize'); } }); }); function initializeMap() { // Just some canned map for now var latlng = new google.maps.LatLng(-34.397, 150.644); var myOptions = { zoom: 8, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP }; return new google.maps.Map($('#map_canvas')[0], myOptions); } And here's what I've found that does/doesn't work (for Maps API v3): Using the off-left technique as described in the jQuery UI Tabs documentation (and in the answers to the two questions I linked) doesn't work at all. In fact, the best-functioning code uses the CSS .ui-tabs .ui-tabs-hide { display: none; } instead. The only way to get a map to display in a tab at all is to set the CSS width and height of #map_canvas to be absolute values. Changing the width and height to auto or 100% causes the map to not display at all, even if it's already been successfully rendered (using absolute width and height). I couldn't find it documented anywhere outside of the Maps API, but map.checkResize() won't work anymore. Instead, you have to fire a resize event by calling google.maps.event.trigger(map, 'resize'). If the map is not initialized inside of a function bound to a tabsshow event, the map itself is rendered correctly but the controls are not - most are just plain missing. So, here are my questions: Does anyone else have experience accomplishing this same feat? If so, how did you figure out what would actually work, since the documented tricks don't work for Maps API v3? What about loading tab content using Ajax as per the jQuery UI docs? I haven't had a chance to play around with it but my guess is that it's going to break Maps even more. What are the chances of getting it to work (or is it not worth trying)? How do I make the map fill the largest possible area? I'd like it to fill the tab and adapt to page resizes, much in the way that it's done over at maps.google.com. But, as I said, I appear to be stuck with applying only absolute width and height CSS to the map div. Sorry if this was long-winded but this might be the only documentation for Maps API v3 + jQuery tabs. Cheers!

    Read the article

  • threaded serial port IOException when writing

    - by John McDonald
    Hi, I'm trying to write a small application that simply reads data from a socket, extracts some information (two integers) from the data and sends the extracted information off on a serial port. The idea is that it should start and just keep going. In short, it works, but not for long. After a consistently short period I start to receive IOExceptions and socket receive buffer is swamped. The thread framework has been taken from the MSDN serial port example. The delay in send(), readThread.Join(), is an effort to delay read() in order to allow serial port interrupt processing a chance to occur, but I think I've misinterpreted the join function. I either need to sync the processes more effectively or throw some data away as it comes in off the socket, which would be fine. The integer data is controlling a pan tilt unit and I'm sure four times a second would be acceptable, but not sure on how to best acheive either, any ideas would be greatly appreciated, cheers. using System; using System.Collections.Generic; using System.Text; using System.IO.Ports; using System.Threading; using System.Net; using System.Net.Sockets; using System.IO; namespace ConsoleApplication1 { class Program { static bool _continue; static SerialPort _serialPort; static Thread readThread; static Thread sendThread; static String sendString; static Socket s; static int byteCount; static Byte[] bytesReceived; // synchronise send and receive threads static bool dataReceived; const int FIONREAD = 0x4004667F; static void Main(string[] args) { dataReceived = false; readThread = new Thread(Read); sendThread = new Thread(Send); bytesReceived = new Byte[16384]; // Create a new SerialPort object with default settings. _serialPort = new SerialPort("COM4", 38400, Parity.None, 8, StopBits.One); // Set the read/write timeouts _serialPort.WriteTimeout = 500; _serialPort.Open(); string moveMode = "CV "; _serialPort.WriteLine(moveMode); s = null; IPHostEntry hostEntry = Dns.GetHostEntry("localhost"); foreach (IPAddress address in hostEntry.AddressList) { IPEndPoint ipe = new IPEndPoint(address, 10001); Socket tempSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp); tempSocket.Connect(ipe); if (tempSocket.Connected) { s = tempSocket; s.ReceiveBufferSize = 16384; break; } else { continue; } } readThread.Start(); sendThread.Start(); while (_continue) { Thread.Sleep(10); ;// Console.WriteLine("main..."); } readThread.Join(); _serialPort.Close(); s.Close(); } public static void Read() { while (_continue) { try { //Console.WriteLine("Read"); if (!dataReceived) { byte[] outValue = BitConverter.GetBytes(0); // Check how many bytes have been received. s.IOControl(FIONREAD, null, outValue); uint bytesAvailable = BitConverter.ToUInt32(outValue, 0); if (bytesAvailable > 0) { Console.WriteLine("Read thread..." + bytesAvailable); byteCount = s.Receive(bytesReceived); string str = Encoding.ASCII.GetString(bytesReceived); //str = Encoding::UTF8->GetString( bytesReceived ); string[] split = str.Split(new Char[] { '\t', '\r', '\n' }); string filteredX = (split.GetValue(7)).ToString(); string filteredY = (split.GetValue(8)).ToString(); string[] AzSplit = filteredX.Split(new Char[] { '.' }); filteredX = (AzSplit.GetValue(0)).ToString(); string[] ElSplit = filteredY.Split(new Char[] { '.' 
}); filteredY = (ElSplit.GetValue(0)).ToString(); // scale values int x = (int)(Convert.ToInt32(filteredX) * 1.9); string scaledAz = x.ToString(); int y = (int)(Convert.ToInt32(filteredY) * 1.9); string scaledEl = y.ToString(); String moveAz = "PS" + scaledAz + " "; String moveEl = "TS" + scaledEl + " "; sendString = moveAz + moveEl; dataReceived = true; } } } catch (TimeoutException) {Console.WriteLine("timeout exception");} catch (NullReferenceException) {Console.WriteLine("Read NULL reference exception");} } } public static void Send() { while (_continue) { try { if (dataReceived) { // sleep Read() thread to allow serial port interrupt processing readThread.Join(100); // send command to PTU dataReceived = false; Console.WriteLine(sendString); _serialPort.WriteLine(sendString); } } catch (TimeoutException) { Console.WriteLine("Timeout exception"); } catch (IOException) { Console.WriteLine("IOException exception"); } catch (NullReferenceException) { Console.WriteLine("Send NULL reference exception"); } } } } }
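    One way to restructure the synchronisation described above is to replace the shared dataReceived flag and the readThread.Join(100) call with a small producer/consumer queue: the socket thread enqueues the latest pan/tilt command and the serial thread blocks until one arrives. The class below is an illustrative sketch only (it works on .NET 2.0, no ConcurrentQueue needed; names are made up), and clearing the queue before enqueueing keeps only the most recent command so the serial port is never asked to catch up on a backlog:

        using System.Collections.Generic;
        using System.Threading;

        // Illustrative blocking queue: Read() enqueues the latest command, Send() waits for one.
        class CommandQueue
        {
            private readonly Queue<string> _queue = new Queue<string>();
            private readonly object _sync = new object();

            // Called from the socket Read() thread
            public void Enqueue(string command)
            {
                lock (_sync)
                {
                    _queue.Clear();           // keep only the most recent pan/tilt command
                    _queue.Enqueue(command);
                    Monitor.Pulse(_sync);     // wake the serial Send() thread
                }
            }

            // Called from the serial Send() thread
            public string Dequeue()
            {
                lock (_sync)
                {
                    while (_queue.Count == 0)
                        Monitor.Wait(_sync);  // sleep until a command arrives
                    return _queue.Dequeue();
                }
            }
        }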

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hung at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid as soon as I try to back that data up again I will be back at square one. So let me sum things up: Here is what I've done so far to troubleshoot this server: Deleted and recreated the RAID 5 sets. Initialized the drives. Reloaded the server with a fresh Server 2003 install. Confirmed with Dell that I have installed the latest, Dell approved BIOS and NIC drivers. Uninstalled / reinstalled the Backup Exec Remote Agent. Uninstalled the Trend Micro A/V client. Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up. Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem. Help confirm or deny the following assumptions: There are two problems at work here. Why the server is locking up in the first place, and why the server won't boot cleanly after a lockup. This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a Repair installation. This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here. Any help is appreciated. The irony is almost too much to bear. Backing up my data is what is jeopardizing it.

    Read the article

  • PHP - XML Feed: get and print values

    - by danit
    Here is my feed: <entry> <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')</id> <title type="text">Building Web Applications with Microsoft SQL Azure</title> <summary type="text">SQL Azure provides a highly available and scalable relational database engine in the cloud. In this demo-intensive and interactive session, learn how to quickly build web applications with SQL Azure Databases and familiar web technologies. We demonstrate how you can quickly provision, build and populate a new SQL Azure database directly from your web browser. Also, see firsthand several new enhancements we are adding to SQL Azure based on the feedback we&#x2019;ve received from the community since launching the service earlier this year.</summary> <published>2010-01-25T00:00:00-05:00</published> <updated>2010-03-05T01:07:05-05:00</updated> <author> <name /> </author> <link rel="edit" title="Session" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Speakers" type="application/atom+xml;type=feed" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers"> <m:inline> <feed> <title type="text">Speakers</title> <id>http://api.visitmix.com/OData.svc/Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers</id> <updated>2010-03-25T11:56:06Z</updated> <link rel="self" title="Speakers" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Speakers" /> <entry> <id>http://api.visitmix.com/OData.svc/Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')</id> <title type="text">David-Robinson</title> <summary type="text"></summary> <updated>2010-03-25T11:56:06Z</updated> <author> <name>David Robinson</name> </author> <link rel="edit-media" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/$value" /> <link rel="edit" title="Speaker" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Sessions" type="application/atom+xml;type=feed" title="Sessions" href="Speakers(guid'3395ee85-d994-423c-a726-76b60a896d2a')/Sessions" /> <category term="EventModel.Speaker" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="image/jpeg" src="http://live.visitmix.com/Content/images/speakers/lrg/default.jpg" /> <m:properties xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"> <d:SpeakerID m:type="Edm.Guid">3395ee85-d994-423c-a726-76b60a896d2a</d:SpeakerID> <d:SpeakerFirstName>David</d:SpeakerFirstName> <d:SpeakerLastName>Robinson</d:SpeakerLastName> <d:LargeImage m:null="true"></d:LargeImage> <d:SmallImage m:null="true"></d:SmallImage> <d:Twitter m:null="true"></d:Twitter> </m:properties> </entry> </feed> </m:inline> </link> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Tags" type="application/atom+xml;type=feed" title="Tags" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Tags" /> <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Files" type="application/atom+xml;type=feed" title="Files" href="Sessions(guid'816995df-b09a-447a-9391-019512f643a0')/Files" /> <category term="EventModel.Session" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:SessionID m:type="Edm.Guid">816995df-b09a-447a-9391-019512f643a0</d:SessionID> 
<d:Location>Breakers L</d:Location> <d:Type>Seminar</d:Type> <d:Code>SVC07</d:Code> <d:StartTime m:type="Edm.DateTime">2010-03-17T12:00:00</d:StartTime> <d:EndTime m:type="Edm.DateTime">2010-03-17T13:00:00</d:EndTime> <d:Slug>SVC07</d:Slug> <d:CreatedDate m:type="Edm.DateTime">2010-01-26T18:14:24.687</d:CreatedDate> <d:SourceID m:type="Edm.Guid">cddca9b7-6830-4d06-af93-5fd87afb67b0</d:SourceID> </m:properties> </content> </entry> I want to print the Session Title (Building Web Applications with Microsoft SQL Azure), the Author (David Robinson) and the Location (Breakers L), and display the speaker's image (http://live.visitmix.com/Content/images/speakers/lrg/default.jpg). I presume I can use file_get_contents() and then turn the result into SimpleXML with simplexml_load_string(), but I don't know how to get at the deeper items I want, like the Author and the image. Any chance of a bit of coding genius here?

    Read the article

  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP & Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium) either 64-bit or 32-bit or in Ubuntu 10.04 (kernel 2.6.32-23-generic). I'm thinking this must be a Windows 7 driver problem because the enclosure works in XP & Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported using the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no? FWIW, I have also tried using 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7. So it doesn't appear to be specific to one hardware platform. I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment though an RMA seems pointless since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to place a bug report with Microsoft since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-) I'll finish up with some more details about the problem. When I connect the enclosure using USB to Windows 7 at first it appears everything worked. Windows detects the drive and installs a driver for it. Looking in Device Manager there is an entry under the Hard Drives section with the title, Hitachi HDT721010SLA360 USB Device. When you open Windows Disk Management the first time after the enclosure has been attached the drive appears as "Not initialize" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 GB drive and I don't want to lose the data on it). Except for this first time, the drive never shows up in Disk Management again unless I uninstall the Hitachi HDT721010SLA360 USB Device entry under Hard Drives, unplug, and then replug the enclosure. If I do that then the process in the previous paragraph repeats. In Ubuntu the enclosure never shows up at all at the file system level. Below are an excerpt from kern.log and an excerpt from the result of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recongnizes the enclosure and is attempting to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not. 
excerpt from kern.log [ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22 [ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice [ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices [ 2684.395570] usb-storage: device found at 22 [ 2684.395572] usb-storage: waiting for device to settle before scanning [ 2689.390412] usb-storage: device scan complete [ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4 [ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0 [ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) [ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off [ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00 [ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412009] sde: sde1 sde2 [ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk [ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2707.740014] usb 1-2: device descriptor read/64, error -110 [ 2722.970103] usb 1-2: device descriptor read/64, error -110 [ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2738.320019] usb 1-2: device descriptor read/64, error -110 [ 2753.550024] usb 1-2: device descriptor read/64, error -110 [ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2758.810147] usb 1-2: device descriptor read/8, error -110 [ 2763.940142] usb 1-2: device descriptor read/8, error -110 [ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2769.200141] usb 1-2: device descriptor read/8, error -110 [ 2774.330137] usb 1-2: device descriptor read/8, error -110 [ 2774.440069] usb 1-2: USB disconnect, address 22 [ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery [ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23 [ 2789.710020] usb 1-2: device descriptor read/64, error -110 [ 2804.940020] usb 1-2: device descriptor read/64, error -110 [ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24 [ 2820.290019] usb 1-2: device descriptor read/64, error -110 [ 2835.520027] usb 1-2: device descriptor read/64, error -110 [ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25 [ 2840.780085] usb 1-2: device descriptor read/8, error -110 [ 2845.910079] usb 1-2: device descriptor read/8, error -110 [ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26 [ 2851.170112] usb 1-2: device descriptor read/8, error -110 [ 2856.300077] usb 1-2: device descriptor read/8, error -110 [ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2 [ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11 [ 2871.850017] usb 3-2: device descriptor read/64, error -110 [ 2887.080014] usb 3-2: device descriptor read/64, error -110 [ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12 [ 2902.430021] usb 3-2: device descriptor read/64, error -110 [ 2917.660013] usb 3-2: device descriptor read/64, error -110 [ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13 [ 2922.911623] usb 3-2: device descriptor read/8, error -110 [ 2928.051753] usb 3-2: device descriptor read/8, error -110 [ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and 
address 14 [ 2933.301876] usb 3-2: device descriptor read/8, error -110 [ 2938.431993] usb 3-2: device descriptor read/8, error -110 [ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2 excerpt from lsusb -v Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0dc4 Macpower Peripherals, Ltd idProduct 0x0000 bcdDevice 0.01 iManufacturer 1 EZ QUEST iProduct 2 USB Mass Storage iSerial 3 220417 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 5 Config0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 4 Interface0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Update: Results using Firewire to connect. Today I recieved a 1394b 9 pin to 1394a 6 pin cable which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler which shows Firmware Revision: 0x107. Bottom line: It's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu ... but really Windows 7 is my biggest concern). OXFORD IDE Device 1: Manufacturer: EZ QUEST Model: 0x0 GUID: 0x1D202E0220417 Maximum Speed: Up to 800 Mb/sec Connection Speed: Up to 400 Mb/sec Sub-units: OXFORD IDE Device 1 Unit: Unit Software Version: 0x10483 Unit Spec ID: 0x609E Firmware Revision: 0x107 Product Revision Level: ST6O Sub-units: OXFORD IDE Device 1 SBP-LUN: Capacity: 1 TB (1,000,204,886,016 bytes) Removable Media: Yes BSD Name: disk3 Partition Map Type: MBR (Master Boot Record) S.M.A.R.T. status: Not Supported

    Read the article

  • What is the right tool to detect VMT or heap corruption in Delphi ?

    - by Roland Bengtsson
    I'm a member in a team that use Delphi 2007 for a larger application and we suspect heap corruption because sometimes there are strange bugs that have no other explanation. I believe that the Rangechecking option for the compiler is only for arrays. I want a tool that give an exception or log when there is a write on a memory address that is not allocated by the application. Regards EDIT: The error is of type: Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD EDIT2: Thanks for all suggestions. Unfortunately I think that the solution is deeper than that. We use a patched version of Bold for Delphi as we own the source. Probably there are some errors introduced in the Bold framwork. Yes we have a log with callstacks that are handled by JCL and also trace messages. So a callstack with the exception can lock like this: 20091210 16:02:29 (2356) [EXCEPTION] Raised EBold: Failed to derive ServerSession.mayDropSession: Boolean OCL expression: not active and not idle and timeout and (ApplicationKernel.allinstances->first.CurrentSession <> self) Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD. At Location BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016) Inner Exception Raised EBold: Failed to derive ServerSession.mayDropSession: Boolean OCL expression: not active and not idle and timeout and (ApplicationKernel.allinstances->first.CurrentSession <> self) Error: Access violation at address 00404E78 in module 'BoatLogisticsAMCAttracsServer.exe'. Read of address FFFFFFDD. At Location BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016) Inner Exception Call Stack: [00] System.TObject.InheritsFrom (sys\system.pas:9237) Call Stack: [00] BoldSystem.TBoldMember.CalculateDerivedMemberWithExpression (BoldSystem.pas:4016) [01] BoldSystem.TBoldMember.DeriveMember (BoldSystem.pas:3846) [02] BoldSystem.TBoldMemberDeriver.DoDeriveAndSubscribe (BoldSystem.pas:7491) [03] BoldDeriver.TBoldAbstractDeriver.DeriveAndSubscribe (BoldDeriver.pas:180) [04] BoldDeriver.TBoldAbstractDeriver.SetDeriverState (BoldDeriver.pas:262) [05] BoldDeriver.TBoldAbstractDeriver.Derive (BoldDeriver.pas:117) [06] BoldDeriver.TBoldAbstractDeriver.EnsureCurrent (BoldDeriver.pas:196) [07] BoldSystem.TBoldMember.EnsureContentsCurrent (BoldSystem.pas:4245) [08] BoldSystem.TBoldAttribute.EnsureNotNull (BoldSystem.pas:4813) [09] BoldAttributes.TBABoolean.GetAsBoolean (BoldAttributes.pas:3069) [10] BusinessClasses.TLogonSession._GetMayDropSession (code\BusinessClasses.pas:31854) [11] DMAttracsTimers.TAttracsTimerDataModule.RemoveDanglingLogonSessions (code\DMAttracsTimers.pas:237) [12] DMAttracsTimers.TAttracsTimerDataModule.UpdateServerTimeOnTimerTrig (code\DMAttracsTimers.pas:482) [13] DMAttracsTimers.TAttracsTimerDataModule.TimerKernelWork (code\DMAttracsTimers.pas:551) [14] DMAttracsTimers.TAttracsTimerDataModule.AttracsTimerTimer (code\DMAttracsTimers.pas:600) [15] ExtCtrls.TTimer.Timer (ExtCtrls.pas:2281) [16] Classes.StdWndProc (common\Classes.pas:11583) The inner exception part is the callstack at the moment an exception is reraised. EDIT3: The theory right now is that the Virtual Memory Table (VMT) is somehow broken. When this happen there is no indication of it. Only when a method is called an exception is raised (ALWAYS on address FFFFFFDD, -35 decimal) but then it is too late. You don't know the real cause for the error. 
Any hint on how to catch a bug like this is really appreciated!!! We have tried SafeMM, but the problem is that the memory consumption is too high even when the 3 GB flag is used. So now I'm trying to give a bounty to the SO community :) EDIT4: One hint is that according to the log there is often (or even always) another exception before this one. It can be, for example, optimistic locking in the database. We have tried to raise exceptions by force, but in the test environment it just works fine. EDIT5: The story continues... I searched the logs for the last 30 days. The result: "Read of address FFFFFFDB" 0, "Read of address FFFFFFDC" 24, "Read of address FFFFFFDD" 270, "Read of address FFFFFFDE" 22, "Read of address FFFFFFDF" 7, "Read of address FFFFFFE0" 20, "Read of address FFFFFFE1" 0. So the current theory is that an enum (there are lots of them in Bold) overwrites a pointer. I got 5 hits with different addresses above. That could mean the enum holds 5 values, of which the second is used most often. If there is an exception, a rollback should occur in the database and the Bold objects should be destroyed. Maybe there is a chance that not everything is destroyed and an enum can still write to an address location. If this is true, maybe it is possible to search the code with a regular expression for an enum with 5 values? EDIT6: To summarize: no, there is no solution to the problem yet. I realize that I may have misled you a bit with the callstack. Yes, there is a timer in that one, but there are other callstacks without a timer. Sorry for that. But there are 2 common factors: an exception with "Read of address FFFFFFxx", and the top of the callstack is System.TObject.InheritsFrom (sys\system.pas:9237). This convinces me that VilleK describes the problem best. I'm also convinced that the problem is somewhere in the Bold framework. But the BIG question is, how can problems like this be solved? It is not enough to have an Assert as VilleK suggests, since the damage has already happened and the callstack is gone at that moment. So, to describe my view of what may cause the error: somewhere a pointer is assigned a bad value (1, but it could also be 0, 2, 3, etc.). An object is assigned to that pointer. A method is called through the object's base class. This causes TObject.InheritsFrom to be called, and an exception appears at address FFFFFFDD. Those 3 events can sit next to each other in the code, but they may also happen much later; I think that is true for the last method call. EDIT7: We work closely with the author of Bold, Jan Norden, and he recently found a bug in the OCL evaluator in the Bold framework. When this was fixed these kinds of exceptions decreased a lot, but they still occasionally occur. It is a big relief, though, that this is almost solved.

    Read the article

  • What strategy do you use for package naming in Java projects and why?

    - by Tim Visher
    I thought about this awhile ago and it recently resurfaced as my shop is doing its first real Java web app. As an intro, I see two main package naming strategies. (To be clear, I'm not referring to the whole 'domain.company.project' part of this, I'm talking about the package convention beneath that.) Anyway, the package naming conventions that I see are as follows: Functional: Naming your packages according to their function architecturally rather than their identity according to the business domain. Another term for this might be naming according to 'layer'. So, you'd have a *.ui package and a *.domain package and a *.orm package. Your packages are horizontal slices rather than vertical. This is much more common than logical naming. In fact, I don't believe I've ever seen or heard of a project that does this. This of course makes me leery (sort of like thinking that you've come up with a solution to an NP problem) as I'm not terribly smart and I assume everyone must have great reasons for doing it the way they do. On the other hand, I'm not opposed to people just missing the elephant in the room and I've never heard a an actual argument for doing package naming this way. It just seems to be the de facto standard. Logical: Naming your packages according to their business domain identity and putting every class that has to do with that vertical slice of functionality into that package. I have never seen or heard of this, as I mentioned before, but it makes a ton of sense to me. I tend to approach systems vertically rather than horizontally. I want to go in and develop the Order Processing system, not the data access layer. Obviously, there's a good chance that I'll touch the data access layer in the development of that system, but the point is that I don't think of it that way. What this means, of course, is that when I receive a change order or want to implement some new feature, it'd be nice to not have to go fishing around in a bunch of packages in order to find all the related classes. Instead, I just look in the X package because what I'm doing has to do with X. From a development standpoint, I see it as a major win to have your packages document your business domain rather than your architecture. I feel like the domain is almost always the part of the system that's harder to grok where as the system's architecture, especially at this point, is almost becoming mundane in its implementation. The fact that I can come to a system with this type of naming convention and instantly from the naming of the packages know that it deals with orders, customers, enterprises, products, etc. seems pretty darn handy. It seems like this would allow you to take much better advantage of Java's access modifiers. This allows you to much more cleanly define interfaces into subsystems rather than into layers of the system. So if you have an orders subsystem that you want to be transparently persistent, you could in theory just never let anything else know that it's persistent by not having to create public interfaces to its persistence classes in the dao layer and instead packaging the dao class in with only the classes it deals with. Obviously, if you wanted to expose this functionality, you could provide an interface for it or make it public. It just seems like you lose a lot of this by having a vertical slice of your system's features split across multiple packages. I suppose one disadvantage that I can see is that it does make ripping out layers a little bit more difficult. 
Instead of just deleting or renaming a package and then dropping a new one in place with an alternate technology, you have to go in and change all of the classes in all of the packages. However, I don't see this as a big deal. It may be from a lack of experience, but I have to imagine that the number of times you swap out technologies pales in comparison to the number of times you go in and edit vertical feature slices within your system. So I guess the question goes out to you: how do you name your packages, and why? Please understand that I don't necessarily think that I've stumbled onto the golden goose or something here. I'm pretty new to all this, with mostly academic experience. However, I can't spot the holes in my reasoning, so I'm hoping you all can so that I can move on. Thanks in advance!
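To make the access-modifier argument concrete, here is a small, purely hypothetical sketch (package and class names invented) of the logical, package-by-feature layout described above: the DAO has default (package-private) visibility, so only the orders package can reach it, while the rest of the system sees nothing but the public service.

```java
// file: com/example/app/orders/OrderService.java
package com.example.app.orders;

// Public entry point of the "orders" feature.
public class OrderService {
    private final OrderDao dao = new OrderDao();   // legal: same package

    public void placeOrder(String productCode) {
        dao.save(productCode);
    }
}

// file: com/example/app/orders/OrderDao.java
package com.example.app.orders;

// Package-private: invisible outside com.example.app.orders, so the
// persistence mechanism never leaks into the rest of the system.
class OrderDao {
    void save(String productCode) {
        // persistence details hidden here
    }
}
```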

    Read the article

  • Can't connect to Samba

    - by Rick
    Windows 7, connecting to Samba shares I have a follow up question from the link above. I am running Samba 3.0.23d on FreeBSD is release 7.1 I changed the policies as described above but still cannot connect to the samba server with the windows 7 or a server 2008. I feel it is a problem with recognizing the new machines on the network. the windows machines can see the samba server, but cannot connect to it or view any of the files. After changing the security policies the samba server asked for network id and password but would not allow the machine to connect, said they were unknown username or bad password. Here is my current config file. there is no sign of encryption anywhere, should I just add the line? not sure what that would do elsewhere. Workgroup = WWOFFSET server string = WWO File Server (%v) security = server username map = /usr/local/etc/smb.users hosts allow = 10. 127. # If you want to automatically load your printer list rather # than setting them up individually then you'll need this ; load printers = yes # you may wish to override the location of the printcap file ; printcap name = /etc/printcap # on SystemV system setting printcap name to lpstat should allow # you to automatically obtain a printer list from the SystemV spool # system ; printcap name = lpstat # It should not be necessary to specify the print system type unless # it is non-standard. Currently supported print systems include: # bsd, cups, sysv, plp, lprng, aix, hpux, qnx ; printing = cups # Uncomment this if you want a guest account, you must add this to /etc/passwd # otherwise the user "nobody" is used ; guest account = pcguest # this tells Samba to use a separate log file for each machine # that connects log file = /var/log/samba/log.%m # Put a capping on the size of the log files (in Kb). max log size = 50 # Use password server option only with security = server # The argument list may include: # password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name] # or to auto-locate the domain controller/s # password server = * ; password server = <NT-Server-Name> password server = SERVER0 # Use the realm option only with security = ads # Specifies the Active Directory realm the host is part of ; realm = MY_REALM # Backend to store user information in. New installations should # use either tdbsam or ldapsam. smbpasswd is available for backwards # compatibility. tdbsam requires no further configuration. ; passdb backend = tdbsam ; passdb backend = smbpasswd # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting. # Note: Consider carefully the location in the configuration file of # this line. The included file is read at that point. ; include = /usr/local/etc/smb.conf.%m # Most people will find that this option gives better performance. # See the chapter 'Samba performance issues' in the Samba HOWTO Collection # and the manual pages for details. # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 socket options = TCP_NODELAY # Configure Samba to use multiple interfaces # If you have multiple network interfaces then you must list them # here. See the man page for details. ; interfaces = 192.168.12.2/24 192.168.13.2/24 # Browser Control Options: # set local master to no if you don't want Samba to become a master # browser on your network. 
Otherwise the normal election rules apply ; local master = no # OS Level determines the precedence of this server in master browser # elections. The default value should be reasonable ; os level = 33 # Domain Master specifies Samba to be the Domain Master Browser. This # allows Samba to collate browse lists between subnets. Don't use this # if you already have a Windows NT domain controller doing this job ; domain master = yes # Preferred Master causes Samba to force a local browser election on startup # and gives it a slightly higher chance of winning the election ; preferred master = yes # Enable this if you want Samba to be a domain logon server for # Windows95 workstations. ; domain logons = yes # if you enable domain logons then you may want a per-machine or # per user logon script # run a specific logon batch file per workstation (machine) ; logon script = %m.bat # run a specific logon batch file per username ; logon script = %U.bat # Where to store roving profiles (only for Win95 and WinNT) # %L substitutes for this servers netbios name, %U is username # You must uncomment the [Profiles] share below ; logon path = \\%L\Profiles\%U # Windows Internet Name Serving Support Section: # WINS Support - Tells the NMBD component of Samba to enable it's WINS Server ; wins support = yes # WINS Server - Tells the NMBD components of Samba to be a WINS Client # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both ; wins server = w.x.y.z # WINS Proxy - Tells Samba to answer name resolution queries on # behalf of a non WINS capable client, for this to work there must be # at least one WINS Server on the network. The default is NO. ; wins proxy = yes # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names # via DNS nslookups. The default is NO. dns proxy = no # charset settings ; display charset = ASCII ; unix charset = ASCII ; dos charset = ASCII # These scripts are used on a domain controller or stand-alone # machine to add or delete corresponding unix accounts ; add user script = /usr/sbin/useradd %u ; add group script = /usr/sbin/groupadd %g ; add machine script = /usr/sbin/adduser -n -g machines -c Machine -d /dev/null -s /bin/false %u ; delete user script = /usr/sbin/userdel %u ; delete user from group script = /usr/sbin/deluser %u %g ; delete group script = /usr/sbin/groupdel %g unix extensions = no
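On the encryption question raised in the post: the usual direction for Windows 7 (and Server 2008) clients talking to an older Samba is to authenticate locally with encrypted passwords rather than through the security = server pass-through. The following is only a hedged sketch of the [global] directives that typically matter, not a drop-in replacement for the config above; user accounts then have to be added with smbpasswd -a <username>.

```
[global]
    workgroup = WWOFFSET
    # local, encrypted authentication instead of the "security = server" pass-through
    security = user
    encrypt passwords = yes
    passdb backend = tdbsam
```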

    Read the article

  • How to diagnose failing 6Gbps SATA connection?

    - by whitequark
    I have a Samsung RC530 notebook and OCZ Vertex-3 6Gbps SATA SSD working in AHCI mode. # dmesg | grep DMI SAMSUNG ELECTRONICS CO., LTD. RC530/RC730/RC530/RC730, BIOS 03WD.M008.20110927.PSA 09/27/2011 # lspci -nn 00:1f.2 SATA controller [0106]: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller [8086:1c03] (rev 04) # sdparm -a /dev/sda /dev/sda: ATA OCZ-VERTEX3 2.15 At the boot, the following messages are present in dmesg (I am running Debian wheezy @ Linux 3.2.8): # dmesg | grep -iE '(ata|ahci)' [ 5.179783] ahci 0000:00:1f.2: version 3.0 [ 5.179802] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 5.179864] ahci 0000:00:1f.2: irq 42 for MSI/MSI-X [ 5.195424] ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x5 impl SATA mode [ 5.195429] ahci 0000:00:1f.2: flags: 64bit ncq sntf pm led clo pio slum part ems apst [ 5.195436] ahci 0000:00:1f.2: setting latency timer to 64 [ 5.204035] scsi0 : ahci [ 5.204301] scsi1 : ahci [ 5.204447] scsi2 : ahci [ 5.204592] scsi3 : ahci [ 5.204682] scsi4 : ahci [ 5.204799] scsi5 : ahci [ 5.204917] ata1: SATA max UDMA/133 abar m2048@0xf7c06000 port 0xf7c06100 irq 42 [ 5.204920] ata2: DUMMY [ 5.204923] ata3: SATA max UDMA/133 abar m2048@0xf7c06000 port 0xf7c06200 irq 42 [ 5.204924] ata4: DUMMY [ 5.204926] ata5: DUMMY [ 5.204927] ata6: DUMMY [ 5.523039] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 5.525911] ata3.00: ATAPI: TSSTcorp CDDVDW SN-208BB, SC00, max UDMA/100 [ 5.531006] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) [ 5.533703] ata3.00: configured for UDMA/100 [ 5.542790] ata1.00: ATA-8: OCZ-VERTEX3, 2.15, max UDMA/133 [ 5.542800] ata1.00: 117231408 sectors, multi 16: LBA48 NCQ (depth 31/32), AA [ 5.552751] ata1.00: configured for UDMA/133 [ 5.553050] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX3 2.15 PQ: 0 ANSI: 5 [ 5.559621] scsi 2:0:0:0: CD-ROM TSSTcorp CDDVDW SN-208BB SC00 PQ: 0 ANSI: 5 [ 5.564059] sd 0:0:0:0: [sda] 117231408 512-byte logical blocks: (60.0 GB/55.8 GiB) [ 5.564127] sd 0:0:0:0: [sda] Write Protect is off [ 5.564131] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 5.564158] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 5.564582] sda: sda1 [ 5.564810] sd 0:0:0:0: [sda] Attached SCSI disk [ 5.572006] sr0: scsi3-mmc drive: 16x/24x writer dvd-ram cd/rw xa/form2 cdda tray [ 5.572010] cdrom: Uniform CD-ROM driver Revision: 3.20 [ 5.572189] sr 2:0:0:0: Attached scsi CD-ROM sr0 [ 6.717181] ata1.00: exception Emask 0x50 SAct 0x1 SErr 0x280900 action 0x6 frozen [ 6.717238] ata1.00: irq_stat 0x08000000, interface fatal error [ 6.717291] ata1: SError: { UnrecovData HostInt 10B8B BadCRC } [ 6.717342] ata1.00: failed command: READ FPDMA QUEUED [ 6.717395] ata1.00: cmd 60/50:00:20:39:58/00:00:00:00:00/40 tag 0 ncq 40960 in [ 6.717396] res 40/00:00:20:39:58/00:00:00:00:00/40 Emask 0x50 (ATA bus error) [ 6.717503] ata1.00: status: { DRDY } [ 6.717553] ata1: hard resetting link [ 7.033417] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) [ 7.055234] ata1.00: configured for UDMA/133 [ 7.055262] ata1: EH complete [ 7.147280] ata1.00: exception Emask 0x10 SAct 0xf8 SErr 0x280100 action 0x6 frozen [ 7.147340] ata1.00: irq_stat 0x08000000, interface fatal error [ 7.147393] ata1: SError: { UnrecovData 10B8B BadCRC } [ 7.147460] ata1.00: failed command: READ FPDMA QUEUED [ 7.147529] ata1.00: cmd 60/08:18:88:17:41/00:00:02:00:00/40 tag 3 ncq 4096 in [ 7.147531] res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.147691] 
ata1.00: status: { DRDY } [ 7.147754] ata1.00: failed command: READ FPDMA QUEUED [ 7.147821] ata1.00: cmd 60/00:20:f8:42:4c/01:00:02:00:00/40 tag 4 ncq 131072 in [ 7.147822] res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.147977] ata1.00: status: { DRDY } [ 7.148036] ata1.00: failed command: READ FPDMA QUEUED [ 7.148100] ata1.00: cmd 60/50:28:f8:43:4c/00:00:02:00:00/40 tag 5 ncq 40960 in [ 7.148101] res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.148255] ata1.00: status: { DRDY } [ 7.148315] ata1.00: failed command: READ FPDMA QUEUED [ 7.148379] ata1.00: cmd 60/00:30:50:98:64/01:00:02:00:00/40 tag 6 ncq 131072 in [ 7.148380] res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.148534] ata1.00: status: { DRDY } [ 7.148593] ata1.00: failed command: READ FPDMA QUEUED [ 7.148657] ata1.00: cmd 60/00:38:50:99:64/01:00:02:00:00/40 tag 7 ncq 131072 in [ 7.148658] res 40/00:38:50:99:64/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.148813] ata1.00: status: { DRDY } [ 7.148875] ata1: hard resetting link [ 7.464842] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) [ 7.486794] ata1.00: configured for UDMA/133 [ 7.486822] ata1: EH complete [ 7.546395] ata1.00: exception Emask 0x10 SAct 0x2f SErr 0x280100 action 0x6 frozen [ 7.546470] ata1.00: irq_stat 0x08000000, interface fatal error [ 7.546531] ata1: SError: { UnrecovData 10B8B BadCRC } [ 7.546588] ata1.00: failed command: READ FPDMA QUEUED [ 7.546648] ata1.00: cmd 60/00:00:e0:4b:61/01:00:02:00:00/40 tag 0 ncq 131072 in [ 7.546649] res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.546794] ata1.00: status: { DRDY } [ 7.546847] ata1.00: failed command: READ FPDMA QUEUED [ 7.546906] ata1.00: cmd 60/00:08:90:2f:48/01:00:02:00:00/40 tag 1 ncq 131072 in [ 7.546907] res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.547053] ata1.00: status: { DRDY } [ 7.547106] ata1.00: failed command: READ FPDMA QUEUED [ 7.547165] ata1.00: cmd 60/00:10:90:30:48/01:00:02:00:00/40 tag 2 ncq 131072 in [ 7.547166] res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.547310] ata1.00: status: { DRDY } [ 7.547363] ata1.00: failed command: READ FPDMA QUEUED [ 7.547422] ata1.00: cmd 60/00:18:50:c7:64/01:00:02:00:00/40 tag 3 ncq 131072 in [ 7.547423] res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.547568] ata1.00: status: { DRDY } [ 7.547621] ata1.00: failed command: READ FPDMA QUEUED [ 7.547681] ata1.00: cmd 60/00:28:e0:4c:61/01:00:02:00:00/40 tag 5 ncq 131072 in [ 7.547682] res 40/00:28:e0:4c:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.547825] ata1.00: status: { DRDY } [ 7.547882] ata1: hard resetting link [ 7.864408] ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) [ 7.886351] ata1.00: configured for UDMA/133 [ 7.886375] ata1: EH complete [ 7.890012] ata1: limiting SATA link speed to 3.0 Gbps [ 7.890016] ata1.00: exception Emask 0x10 SAct 0x7 SErr 0x280100 action 0x6 frozen [ 7.890093] ata1.00: irq_stat 0x08000000, interface fatal error [ 7.890152] ata1: SError: { UnrecovData 10B8B BadCRC } [ 7.890210] ata1.00: failed command: READ FPDMA QUEUED [ 7.890272] ata1.00: cmd 60/00:00:90:33:48/01:00:02:00:00/40 tag 0 ncq 131072 in [ 7.890273] res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.890418] ata1.00: status: { DRDY } [ 7.890472] ata1.00: failed command: READ FPDMA QUEUED [ 7.890530] ata1.00: cmd 60/00:08:90:34:48/01:00:02:00:00/40 tag 1 ncq 131072 in [ 7.890531] res 
40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.890672] ata1.00: status: { DRDY } [ 7.890724] ata1.00: failed command: READ FPDMA QUEUED [ 7.890781] ata1.00: cmd 60/78:10:e0:4f:61/00:00:02:00:00/40 tag 2 ncq 61440 in [ 7.890782] res 40/00:10:e0:4f:61/00:00:02:00:00/40 Emask 0x10 (ATA bus error) [ 7.890925] ata1.00: status: { DRDY } [ 7.890981] ata1: hard resetting link [ 8.208021] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320) [ 8.230100] ata1.00: configured for UDMA/133 [ 8.230124] ata1: EH complete Looks like the SATA interface tries to bring the link up at 6 Gbps, fails, and Linux falls back to 3 Gbps. This is somewhat fine for me, as the system boots successfully each time and works under high load (cd linux-3.2.8; make -j16). I've also run memtest86+ and it did not find any errors. What concerns me more is that GRUB sometimes takes a long time to load the images and/or fails to load itself completely. The failure is consistent in nature but probabilistic: each time I boot there is a certain chance it will fail. Actually, I have a slight suspicion about the cause of the failure. Look at the cabling: what kind of engineer does it this way? Nah. Even 1 Gbps Ethernet barely tolerates cables bent this sharply, and this is 6 Gbps SATA. How could I determine and fix the cause of the errors, and/or switch the link to 3 Gbps mode permanently?
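On the final question about pinning the link at 3 Gbps: the libata.force boot parameter is the usual way to do that permanently instead of letting the kernel retrain through the errors on every boot. A hedged example for Debian's GRUB 2 follows; the 1: prefix targets ata1, which is where the SSD appears in the dmesg above, and update-grub has to be run afterwards.

```
# /etc/default/grub  (then run: update-grub && reboot)
GRUB_CMDLINE_LINUX_DEFAULT="quiet libata.force=1:3.0Gbps"
```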

    Read the article

  • Windows Server 2008 R2 network adapter stops working, requires hard reboot

    - by Geoff Dalgas
    TL;DR version: Turns out this was a Windows Server 2008 R2 kernel networking bug. After siccing Microsoft support on it, we (eventually) got an unpublished kernel hotfix from Microsoft to address it. If you, too, are experiencing mysterious low-level network driver failures requiring a reboot/bluescreen cycle, you might want that hotfix (or maybe Service Pack 1 whenever it is released, too.) We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two linux instances to provide a failover. Each server has with their own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP: 69.59.196.211 The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the windows servers behind them and we use ip_forwarding to route traffic. We are experiencing an occasional network outage on one of our windows servers behind our linux gateways. HAProxy will detect the server is offline which we can verify by remoting to the failed server and attempting to ping the gateway: Pinging 69.59.196.211 with 32 bytes of data: Reply from 69.59.196.220: Destination host unreachable. Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211): Interface: 69.59.196.220 --- 0xa Internet Address Physical Address Type 69.59.196.161 00-26-88-63-c7-80 dynamic 69.59.196.210 00-15-5d-0a-3e-0e dynamic 69.59.196.212 00-21-5e-4d-45-c9 dynamic 69.59.196.213 00-15-5d-00-b2-0d dynamic 69.59.196.215 00-21-5e-4d-61-1a dynamic 69.59.196.217 00-21-5e-4d-2c-e8 dynamic 69.59.196.219 00-21-5e-4d-38-e5 dynamic 69.59.196.221 00-15-5d-00-b2-0d dynamic 69.59.196.222 00-15-5d-0a-3e-09 dynamic 69.59.196.223 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 225.0.0.1 01-00-5e-00-00-01 static On our linux gateway instances arp -a shows: peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take help resolve this issue? THINGS WE HAVE TRIED I added a static arp entry for testing on one of the linux gateways which still didn't help. 
root@haproxy2:~# arp -a peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1 peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1 stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1 peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1 peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1 peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1 peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1 root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d root@haproxy2:~# ping 69.59.196.220 PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data. --- 69.59.196.220 ping statistics --- 7 packets transmitted, 0 received, 100% packet loss, time 6006ms Rebooting the windows web server solves this issue temporarily with no other changes to the network but our experience shows this issue will come back. Swapping network cards and switches I noticed the link light on the port of the switch for the failed windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable with the same result. I tried changing the properties of the network card in windows and the server locked up and required a hard reset after clicking apply. This windows server has two physical network interfaces so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again we will know that it is not an issue with the network card. (We also tried another switch we have on hand, no change) Changing network hardware driver versions We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships in Windows Server 2008 R2. Replacing network cables As a last ditch effort we remembered another change that occurred was the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green of lengths 1ft - 3ft for the private interfaces and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week ... aaaaaand then the problem recurred. Disable checksum offload, remove TProxy We also tried disabling TCP/IP checksum offload in the driver, no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps. Switch Virtualization providers On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMWare Server. No change. Switch host model We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model: http://en.wikipedia.org/wiki/Host_model http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx We did that, and.. we'll see.
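For completeness, the Windows-side counterpart of the static ARP experiment above is a static neighbor entry on the affected server; the interface name and MAC below are placeholders, since the question does not give the MAC address that eth1:1 answers with.

```
rem placeholder interface name and MAC - substitute the real ones
netsh interface ipv4 add neighbors "Local Area Connection" 69.59.196.211 00-aa-bb-cc-dd-ee
```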

    Read the article

  • How to set up linux watchdog daemon with Intel 6300esb

    - by ACiD GRiM
    I've been searching for this on Google for sometime now and I have yet to find proper documentation on how to connect the kernel driver for my 6300esb watchdog timer to /dev/watchdog and ensure that watchdog daemon is keeping it alive. I am using RHEL compatible Scientific Linux 6.3 in a KVM virtual machine by the way Below is everything I've tried so far: dmesg|grep 6300 i6300ESB timer: Intel 6300ESB WatchDog Timer Driver v0.04 i6300ESB timer: initialized (0xffffc900008b8000). heartbeat=30 sec (nowayout=0) | ll /dev/watchdog crw-rw----. 1 root root 10, 130 Sep 22 22:25 /dev/watchdog | /etc/watchdog.conf #ping = 172.31.14.1 #ping = 172.26.1.255 #interface = eth0 file = /var/log/messages #change = 1407 # Uncomment to enable test. Setting one of these values to '0' disables it. # These values will hopefully never reboot your machine during normal use # (if your machine is really hung, the loadavg will go much higher than 25) max-load-1 = 24 max-load-5 = 18 max-load-15 = 12 # Note that this is the number of pages! # To get the real size, check how large the pagesize is on your machine. #min-memory = 1 #repair-binary = /usr/sbin/repair #test-binary = #test-timeout = watchdog-device = /dev/watchdog # Defaults compiled into the binary #temperature-device = #max-temperature = 120 # Defaults compiled into the binary #admin = root interval = 10 #logtick = 1 # This greatly decreases the chance that watchdog won't be scheduled before # your machine is really loaded realtime = yes priority = 1 # Check if syslogd is still running by enabling the following line #pidfile = /var/run/syslogd.pid Now maybe I'm not testing it correctly, but I would expecting that stopping the watchdog service would cause the /dev/watchdog to time out after 30 seconds and I should see the host reboot, however this does not happen. Also, here is my config for the KVM vm <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit sl6template or other application using the libvirt API. 
--> <domain type='kvm'> <name>sl6template</name> <uuid>960d0ac2-2e6a-5efa-87a3-6bb779e15b6a</uuid> <memory unit='KiB'>262144</memory> <currentMemory unit='KiB'>262144</currentMemory> <vcpu placement='static'>1</vcpu> <os> <type arch='x86_64' machine='rhel6.3.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <cpu mode='custom' match='exact'> <model fallback='allow'>Westmere</model> <vendor>Intel</vendor> <feature policy='require' name='tm2'/> <feature policy='require' name='est'/> <feature policy='require' name='vmx'/> <feature policy='require' name='ds'/> <feature policy='require' name='smx'/> <feature policy='require' name='ss'/> <feature policy='require' name='vme'/> <feature policy='require' name='dtes64'/> <feature policy='require' name='rdtscp'/> <feature policy='require' name='ht'/> <feature policy='require' name='dca'/> <feature policy='require' name='pbe'/> <feature policy='require' name='tm'/> <feature policy='require' name='pdcm'/> <feature policy='require' name='pdpe1gb'/> <feature policy='require' name='ds_cpl'/> <feature policy='require' name='pclmuldq'/> <feature policy='require' name='xtpr'/> <feature policy='require' name='acpi'/> <feature policy='require' name='monitor'/> <feature policy='require' name='aes'/> </cpu> <clock offset='utc'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/mnt/data/vms/sl6template.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <controller type='usb' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:44:57:f6'/> <source bridge='br0.2'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <interface type='bridge'> <mac address='52:54:00:88:0f:42'/> <source bridge='br1'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <watchdog model='i6300esb' action='reset'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </watchdog> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </memballoon> </devices> </domain> Any help is appreciated as the most I've found are patches to kvm and general softdog documentation or IPMI watchdog answers.
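One detail that matters for the test described above: a graceful service watchdog stop lets the daemon write the magic close character ('V') to /dev/watchdog before closing it, which disarms the timer as long as the driver runs with nowayout=0 (exactly what the dmesg line shows), so no reset is expected from a clean stop. A hedged sketch of a harsher test, assuming the stock RHEL watchdog package and that the module option can be applied before the driver loads:

```sh
# Make the timer impossible to disarm once opened (takes effect the next
# time the module is loaded, e.g. after a reboot).
echo "options i6300esb nowayout=1" > /etc/modprobe.d/i6300esb.conf

# Then simulate a hang: kill the daemon so it cannot close /dev/watchdog
# cleanly (a normal "service watchdog stop" writes the magic 'V' and disarms).
service watchdog start
kill -9 $(pidof watchdog)
# the guest should reset roughly one heartbeat interval (30 s here) later
```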

    Read the article

  • $_GET loading content before head tag instead of in specified div.

    - by s32ialx
    NOT EDITING BELOW BUT THANKS TO SOME REALLY NICE PEOPLE I CAN'T POST AN IMAGE ANYMORE BECAUSE I HAD a 15 Rep but NOW ONLY A 5 becuase my question wasn't what they wanted help with they gave me a neg rep. The problem is that the content loads it displays UNDER the div i placed #CONTENT# inside so the styles are being ignored and it's posting #CONTENT# outside the divs at positions 0,0 any suggestions? Found out whats happening by using "View Source" seems that it's putting all of the #CONTENT#, content that's being loaded in front of the <head> tag. Like this <doctype...> <div class="home"> \ blah blah #CONTENT# bot being loaded in correct specified area </div> / <head> <script src=""></script> </head> <body> <div class="header"></div> <div class="contents"> #CONTENT# < where content SHOULD load </div> <div class="footer"></div> </body> so anyone got a fix? OK so a better description I'll add relevant screen-shots Whats happening is /* file.class.php */ <?php $file = new file(); class file{ var $path = "templates/clean"; var $ext = "tpl"; function loadfile($filename){ return file_get_contents($this->path . "/" . $filename . "." . $this->ext); } function setcontent($content,$newcontent,$vartoreplace='#CONTENT#'){ $val = str_replace($vartoreplace,$newcontent,$content); return $val; } function p($content) { $v = $content; $v = str_replace('#CONTENT#','',$v); print $v; } } if(!isset($_GET['page'])){ // if not, lets load our index page(you can change home.php to whatever you want: include("main.txt"); // else $_GET['page'] was set so lets do stuff: } else { // lets first check if the file exists: if(file_exists($_GET['page'].'.txt')){ // and lets include that then: include($_GET['page'].'.txt'); // sorry mate, could not find it: } else { echo 'Sorry, could not find <strong>' . $_GET['page'] .'.txt</strong>'; } } ?> is calling for a file_get_contents at the bottom which I use in /* index.php */ <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <?php include('classes/file.class.php'); // load the templates $header = $file->loadfile('header'); $body = $file->loadfile('body'); $footer = $file->loadfile('footer'); // fill body.tpl #CONTENT# slot with $content $body = $file->setcontent($body, $content); // cleanup and output the full page $file->p($header . $body . $footer); ?> and loads into /* body.tpl */ <div id="bodys"> <div id="bodt"></div> <div id="bodm"> <div id="contents"> #CONTENT# </div> </div> <div id="bodb"></div> </div> but the issue is as follows the $content loads properly img tags etc <h2> tags etc but CSS styling is TOTALY ignored for position width z-index etc. and as follows here's the screen-shot My Firefox Showing The Problem In Action REPOSTED DUE TO PEOPLE NOT HELPING AND JUST BEING ARROGANT AND GIVING NEGATIVE VOTES and not even saying a word. DO NOT COMMENT UNLESS YOU PLAN TO HELP god I'm a beginner and with you people giving me bad reviews this won't make me help you out when the chance comes.
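As for the behaviour itself: the include() at the bottom of file.class.php echoes the page the moment the class file is pulled in, which is before index.php ever prints the header/body/footer templates; that is why the text shows up ahead of <head> and nothing lands in #CONTENT#. One common fix is to capture that output with output buffering and hand it to setcontent() as $content. A rough sketch follows, reusing the question's file layout but not the poster's exact code:

```php
<?php
// Replace the bare include logic at the bottom of file.class.php with a
// buffered version, so the page ends up in $content instead of being
// printed straight to the browser before the templates run.
ob_start();
$page = isset($_GET['page']) ? basename($_GET['page']) : 'main';  // basename() also blocks ../ tricks
if (file_exists($page . '.txt')) {
    include $page . '.txt';
} else {
    echo 'Sorry, could not find <strong>' . htmlspecialchars($page) . '.txt</strong>';
}
$content = ob_get_clean();   // everything echoed above, captured as a string

// index.php can then keep doing: $body = $file->setcontent($body, $content);
```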

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector< #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector<, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds. 
The actual Access Violation exception occurs within the std::vector< code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH
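What the stack trace points at: on the 257th push_back, STLport outgrows the current 256-element buffer and asks the allocator for a new 512-element one. allocate() satisfies that request with LocalReAlloc() on the single handle it owns, which frees or moves the very block the vector is still copying its old elements out of, so __copy_trivial() reads from dead memory and faults. (Passing LHND also asks LocalReAlloc for a moveable block, so the return value is not necessarily a usable pointer at all.) Below is a sketch, not a drop-in fix, of an allocator shape that avoids the crash by giving every allocation its own LocalAlloc() block and freeing old buffers in deallocate(); note that this changes the ownership contract, so the final &file_list.front() buffer must be consumed by the LocalAlloc-based API before the vector goes out of scope.

```cpp
#include <windows.h>
#include <new>        // std::bad_alloc, placement new
#include <cstddef>    // size_t, ptrdiff_t

// Sketch: each allocation is an independent fixed LocalAlloc() block, so the
// vector's previous buffer stays valid while its contents are copied into
// the new one during growth.
template< class T >
class LocalBlockAllocator
{
public:
    typedef T               value_type;
    typedef std::size_t     size_type;
    typedef std::ptrdiff_t  difference_type;
    typedef T*              pointer;
    typedef const T*        const_pointer;
    typedef T&              reference;
    typedef const T&        const_reference;

    template< class U > struct rebind { typedef LocalBlockAllocator< U > other; };

    LocalBlockAllocator() throw() {}
    template< class U > LocalBlockAllocator( const LocalBlockAllocator< U >& ) throw() {}

    pointer address( reference r ) const { return &r; }
    const_pointer address( const_reference r ) const { return &r; }

    pointer allocate( size_type n, const void* /*hint*/ = 0 )
    {
        void* p = ::LocalAlloc( LPTR, sizeof( T ) * n );   // fixed, zero-initialised block
        if( NULL == p )
            throw std::bad_alloc();
        return static_cast< pointer >( p );
    }

    void deallocate( pointer p, size_type /*n*/ )
    {
        if( NULL != p )
            ::LocalFree( p );      // only ever called on a buffer the vector has abandoned
    }

    void construct( pointer p, const T& v ) { new( p ) T( v ); }
    void destroy( pointer p )               { p->~T(); }

    size_type max_size() const throw() { return static_cast< size_type >( -1 ) / sizeof( T ); }
};

template< class T, class U >
bool operator==( const LocalBlockAllocator< T >&, const LocalBlockAllocator< U >& ) { return true; }
template< class T, class U >
bool operator!=( const LocalBlockAllocator< T >&, const LocalBlockAllocator< U >& ) { return false; }
```

An alternative that keeps the original never-deallocate contract would be to call file_list.reserve( MAX_DIRECTORY_LISTING ) before the FindNextFile loop, so the vector allocates its one and only buffer up front and LocalReAlloc() is never reached.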

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe Irony

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
In truth, I think the server had already locked up, the video card just hadn't figured it out yet. I went back into my bag of tricks: driving to the office and hard resetting the server over and over again when it hangs up at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one. So let me sum things up. Here is what I've done so far to troubleshoot this server: Deleted and recreated the RAID 5 sets. Initialized the drives. Reloaded the server with a fresh Server 2003 install. Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers. Uninstalled / reinstalled the Backup Exec Remote Agent. Uninstalled the Trend Micro A/V client. Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up. Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but this did not help my problem. Help confirm or deny the following assumptions: There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup. This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation. This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here. Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

    Read the article

  • SortList duplicated key, but it shouldn't

    - by Luca
    I have a class which implements IList interface. I requires a "sorted view" of this list, but without modifying it (I cannot sort directly the IList class). These view shall be updated when the original list is modified, keeping items sorted. So, I've introduced a SortList creation method which create a SortList which has a comparer for the specific object contained in the original list. Here is the snippet of code: public class MyList<T> : ICollection, IList<T> { ... public SortedList CreateSortView(string property) { try { Lock(); SortListView sortView; if (mSortListViews.ContainsKey(property) == false) { // Create sorted view sortView = new SortListView(property, Count); mSortListViews.Add(property, sortView); foreach (T item in Items) sortView.Add(item); } else sortView = mSortListViews[property]; sortView.ReferenceCount++; return (sortView); } finally { Unlock(); } } public void DeleteSortView(string property) { try { Lock(); // Unreference sorted view mSortListViews[property].ReferenceCount--; // Remove sorted view if (mSortListViews[property].ReferenceCount == 0) mSortListViews.Remove(property); } finally { Unlock(); } } protected class SortListView : SortedList { /// <summary> /// /// </summary> /// <param name="property"></param> /// <param name="capacity"></param> public SortListView(string property, int capacity) : base(new GenericPropertyComparer(typeof(T).GetProperty(property, BindingFlags.Instance | BindingFlags.Public)), capacity) { } /// <summary> /// Reference count. /// </summary> public int ReferenceCount = 0; /// <summary> /// /// </summary> /// <param name="item"></param> public void Add(T item) { Add(item, item); } /// <summary> /// /// </summary> /// <param name="item"></param> public void Remove(T item) { // Base implementation base.Remove(item); } /// <summary> /// Compare object on a generic property. /// </summary> class GenericPropertyComparer : IComparer { #region Constructors /// <summary> /// Construct a GenericPropertyComparer specifying the property to compare. /// </summary> /// <param name="property"> /// A <see cref="PropertyInfo"/> which specify the property to be compared. /// </param> /// <remarks> /// The <paramref name="property"/> parameter imply that the compared objects have the specified property. The property /// must be readable, and its type must implement the IComparable interface. /// </remarks> public GenericPropertyComparer(PropertyInfo property) { if (property == null) throw new ArgumentException("property doesn't specify a valid property"); if (property.CanRead == false) throw new ArgumentException("property specify a write-only property"); if (property.PropertyType.GetInterface("IComparable") == null) throw new ArgumentException("property type doesn't IComparable"); mSortingProperty = property; } #endregion #region IComparer Implementation public int Compare(object x, object y) { IComparable propX = (IComparable)mSortingProperty.GetValue(x, null); IComparable propY = (IComparable)mSortingProperty.GetValue(y, null); return (propX.CompareTo(propY)); } /// <summary> /// Sorting property. /// </summary> private PropertyInfo mSortingProperty = null; #endregion } } /// <summary> /// Sorted views of this ReactList. /// </summary> private Dictionary<string, SortListView> mSortListViews = new Dictionary<string, SortListView>(); } Practically, class users request to create a SortListView specifying the name of property which determine the sorting, and using the reflection each SortListView defined a IComparer which keep sorted the items. 
Whenever an item is added or removed from the original list, every created SortListView will be updated with the same operation. This seems good at first glance, but it causes problems, since it gives me the following exception when adding items to the SortList: System.ArgumentException: Item has already been added. Key in dictionary: 'PowerShell_ISE [C:\Windows\sysWOW64\WindowsPowerShell\v1.0\PowerShell_ISE.exe]' Key being added: 'PowerShell_ISE [C:\Windows\system32\WindowsPowerShell\v1.0\PowerShell_ISE.exe]' As you can see from the exception message, thrown by SortListView.Add(object), the string representation of the key (the list item object) is different (note the path of the executable). Why does SortedList give me that exception? To solve this I tried to provide a GetHashCode override for the underlying object, but without success: public override int GetHashCode() { return ( base.GetHashCode() ^ mApplicationName.GetHashCode() ^ mApplicationPath.GetHashCode() ^ mCommandLine.GetHashCode() ^ mWorkingDirectory.GetHashCode() ); }
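For reference, this behaviour can be reproduced in isolation. The non-generic System.Collections.SortedList that SortListView derives from decides key uniqueness solely through its IComparer; it never consults Equals() or GetHashCode(), so two distinct objects whose compared property values are equal are treated as the same key and Add() throws the ArgumentException shown above. The GenericPropertyComparer compares only the sorting property, so any two list items with equal property values will collide. Below is a minimal sketch (using a hypothetical Item class, not the code from the question) that triggers the same exception; one common way around the restriction is to make the comparer fall back to a unique tie-breaker (an ID, for example) so that no two distinct items ever compare as equal.

    using System;
    using System.Collections;

    // Hypothetical type: two different instances that happen to share the same Name.
    class Item
    {
        public string Name;
        public Item(string name) { Name = name; }
    }

    // Comparer that orders (and therefore de-duplicates) purely on the Name property,
    // mirroring the property-based GenericPropertyComparer used above.
    class NameComparer : IComparer
    {
        public int Compare(object x, object y)
        {
            return string.Compare(((Item)x).Name, ((Item)y).Name, StringComparison.Ordinal);
        }
    }

    class DuplicateKeyDemo
    {
        static void Main()
        {
            var list = new SortedList(new NameComparer());
            list.Add(new Item("PowerShell_ISE"), "first");
            // Throws System.ArgumentException: the comparer reports the two keys as equal,
            // even though they are distinct objects with different hash codes, which is why
            // overriding GetHashCode() on the item type cannot prevent it.
            list.Add(new Item("PowerShell_ISE"), "second");
        }
    }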

    Read the article

  • Network throughput issue (ARP-related)

    - by Joel Coel
The small college where I work is having some very strange network issues. I'm looking for any advice or ideas here. We were fine over the summer, but the trouble began a few days after students returned to campus in force for the fall term. Symptoms The main symptom is that internet access will work, but it's very slow... often to the point of timeouts. As an example, a typical result from Speedtest.net will return .4Mbps download, but allow 3 to 8 Mbps upload speed. Lesser symptoms may include severely limited performance transferring data to and from our file server, or even in some cases the inability to log in to the computer (cannot reach the domain controller). The issue crosses multiple vlans, and has affected devices on nearly every vlan we operate. The issue does not impact all machines on the network. An unaffected machine will typically see at least 11Mbps download from speedtest.net, and perhaps much more depending on larger campus traffic patterns at the time. There is one variation on the larger issue. We have one vlan where users were unable to log into nearly all of the machines at all. IT staff would log in using a local administrator account (or in some cases cached credentials), and from there a release/renew or pinging the gateway would allow the machine to work... for a while. Complicating this issue is that this vlan covers our computer labs, which use software called Deep Freeze to completely reset the hard drives after a reboot. It could just be the same issue manifesting differently because of stale data on machines that have not permanently altered low-level info for weeks. We were able to solve this, however, by creating a new vlan and moving the labs over to the new vlan wholesale. Investigations Eventually we noticed that the affected machines all had recent dhcp leases. We can predict when a machine will become "slow" by watching when a dhcp lease comes up for renewal. We played with setting the lease time very short for a test vlan, but all that did was remove our ability to predict when the machine would become slow. Machines with static IPs have pretty much always worked normally. Manually releasing/renewing an address will never cause a machine to become slow. In fact, in some cases this process has fixed a machine in that state. Most of the time, though, it doesn't help. We also noticed that mobile machines like laptops are likely to become slow when they cross to new vlans. Wireless on campus is divided up into "zones", where each zone maps to a small set of buildings. Moving to a new building can place you in a new zone, thereby causing you to get a new address. A machine resuming from sleep mode is also very likely to be slow. Mitigations Sometimes, but not always, clearing the ARP cache on an affected machine will allow it to work normally again. As already mentioned, releasing/renewing a local machine's IP address can fix that machine, but it's not guaranteed. Pinging the default gateway can also sometimes help with a slow machine. What seems to help most to mitigate the issue is clearing the ARP cache on our core layer-3 switch. This switch is used for our dhcp system as the default gateway on all vlans, and it handles inter-vlan routing. The model is a 3Com 4900SX. To try to mitigate the issue, we have the cache timeout set on the switch all the way down to the lowest possible time, but it hasn't helped. I also put together a script that runs every few minutes to automatically connect to the switch and reset the cache.
Unfortunately, this does not always work, and can even cause some machines to end up in the slow state for a short time (though these seem to correct themselves after a few minutes). We currently have a scheduled job that runs every 10 minutes to force the core switch to clear its ARP cache, but this is far from perfect or desirable. Reproduction We now have a test machine that we can force into the slow state at will. It is connected to a switch with ports set up for each of our vlans. We make the machine slow by connecting to different vlans, and after a new connection or two it will be slow. It's also worth noting in this section that this has happened before at the start of prior terms, but in the past the problem has gone away on its own after a few days. It solved itself before we had a chance to do much diagnostic work... hence why we've allowed it to drag so long into the term this time 'round; the expectation was this would be a short-lived situation. Other Factors It's worth mentioning that we have had about half a dozen switches just outright fail over the last year. These are mainly 2003/2004-era 3Coms (mostly 4200's) that were all put in at about the same time. They should still be covered under warranty, but HP has made getting service somewhat difficult. The failures have mostly been in power supplies, but in a couple of cases we have used a power supply from a switch with a failed mainboard to bring a switch with a failed power supply back to life. We do have UPS devices on all but three or four switches now, but that was not the case when I started two and a half years ago. Severe budget constraints (we were on the Dept. of Ed's financially challenged institutions list a couple of years back) have forced me to look to the likes of Netgear and TrendNet for replacements, but so far these low-end models seem to be holding their own. It's also worth mentioning that the big change on our network this summer was migrating from a single cross-campus wireless SSID to the zoned approach mentioned earlier. I don't think this is the source of the issue since, as I've said, we've seen this before. However, it's possible this is exacerbating the issue, and it may be much of the reason it's been so hard to isolate. Diagnosis At first it seemed clear to us, given the timing and persistent nature of the problem, that the source of the issue was an infected (or malicious) student machine doing ARP cache poisoning. However, repeated attempts to isolate the source have failed. Those attempts include numerous Wireshark packet traces, and even taking entire buildings offline for brief periods. We have not even been able to find a smoking-gun bad ARP entry. My current best guess is an overloaded or failing core switch, but I'm not sure how to test for this, and the cost of replacing it blindly is steep. Again, any ideas are appreciated.
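As a footnote to the per-machine mitigation described above (clearing the ARP cache on an affected client), that step is easy to script rather than doing it by hand at each machine. The following is only a minimal sketch of that idea in C#, assuming a Windows client and a process running with administrative rights; it shells out to the standard netsh command and automates the workaround only, it does not address the root cause.

    using System;
    using System.Diagnostics;

    // Minimal sketch: flush the local ARP/neighbor cache on an affected Windows client.
    // Equivalent to running "netsh interface ip delete arpcache" (or "arp -d *") by hand.
    class ArpCacheFlush
    {
        static void Main()
        {
            var psi = new ProcessStartInfo("netsh", "interface ip delete arpcache")
            {
                UseShellExecute = false,
                RedirectStandardOutput = true
            };

            using (var process = Process.Start(psi))
            {
                // Echo whatever netsh prints so a scheduled task run leaves something in its log.
                Console.WriteLine(process.StandardOutput.ReadToEnd());
                process.WaitForExit();
            }
        }
    }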

    Read the article

  • Multiset of shared_ptrs as a dynamic priority queue: Concept and practice

    - by Sarah
I was using a vector-based priority queue typedef std::priority_queue< Event, vector< Event >, std::greater< Event > > EventPQ; to manage my Event objects. Now my simulation has to be able to find and delete certain Event objects not at the top of the queue. I'd like to know if my planned work-around can do what I need it to, and if I have the syntax right. I'd also like to know if dramatically better solutions exist. My plan is to make EventPQ a multiset of smart pointers to Event objects: typedef std::multiset< boost::shared_ptr< Event > > EventPQ; I'm borrowing functions of the Event class from a related post on a multimap priority queue. // Event.h #include <cstdlib> #include <set> #include <boost/shared_ptr.hpp> using namespace std; class Event; typedef std::multiset< boost::shared_ptr< Event > > EventPQ; class Event { public: Event( double t, int eid, int hid ); ~Event(); void add( EventPQ& q ); void remove(); bool operator < ( const Event & rhs ) const { return ( time < rhs.time ); } bool operator > ( const Event & rhs ) const { return ( time > rhs.time ); } double time; int eventID; int hostID; EventPQ* mq; EventPQ::iterator mIt; }; // Event.cpp Event::Event( double t, int eid, int hid ) { time = t; eventID = eid; hostID = hid; } Event::~Event() {} void Event::add( EventPQ& q ) { mq = &q; mIt = q.insert( boost::shared_ptr<Event>(this) ); } void Event::remove() { mq->erase( mIt ); mq = 0; mIt = EventPQ::iterator(); } I was hoping that by making EventPQ a container of pointers, I could avoid wasting time copying Events into the container and avoid accidentally editing the wrong copy. Would it be dramatically easier to store the Events themselves in EventPQ instead? Does it make more sense to remove the time keys from Event objects and use them instead as keys in a multimap? Assuming the current implementation seems okay, my questions are: Do I need to specify how to sort on the pointers, rather than the objects, or does the multiset automatically know to sort on the objects pointed to? If I have a shared_ptr ptr1 to an Event that also has a pointer in the EventPQ container, how do I find and delete the corresponding pointer in EventPQ? Is it enough to .find( ptr1 ), or do I instead have to find by the key (time)? Is Event::remove() sufficient for removing the pointer in the EventPQ container? There's a small chance multiple events could be created with the same time (obviously implied in the use of multiset). If the find() works on event times, to avoid accidentally deleting the wrong event, I was planning to throw in a further check on eventID and hostID. Does this seem reasonable? (Dumb syntax question) In Event.h, is the forward declaration class Event;, then the EventPQ typedef, and then the real class Event definition appropriate? I'm obviously an inexperienced programmer with very spotty background--this isn't for homework. Would love suggestions and explanations. Please let me know if any part of this is confusing. Thanks.

    Read the article

  • C#/.NET Little Wonders: The Joy of Anonymous Types

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. In the .NET 3 Framework, Microsoft introduced the concept of anonymous types, which provide a way to create a quick, compiler-generated types at the point of instantiation.  These may seem trivial, but are very handy for concisely creating lightweight, strongly-typed objects containing only read-only properties that can be used within a given scope. Creating an Anonymous Type In short, an anonymous type is a reference type that derives directly from object and is defined by its set of properties base on their names, number, types, and order given at initialization.  In addition to just holding these properties, it is also given appropriate overridden implementations for Equals() and GetHashCode() that take into account all of the properties to correctly perform property comparisons and hashing.  Also overridden is an implementation of ToString() which makes it easy to display the contents of an anonymous type instance in a fairly concise manner. To construct an anonymous type instance, you use basically the same initialization syntax as with a regular type.  So, for example, if we wanted to create an anonymous type to represent a particular point, we could do this: 1: var point = new { X = 13, Y = 7 }; Note the similarity between anonymous type initialization and regular initialization.  The main difference is that the compiler generates the type name and the properties (as readonly) based on the names and order provided, and inferring their types from the expressions they are assigned to. It is key to remember that all of those factors (number, names, types, order of properties) determine the anonymous type.  This is important, because while these two instances share the same anonymous type: 1: // same names, types, and order 2: var point1 = new { X = 13, Y = 7 }; 3: var point2 = new { X = 5, Y = 0 }; These similar ones do not: 1: var point3 = new { Y = 3, X = 5 }; // different order 2: var point4 = new { X = 3, Y = 5.0 }; // different type for Y 3: var point5 = new {MyX = 3, MyY = 5 }; // different names 4: var point6 = new { X = 1, Y = 2, Z = 3 }; // different count Limitations on Property Initialization Expressions The expression for a property in an anonymous type initialization cannot be null (though it can evaluate to null) or an anonymous function.  For example, the following are illegal: 1: // Null can't be used directly. Null reference of what type? 2: var cantUseNull = new { Value = null }; 3:  4: // Anonymous methods cannot be used. 5: var cantUseAnonymousFxn = new { Value = () => Console.WriteLine(“Can’t.”) }; Note that the restriction on null is just that you can’t use it directly as the expression, because otherwise how would it be able to determine the type?  You can, however, use it indirectly assigning a null expression such as a typed variable with the value null, or by casting null to a specific type: 1: string str = null; 2: var fineIndirectly = new { Value = str }; 3: var fineCast = new { Value = (string)null }; All of the examples above name the properties explicitly, but you can also implicitly name properties if they are being set from a property, field, or variable.  
In these cases, when a field, property, or variable is used alone, and you don’t specify a property name assigned to it, the new property will have the same name.  For example: 1: int variable = 42; 2:  3: // creates two properties named varriable and Now 4: var implicitProperties = new { variable, DateTime.Now }; Is the same type as: 1: var explicitProperties = new { variable = variable, Now = DateTime.Now }; But this only works if you are using an existing field, variable, or property directly as the expression.  If you use a more complex expression then the name cannot be inferred: 1: // can't infer the name variable from variable * 2, must name explicitly 2: var wontWork = new { variable * 2, DateTime.Now }; In the example above, since we typed variable * 2, it is no longer just a variable and thus we would have to assign the property a name explicitly. ToString() on Anonymous Types One of the more trivial overrides that an anonymous type provides you is a ToString() method that prints the value of the anonymous type instance in much the same format as it was initialized (except actual values instead of expressions as appropriate of course). For example, if you had: 1: var point = new { X = 13, Y = 42 }; And then print it out: 1: Console.WriteLine(point.ToString()); You will get: 1: { X = 13, Y = 42 } While this isn’t necessarily the most stunning feature of anonymous types, it can be handy for debugging or logging values in a fairly easy to read format. Comparing Anonymous Type Instances Because anonymous types automatically create appropriate overrides of Equals() and GetHashCode() based on the underlying properties, we can reliably compare two instances or get hash codes.  For example, if we had the following 3 points: 1: var point1 = new { X = 1, Y = 2 }; 2: var point2 = new { X = 1, Y = 2 }; 3: var point3 = new { Y = 2, X = 1 }; If we compare point1 and point2 we’ll see that Equals() returns true because they overridden version of Equals() sees that the types are the same (same number, names, types, and order of properties) and that the values are the same.   In addition, because all equal objects should have the same hash code, we’ll see that the hash codes evaluate to the same as well: 1: // true, same type, same values 2: Console.WriteLine(point1.Equals(point2)); 3:  4: // true, equal anonymous type instances always have same hash code 5: Console.WriteLine(point1.GetHashCode() == point2.GetHashCode()); However, if we compare point2 and point3 we get false.  Even though the names, types, and values of the properties are the same, the order is not, thus they are two different types and cannot be compared (and thus return false).  And, since they are not equal objects (even though they have the same value) there is a good chance their hash codes are different as well (though not guaranteed): 1: // false, different types 2: Console.WriteLine(point2.Equals(point3)); 3:  4: // quite possibly false (was false on my machine) 5: Console.WriteLine(point2.GetHashCode() == point3.GetHashCode()); Using Anonymous Types Now that we’ve created instances of anonymous types, let’s actually use them.  The property names (whether implicit or explicit) are used to access the individual properties of the anonymous type.  
The main thing, once again, to keep in mind is that the properties are readonly, so you cannot assign the properties a new value (note: this does not mean that instances referred to by a property are immutable – for more information check out C#/.NET Fundamentals: Returning Data Immutably in a Mutable World). Thus, if we have the following anonymous type instance: 1: var point = new { X = 13, Y = 42 }; We can get the properties as you’d expect: 1: Console.WriteLine(“The point is: ({0},{1})”, point.X, point.Y); But we cannot alter the property values: 1: // compiler error, properties are readonly 2: point.X = 99; Further, since the anonymous type name is only known by the compiler, there is no easy way to pass anonymous type instances outside of a given scope.  The only real choices are to pass them as object or dynamic.  But really that is not the intention of using anonymous types.  If you find yourself needing to pass an anonymous type outside of a given scope, you should really consider making a POCO (Plain Old CLR Type – i.e. a class that contains just properties to hold data with little/no business logic) instead. Given that, why use them at all?  Couldn’t you always just create a POCO to represent every anonymous type you needed?  Sure you could, but then you might litter your solution with many small POCO classes that have very localized uses. It turns out this is the key to when to use anonymous types to your advantage: when you just need a lightweight type in a local context to store intermediate results, consider an anonymous type – but when that result is more long-lived and used outside of the current scope, consider a POCO instead. So what do we mean by intermediate results in a local context?  Well, a classic example would be filtering down results from a LINQ expression.  For example, let’s say we had a List<Transaction>, where Transaction is defined something like: 1: public class Transaction 2: { 3: public string UserId { get; set; } 4: public DateTime At { get; set; } 5: public decimal Amount { get; set; } 6: // … 7: } And let’s say we had this data in our List<Transaction>: 1: var transactions = new List<Transaction> 2: { 3: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = 2200.00m }, 4: new Transaction { UserId = "Jim", At = DateTime.Now, Amount = -1100.00m }, 5: new Transaction { UserId = "Jim", At = DateTime.Now.AddDays(-1), Amount = 900.00m }, 6: new Transaction { UserId = "John", At = DateTime.Now.AddDays(-2), Amount = 300.00m }, 7: new Transaction { UserId = "John", At = DateTime.Now, Amount = -10.00m }, 8: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = 200.00m }, 9: new Transaction { UserId = "Jane", At = DateTime.Now, Amount = -50.00m }, 10: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = -100.00m }, 11: new Transaction { UserId = "Jaime", At = DateTime.Now.AddDays(-3), Amount = 300.00m }, 12: }; So let’s say we wanted to get the transactions for each day for each user.  That is, for each day we’d want to see the transactions each user performed.  We could do this very simply with a nice LINQ expression, without the need of creating any POCOs: 1: // group the transactions based on an anonymous type with properties UserId and Date: 2: byUserAndDay = transactions 3: .GroupBy(tx => new { tx.UserId, tx.At.Date }) 4: .OrderBy(grp => grp.Key.Date) 5: .ThenBy(grp => grp.Key.UserId); Now, those of you who have attempted to use custom classes as a grouping type before (such as GroupBy(), Distinct(), etc.) 
may have discovered the hard way that LINQ gets a lot of its speed by utilizing not only Equals(), but also GetHashCode() on the type you are grouping by.  Thus, when you use custom types for these purposes, you generally end up having to write custom Equals() and GetHashCode() implementations or you won’t get the results you were expecting (the default implementations of Equals() and GetHashCode() are reference equality and reference identity based respectively). As we said before, it turns out that anonymous types already do these critical overrides for you.  This makes them even more convenient to use!  Instead of creating a small POCO to handle this grouping, and then having to implement a custom Equals() and GetHashCode() every time, we can just take advantage of the fact that anonymous types automatically override these methods with appropriate implementations that take into account the values of all of the properties. Now, we can look at our results: 1: foreach (var group in byUserAndDay) 2: { 3: // the group’s Key is an instance of our anonymous type 4: Console.WriteLine("{0} on {1:MM/dd/yyyy} did:", group.Key.UserId, group.Key.Date); 5:  6: // each grouping contains a sequence of the items. 7: foreach (var tx in group) 8: { 9: Console.WriteLine("\t{0}", tx.Amount); 10: } 11: } And see: 1: Jaime on 06/18/2012 did: 2: -100.00 3: 300.00 4:  5: John on 06/19/2012 did: 6: 300.00 7:  8: Jim on 06/20/2012 did: 9: 900.00 10:  11: Jane on 06/21/2012 did: 12: 200.00 13: -50.00 14:  15: Jim on 06/21/2012 did: 16: 2200.00 17: -1100.00 18:  19: John on 06/21/2012 did: 20: -10.00 Again, sure we could have just built a POCO to do this, given it an appropriate Equals() and GetHashCode() method, but that would have bloated our code with so many extra lines and been more difficult to maintain if the properties change.  Summary Anonymous types are one of those Little Wonders of the .NET language that are perfect at exactly that time when you need a temporary type to hold a set of properties together for an intermediate result.  While they are not very useful beyond the scope in which they are defined, they are excellent in LINQ expressions as a way to create and use intermediary values for further expressions and analysis. Anonymous types are defined by the compiler based on the number, type, names, and order of properties created, and they automatically implement appropriate Equals() and GetHashCode() overrides (as well as ToString()) which makes them ideal for LINQ expressions where you need to create a set of properties to group, evaluate, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,Anonymous Types,LINQ

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They both have different purpose and different nature. It does not matter if you are using code contracts or not – you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions that are defined in code contracts runtime and that are not accessible for us in design time. This was one step further to make my randomizer testable. In this posting I will complete the mission. Problems with current code This is my current code. public class Randomizer {     public static int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );           Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );           var rnd = new Random();         return rnd.Next(min, max);     } } As you can see this code has some problems: randomizer class is static and cannot be instantiated. We cannot move this class between components if we need to, GetRandomFromRangeContracted() is not fully testable because we cannot currently affect random number generator output and therefore we cannot test post-contract. Now let’s solve these problems. Making randomizer testable As a first thing I made Randomizer to be class that must be instantiated. This is simple thing to do. Now let’s solve the problem with Random class. To make Randomizer testable I define IRandomGenerator interface and RandomGenerator class. The public constructor of Randomizer accepts IRandomGenerator as argument. public interface IRandomGenerator {     int Next(int min, int max); }   public class RandomGenerator : IRandomGenerator {     private Random _random = new Random();       public int Next(int min, int max)     {         return _random.Next(min, max);     } } And here is our Randomizer after total make-over. public class Randomizer {     private IRandomGenerator _generator;       private Randomizer()     {         _generator = new RandomGenerator();     }       public Randomizer(IRandomGenerator generator)     {         _generator = generator;     }       public int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );           Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );           return _generator.Next(min, max);     } } It seems to be inconvenient to instantiate Randomizer now but you can always use DI/IoC containers and break compiled dependencies between the components of your system. Writing tests for randomizer IRandomGenerator solved problem with testing post-condition. Now it is time to write tests for Randomizer class. Writing tests for contracted code is not easy. The main problem is still ContractException that we are not able to access. Still it is the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions with type we want we cannot do much when post-conditions will fail. We have to use Contract.ContractFailed event and this event is called for every contract failure. 
This way we find ourselves in situation where supporting well input interface makes it impossible to support output interface well and vice versa. ContractFailed is nasty hack and it works pretty weird way. Although documentation sais that ContractFailed is good choice for testing contracts it is still pretty painful. As a last chance I got tests working almost normally when I wrapped them up. Can you remember similar solution from the times of Visual Studio 2008 unit tests? Cannot understand how Microsoft was able to mess up testing again. [TestClass] public class RandomizerTest {     private Mock<IRandomGenerator> _randomMock;     private Randomizer _randomizer;     private string _lastContractError;       public TestContext TestContext { get; set; }       public RandomizerTest()     {         Contract.ContractFailed += (sender, e) =>         {             e.SetHandled();             e.SetUnwind();               throw new Exception(e.FailureKind + ": " + e.Message);         };     }       [TestInitialize()]     public void RandomizerTestInitialize()     {         _randomMock = new Mock<IRandomGenerator>();         _randomizer = new Randomizer(_randomMock.Object);         _lastContractError = string.Empty;     }       #region InputInterfaceTests     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()     {         try         {             _randomizer.GetRandomFromRangeContracted(100, 10);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }     }       [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()     {         try         {             _randomizer.GetRandomFromRangeContracted(10, 10);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }     }       [TestMethod]     public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 50;           _randomMock.Setup(r => r.Next(minValue, maxValue))             .Returns(returnValue)             .Verifiable();           var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);           _randomMock.Verify();         Assert.AreEqual<int>(returnValue, result);     }     #endregion       #region OutputInterfaceTests     [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 7;           _randomMock.Setup(r => r.Next(10, 100))             .Returns(returnValue)             .Verifiable();           try         {             _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }           _randomMock.Verify();     }       [TestMethod]     [ExpectedException(typeof(Exception))]     public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()     {         int minValue = 10;         int maxValue = 100;         int returnValue = 102;           _randomMock.Setup(r => r.Next(10, 100))             .Returns(returnValue)             .Verifiable();           try         {            
 _randomizer.GetRandomFromRangeContracted(minValue, maxValue);         }         catch (Exception ex)         {             throw new Exception(string.Empty, ex);         }           _randomMock.Verify();     }     #endregion        } Although these tests are pretty awful and contain hacks, we are now at least able to make sure that our code works as expected. Here is the test list after running these tests. Conclusion Code contracts are very new in the Visual Studio world, and as a young technology they have some problems – like all other new bits and bytes in the world. As you saw, making our contracted code testable is easy only as long as pre-conditions are considered. When we start dealing with post-conditions we will end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing contracted code is easier than it is right now.

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style ‘application’ this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is is what the Angular ng-cloak is supposed to address, but in this case I had no luck getting it to work properly. This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of  changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage. Large Page and Angular Startup The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree. There are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks so that you don’t see the flash of these markup expressions when the page initially loads before Angular has a chance to render the data into the markup expressions.<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak> Note the ng-cloak attribute on this element, which here is an outer wrapper element of the most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it, until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup which looks like this: It’s brief, but plenty ugly – right?  And depending on the speed of the machine this flash gets more noticeable with slow machines that take longer to process the initial HTML DOM. ng-cloak Styles ng-cloak works by temporarily hiding the marked up element and it does this by essentially applying a style that does this:[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak { display: none !important; } This style is inlined as part of AngularJs itself. 
If you looking at the angular.js source file you’ll find this at the very end of the file:!angular.$$csp() && angular.element(document) .find('head') .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' + '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' + '.ng-hide{display:none !important;}ng\\:form{display:block;}' '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' + '</style>'); This is is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive permutation markup. Unfortunately on this particular web page ng-cloak had no effect – I still see the flicker. Why doesn’t ng-cloak work? The problem is of course – timing. The problem is that Angular actually needs to get control of the page before it ever starts doing anything like process even the ng-cloak attribute (or style etc). Because this page is rather large (about 35k of non-data HTML) it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page after the HTML DOM content there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem. Workarounds There a number of simple ways around this issue and some of them are hinted on in the Angular documentation. Load Angular Sooner One obvious thing that would help with this is to load Angular at the top of the page  BEFORE the DOM loads and that would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-). Use ng-include for Child Content Angular supports nesting of child templates via the ng-include directive which essentially delay loads HTML content. This helps by removing a lot of the template content out of the main page and so getting control to Angular a lot sooner in order to hide the markup template content. In the application in question, I realize that in hindsight it might have been smarter to break this page out with client side ng-include directives instead of MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (ie. the HTML) back together into a whole single and rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful to have a few security related blocks handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show – client side content is always there whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request as templates are loaded from the server dynamically as needed. 
Given that this page was already heavy with resources adding another 10 separate ng-include directives wouldn’t be beneficial :-) ng-include is a valid option if you start from scratch and partition your logic. Of course if you don’t have complex pages, having completely separate views that are swapped in as they are accessed are even better, but we didn’t have this option due to the information having to be on screen all at once. Avoid using {{ }}  Expressions The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: Markup junk, empty views, then views filled with data. If we can remove {{ }} expressions from the page you remove most of the perceived double draw effect as you would effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }}  expressions and replace them with ng-bind directives on DOM elements. For example you can turn:<div class="list-item-name listViewOrderNo"> <a href='#'>{{lineItem.MpsOrderNo}}</a> </div>into:<div class="list-item-name listViewOrderNo"> <a href="#" ng-bind="lineItem.MpsOrderNo"></a> </div> to get identical results but because the {{ }}  expression has been removed there’s no double draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}} for example. It’s an option though especially if you think of this issue up front and you don’t have a ton of expressions to deal with. Add the ng-cloak Styles manually You can also explicitly define the .css styles that Angular injects via code manually in your application’s style sheet. By doing so the styles become immediately available and so are applied right when the page loads – no flicker. I use the minimal:[ng-cloak] { display: none !important; } which works for:<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak> If you use one of the other combinations add the other CSS selectors as well or use the full style shown earlier. Angular will still load its version of the ng-cloak styling but it overrides those settings later, but this will do the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option. The nuclear option: Hiding the Content manually Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight how you can hide/show content manually on load for other frameworks or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control. 
To do this I used:<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none"> Notice the display: none style that explicitly hides the element initially on the page. Then once Angular has run its initialization and effectively processed the template markup on the page you can show the content. For Angular this ‘ready’ event is the app.run() function:app.run( function ($rootScope, $location, cellService) { $("#mainContainer").show(); … }); This effectively removes the display:none style and the content displays. By the time app.run() fires the DOM is ready to be displayed with filled data or at least empty data – Angular has gotten control. Edge Case Clearly this is an edge case. In general the initial HTML pages tend to be reasonably sized and the load times for the HTML and Angular are fast enough that there's no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it's probably a good idea to add the CSS style into your application's CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow of a browser somebody might be running and while your super fast dev machine might not show any flicker, grandma's old XP box very well might… © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML.

    Read the article

< Previous Page | 90 91 92 93 94 95 96  | Next Page >