Search Results

Search found 7418 results on 297 pages for 'argument passing'.


  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try to outline what I see as some of the problems, and hopefully offer my views on how to address them.

    The recruiting problem

    I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real-life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will, do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of wasting time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: you're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C# or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well-known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real-life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners.

    The contractor problem

    I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, and there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

    The education problem

    I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a master's degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. I'm hell-bent on passing that experience to others.

    Process issues

    If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile; it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery; now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

    The prognosis

    I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Alcatel-Lucent: Enterprise 2.0: The Top 5 Things I would Do Over

    - by Kellsey Ruppel
    Happy Monday! Does anyone else feel as if the weekend went entirely too quickly? At least for those of us in the United States, we have the 4th of July holiday next week to look forward to. This week on the blog, we are going to focus on "WebCenter by Example" and highlight best practices from customers and partners. I recently came across this article and I think it is a great example of how we can learn from one another when it comes to social collaboration adoption. Do you agree with Jem? What things or best practices have you learned in your organizations?

    By Jem Janik, Enterprise community manager, Alcatel-Lucent

    Not so long ago, Engage, the Alcatel-Lucent employee social network and collaboration platform, celebrated its third birthday. With more than 25,000 members actively interacting each month, Engage has been a big enough success that it’s been the subject of external articles, and often those of us who helped launch it will go out and speak about what aspects contributed to that success. Hindsight is still 20/20 and what it takes to successfully launch an enterprise 2.0 community is fairly well-known now. Today I want to tell you what I suspect you really want to know about. As the enterprise community manager for Engage, after three years in, what are the top 5 things I wish we (and I mostly mean me) could do over?

    #5 Define your analytics solution from the start

    There is so much to do when you launch a community, and initially growing it without complete chaos is quite a task. It doesn’t take too long to get to a point where you want to focus your continued efforts in growing company collaboration. Do people truly talk across regional boundaries, or have we shifted siloed conversations to a new platform? Is there one organization that doesn’t interact with another? If you are lucky you’ll have someone in your community team well versed in the world of databases and SQL queries, but it takes time to figure out what backend analytics data actually means. Professional support can be expensive, and it may be hard to justify later as it typically has the community manager as the only main customer. Figure out what you think you’ll want to know and how to get it early on. The sooner the better, even if it doesn’t seem that critical at the time.

    #4 Lobbies guide you to the right places

    One piece of feedback that comes up more and more as we keep growing Engage is that it’s hard to find stuff, or new people are not sure where to start. Something we’re doing now is defining some general topic areas of interest to be like “lobbies” into the platform, and some common hashtags to go with them. I liken this to walking into a large medical or professional building for the first time. There are hundreds of offices, and you look to a sign in the lobby to get guided to the right place for you. We’re building that sign for members now, but again we missed the boat, as the majority of the company has already had their initial Engage experience.

    #3 Clean up, clean up, clean up

    Knowledge work and folksonomies are messy! The day we opened the doors to Engage, I would have said we should keep everything ever created in Engage, with the argument that it was a window into our collective knowledge so nothing should go. Well, 6000+ groups and 200,000+ pieces of content later, I’ve changed my mind. As previously mentioned, with too much “stuff” the system can be overwhelming to new members and it makes it harder to get what you’re looking for. Do we need that help document about a tool we no longer have? NO! Do we need that group that had 1 document and 2 discussions in the last two years? NO! Should we only have one group about a given topic instead of 4? YES! Last fall, Engage defined a cleanup process for groups not used for a long time. We also formed a volunteer cleaning army who are extra eyes on the hunt for “stuff” that should be updated, merged, or deleted. It’s better late than never, but, in line with what’s becoming a theme, I wish these efforts had started earlier.

    #2 Communications & local community management

    One of the most important aspects of my job is to make sure people who should be talking to each other are actually doing it: connecting people to the other people they should know, the groups they should join, a piece of content that shouldn’t be missed. I have worked both inside and outside of communications teams, and they are the best-informed people in your company. They know when something big is coming, how it impacts employees, how it fits with strategy, who else knows more, etc. Having communications professionals who are power users can help scale up community management because they are already so well connected. They also need to have the platform skills to pay attention without suffering email overload, to grab someone’s attention, etc. I wish I’d figured this out much earlier. If I had, I would have groomed more communications colleagues into advocates and power members right at the start.

    #1 Grooming advocates vs. natural advocates

    I’ve just alluded to this above. The very best advocates are those who naturally embrace your platform and automatically start to see new ways to work within it. Those advocates seem to come out of the woodwork naturally, since some of them are early adopters. Not surprisingly, our best advocates today are those same people who were willing to come kick the tires when the community was completely empty. Unfortunately, we didn’t get a global spread of those natural advocates. I did ask around when we first launched for other people who might be good candidates, but didn’t push too hard as there were so many other things to get ready. That was a mistake. If I could get a redo, I would have formally asked for people to be assigned where there were gaps and groomed them into advocates. Today, as we find new advocates to fill the gaps, people are hesitant because the initial set, with three years of practice, are ahead-of-the-curve power members; it definitely would have been easier earlier on.

    As fairly early adopters of corporate-scale enterprise collaboration, there hasn’t been a roadmap to follow as we’ve grown Engage, which is part of the fun! It’s clear a lot of issues are more easily tackled the earlier you identify and begin to correct them, and I’ve identified the main five I wish I could redo. In the spirit of collaboration, I hope someone else learns from my mistakes! View the original article by Jem here.

    Read the article

  • CodePlex Daily Summary for Saturday, March 24, 2012

    CodePlex Daily Summary for Saturday, March 24, 2012Popular Releasesmenu4web: menu4web 0.0.3: menu4web 0.0.3Craig's Utility Library: Craig's Utility Library 3.1: This update adds about 60 new extension methods, a couple of new classes, and a number of fixes including: Additions Added DateSpan class Added GenericDelimited class Random additions Added static thread friendly version of Random.Next called ThreadSafeNext. AOP Manager additions Added Destroy function to AOPManager (clears out all data so system can be recreated. Really only useful for testing...) ORM additions Added PagedCommand and PageCount functions to ObjectBaseClass (same as M...SQL Monitor - managing sql server performance: SQLMon 4.2 alpha 14: 1. improved accuracy of logic fault checking in analysisMapWindow 6 Desktop GIS: MapWindow 6.1.1: MapWindow 6 Desktop GIS is an open source desktop GIS for Microsoft Windows that is built upon the DotSpatial Library. This release requires .Net 4 (Client Profile). Are you a software developer?Instead of downloading MapWindow for development purposes, get started with with the DotSpatial templateDotSpatial: DotSpatial 1.1: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components are available as NuGet pa...Indiefreaks Game Framework: 0.9.2.0: Feature: Added SunBurn engine v2.0.18.7 support (doesn't support versions below). Feature: Added GammaCorrection Post processor to allow developers or even players to tweak the Gamma of the game depending on their screen (courtesy of bamyazi) Feature: Added Windows, Xbox 360 & WP7 enabled StorageManager (based on Nick Gravelyn's EasyStorage) to read/write files for player or game data. Feature: Added VirtualGamePad feature for WP7 allowing developers to define Touch areas on screen and mapped...Code for Rapid C# Windows Development eBook + LINQPad and Data Tools: LLBLGen LINQPad Data Context Driver Version 2.1: Sixth release of a LLBLGen Pro Typed Data Context Driver for LINQPad. For LLBLGen Pro versions 3.1 and 3.5(coming). New features:When you switch the query language to SQL, LINQPad updates the Schema Explorer to show SQL column names rather than CLR property names Connection dialog unloads assemblies when it has closed down so they are no longer locked - this allows them to be rebuilt while LINQPad is still open Connection dialog includes a button to quickly add assemblies needed for the...People's Note: People's Note 0.40: Version 0.40 adds an option to compact the database from the profile screen. Compacting a database can make it smaller and faster by removing empty spaces left over by editing, moving, and deleting notes. 
To install: copy the appropriate CAB file onto your WM device and run it.Microsoft All-In-One Code Framework - a centralized code sample library: C++, .NET Coding Guideline: Microsoft All-In-One Code Framework Coding Guideline This document describes the coding style guideline for native C++ and .NET (C# and VB.NET) programming used by the Microsoft All-In-One Code Framework project team.WebDAV for WHS: Version 1.0.67: - Added: Check whether the Remote Web Access is turned on or not; - Added: Check for Add-In updates;Image 3D Viewer: Image 3D Viewer: WPF .Net 3.5 .Net 4 .Net 4.5Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (March 2012) for .NET 4.0: March release of Phalanger 3.0 significantly enhances performance, adds new features and fixes many issues. See following for the list of main improvements: New features: Phalanger Tools installable for Visual Studio 2011 Beta "filter" extension with several most used filters implemented DomDocument HTML parser, loadHTML() method mail() PHP compatible function PHP 5.4 T_CALLABLE token PHP 5.4 "callable" type hint PCRE: UTF32 characters in range support configuration supports <c...Nearforums - ASP.NET MVC forum engine: Nearforums v8.0: Version 8.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Internationalization Custom authentication provider Access control list for forums and threads Webdeploy package checksum: abc62990189cf0d488ef915d4a55e4b14169bc01 Visit Roadmap for more details.BIDS Helper: BIDS Helper 1.6: This beta release is the first to support SQL Server 2012 (in addition to SQL Server 2005, 2008, and 2008 R2). Since it is marked as a beta release, we are looking for bug reports in the next few months as you use BIDS Helper on real projects. In addition to getting all existing BIDS Helper functionality working appropriately in SQL Server 2012 (SSDT), the following features are new... Analysis Services Tabular Smart Diff Tabular Actions Editor Tabular HideMemberIf Tabular Pre-Build ...Json.NET: Json.NET 4.5 Release 1: New feature - Windows 8 Metro build New feature - JsonTextReader automatically reads ISO strings as dates New feature - Added DateFormatHandling to control whether dates are written in the MS format or ISO format, with ISO as the default New feature - Added DateTimeZoneHandling to control reading and writing DateTime time zone details New feature - Added async serialize/deserialize methods to JsonConvert New feature - Added Path to JsonReader/JsonWriter/ErrorContext and exceptions w...SCCM Client Actions Tool: SCCM Client Actions Tool v1.11: SCCM Client Actions Tool v1.11 is the latest version. It comes with following changes since last version: Fixed a bug when ping and cmd.exe kept running in endless loop after action progress was finished. Fixed update checking from Codeplex RSS feed. The tool is downloadable as a ZIP file that contains four files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. 
This is needed for alternate credentials feature when running the HTA...WebSocket4Net: WebSocket4Net 0.5: Changes in this release fixed the wss's default port bug improved JsonWebSocket supported set client access policy protocol for silverlight fixed a handshake issue in Silverlight fixed a bug that "Host" field in handshake hadn't contained port if the port is not default supported passing in Origin parameter for handshaking supported reacting pings from server side fixed a bug in data sending fixed the bug sending a closing handshake with no message which would cause an excepti...SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.5: Changes included in this release: supported closing handshake queue checking improved JSON subprotocol supported sending ping from server to client fixed a bug about sending a closing handshake with no message refactored the code to improve protocol compatibility fixed a bug about sub protocol configuration loading in Mono improved BasicSubProtocol added JsonWebSocketSessionSurvey™ - web survey & form engine: Survey™ 2.0: The new stable Survey™ Project 2.0.0.1 version contains many new features like: Technical changes: - Use of Jquery, ASTreeview, Tabs, Tooltips and new menuprovider Features & Bugfixes: Survey list and search function Folder structure for surveys New Menustructure Library list New Library fields User list and search functions Layout options for a survey with CSS, page header and footer New IP filter security feature Enhanced Token Management New Question fields as ID, Alias...Speed up Printer migration using PrintBrm and it's configuration files: BRMC.EXE: Run the tool from the extracted directory of the printbrm backup. You can use the following command to extract a backup file to a directory - PRINTBRM.EXE -R -D C:\TEMP\EXPAND -F C:\TEMP\PRINTERBACKUP.PRINTEREXPORTNew ProjectsAsp.NET Url Router: 1.Url rewritting. 2.Provider regex matcher 3.Support custom url validate handler.BC-Web: ch projectCape: Dynamically generates Capistrano recipes for Rake tasks.cstgamebgs: Project for wp7GCalculator: GCalculator for performing basic arithmetic operations. Windows Sidebar Gadget invacc: Invacc- for inventory and Account Onlineirgsh-node: Worker nodes of BlankOn Package Factory - http://irgsh.blankonlinux.or.id/irgsh-repo: Repository manager node of BlankOn Package Factory - http://irgsh.blankonlinux.or.id/irgsh-web: Web interface and task manager of BlankOn Package Factory - http://irgsh.blankonlinux.or.id/Kinect Explorer For SharePoint 2010: Kinect Explorer for SharePoint is a tool which provide Natural User Interface to browse through SharePoint sites. Use body gestures to browse, read, move, copy documents. Use Speech services to read-out the files.MCU: mcu devMVC3ShellCode: MVC3ShellCode MVC3ShellCode MVC3ShellCode MVC3ShellCode MVC3ShellCode MVC3ShellCode NetWatch: NetWatch - network watchdog Small application primary designed for network connectivity monitoring. You can configure set of network tests (ping, http, ...) and time plan for this tests. Application is running in windows notification area and notife you each problem. NMortgage: The goal of this project will be to give a prospective home buyer or an existing home owner the insight they need to explore effects of different repayment strategies or different mortgage structures. Nucleo.NET MVP: The Nucleo MVP framework provides a Model-View-Presenter approach that isn't obtrusive, can be utilized in multiple environments, and is versatile. 
Providing a lot of features you see in other frameworks, the Nucleo MVP framework provides many extensibility points, pretty much allowing you to rewrite most of the framework. It features dynamic injection support, presenter and view initializers (like what you see in ASP.NET MVC), model property injection, attribute- and convention-based vie...P2PShare: This project is to build a new and moden System for p2p file shearing supporting downloads from HTTP, HTTPS, FTP support for P2Pshare client list servers so files can point to a server or a host only file so no servers are used and only p2p is usedPipeLayer: proyecto de sistemas inteligentespython-irgsh: Python library for BlankOn Package Factory - http://irgsh.blankonlinux.or.id/RamGec XNA Controls - Window Elements Library for XNA Solutions: Lightweight, ultra-high performance and flexible library for displaying and managing Window Controls for XNA system. Features its own Window Designer for creating custom windows and controls.RPG Character Generators and Tools: Various tools for pen and paper style role playing games.Screen scraper: A program that can be used to download public domain MP3 and other media such as pdf documents.SharePoint Bdc request library: The given set of classes simplifies an access to the external data, which can be reached through BDC. The library allows to make simple requests for values from external data source, using a BDC Entity Instance Identifier(s) or a value of a certain BDC Entity field. Developed to interact with Business Data Connectivity of SharePoint 2010.testtom03232012git01: testtom03232012git01testtom03232012git02: testtom03232012git02the north star uc: University projectTyphon: Typhon is a role playing simulation management application, much like Nova, but written in MVC/C#.VRE LabTrove-SharePoint connector: The VRE LabTrove-SharePoint Connector provides a means of integrating the ability to view, post to, and edit posts stored in a LabTrove electronic laboratory notebook from within the familiar environment of Microsoft SharePoint. Once installed and configured, these Web Parts give SharePoint users a straightforward way to interact with any LabTrove installation that they wish to use. They also facilitate users to attach data that is stored in a SharePoint Document Library to the LabTrove posts...

    Read the article

  • CodePlex Daily Summary for Monday, October 21, 2013

    CodePlex Daily Summary for Monday, October 21, 2013Popular ReleasesVirtual Wifi Hotspot for Windows 7 & 8: Virtual Router Plus 2.6.0: Virtual Router Plus 2.6.0Fast YouTube Downloader: Fast YouTube Downloader 2.3.0: Fast YouTube DownloaderABCat: ABCat v.2.0a: ?????? ????? ??? ????????? ????????????. ???????? ???, ?? ?? ????? ???? :) ?? ??????? ??????????? ???????? ?? Windows 7 - ????? ??????? ????????? ?? ???? ??? ?????? ??????.Magick.NET: Magick.NET 6.8.7.101: Magick.NET linked with ImageMagick 6.8.7.1. Breaking changes: - Renamed Matrix classes: MatrixColor = ColorMatrix and MatrixConvolve = ConvolveMatrix. - Renamed Depth method with Channels parameter to BitDepth and changed the other method into a property.VidCoder: 1.5.9 Beta: Added Rip DVD and Rip Blu-ray AutoPlay actions for Windows: now you can have VidCoder start up and scan a disc when you insert it. Go to Start -> AutoPlay to set it up. Added error message for Windows XP users rather than letting it crash. Removed "quality" preset from list for QSV as it currently doesn't offer much improvement. Changed installer to ignore version number when copying files over. Should reduce the chances of a bug from me forgetting to increment a version number. Fixed ...MSBuild Extension Pack: October 2013: Release Blog Post The MSBuild Extension Pack October 2013 release provides a collection of over 480 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUI...VG-Ripper & PG-Ripper: VG-Ripper 2.9.49: changes NEW: Added Support for "ImageTeam.org links NEW: Added Support for "ImgNext.com" links NEW: Added Support for "HostUrImage.com" links NEW: Added Support for "3XVintage.com" linksMedia Companion: Media Companion MC3.583b: As before release but fixed for no movie poster sourcesNew* Both - Added 'An' as option to ignore in title * Movie - Renaming - added %Z - Sorttitle to Legend * Movie - Renaming - added %O - Audio Channels to Legend * Movie - Remove a poster source from priority list. Reset List back to defaults. * Made Media Companion truly portable application. Fixed* Movie - browse for Poster Or Fanart, allows for jpg, tbn, png and bmp images * Movie - Alt Fanart Browser - Url or Browse window now fully...MoreTerra (Terraria World Viewer): MoreTerra 1.11.3.1: Release 1.11.3.1 ================ = New Features = ================ Added markers for Copper Cache, Silver Cache and the Enchanted Sword. ============= = Bug Fixes = ============= Use Official Colors now no longer tries to change the Draw Wires option instead. World reading was breaking for people with a stock 1.2 Terraria version. Changed world name reading so it does not crash the program if you load MoreTerra while Terraria is saving the world. =================== = Feature Removal = =...patterns & practices - Windows Azure Guidance: Cloud Design Patterns: 1st drop of Cloud Design Patterns project. It contains 14 patterns with 6 related guidance.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3): Includes all changes in v1.3 beta 1 and v1.3 beta 2 Support for Windows 8.1 RTM and VS2013 RTM Xaml: New property: AutoLoadPluginTypes to help control which stock plugins are loaded by default (requires AutoLoadPlugins = true). 
Support for SystemMediaTransportControls on Windows 8.1 JS: Support for visual markers in the timeline. JS: Support for markers collection and markerreached event. JS: New ChaptersPlugin to automatically populate timeline with chapter tracks. JS: Audio an...Json.NET: Json.NET 5.0 Release 8: Fix - Fixed not writing string quotes when QuoteName is falsePowerShell Community Extensions: 3.1 Production: PowerShell Community Extensions 3.1 Release NotesOct 17, 2013 This version of PSCX supports Windows PowerShell 3.0 and 4.0 See the ReleaseNotes.txt download above for more information.SQL Power Doc: Version 1.0.2.1: Misc. bug fixes Added logic to resolve members of a Windows Group server login Added columns to Excel workbooks to show definitions for server permissions, server roles, database permissions, and database rolesSocial Network Importer for NodeXL: SocialNetImporter(v.1.9): This new version includes: - Download latest status update and use it as vertex tooltip - Limit the timelines to parse to me, my friends or both - Fixed some reported bugs about the fan page and group importer - Fixed the login bug reported latelyTerrariViewer: TerrariViewer v7.1 [Terraria Inventory Editor]: You can now backspace in number fields Items added in 1.2.0.3 no longer corrupt player files Buff durations capped at 9999999 Item stacks capped at 9999999 Version info added Prefix IDs corrected Shoe and Eye color box are now properly clickable Moved Bank and Safe into their own tab Users will now be notified of new updatesPython Tools for Visual Studio: 2.0: PTVS 2.0 We’re pleased to announce the release of Python Tools for Visual Studio 2.0 RTM. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, IPython, and cross platform and cross language debugging support. QUICK VIDEO OVERVIEW For a quick overview of the general IDE experience, please watch this v...LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. 
There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: Full support for Dev12 binaries and project files Full support for Windows XP New sample highlighting the Client and Server APIs : BlackJack Expose underlying native handle to set custom options on http_client Improvements to Listener Library Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source CodeNew Projects4Elements game project: Project for a game were players will be able to earn money by just playingBible: This project aims to deliver an awesome Bible application for as many platforms as possible.cIFrex: cIFrexConnecting SharePoint with Exchange Server 2010/2013 using remote sessions: The project focuses on connecting SharePoint 2013 with Exchange Server 2010/2013 via Remote PowerShell with C# API Crzy Cms: Crzy CMS is a content management system created in ASP.NET using MVC/Entity Framework.DownloadsArchiver: Kann automatisch das Downloads-Verzeichnis aufräumen.GatePass: Gatepass SystemGenerate PowerShell scripts for SharePoint 2010 Site Collection Administration: Winform tool to generate PowerShell scripts, to move, create, backup and restore site collections It needs to be run on SharePoint Server 2010Generate PowerShell scripts for SharePoint 2013 Site Collection Administration: Winform tool to generate PowerShell scripts, to move, create, backup and restore site collections It needs to be run on SharePoint Server 2013GTerm: GTerm is alternative Terminal for Windows Vista/XP/7 and 8.Ignitron Chess: A newly created software product for playing chess by Ignitron Development Team.Introduccion a C# con VS2012: El curso de Introducción a C# con Visual Studio 2012, tiene como objetivo principal enseñar al estudiante los conceptos básicos de programación utilizando C#Punch Clock: Punch Clock is a online tool for any employee to sign up and calculate their daily,weekly monthly hours and download the report in PDF/Excel format. PVMonitor: This project will provide a monitoring dashboard for PV systems.qttexteditor: qttexteditorRegister Interest app: Register Interest is an app template where you can use to record interest of attendees, useful in marketing distribution.SharePoint Simple Developer: Project allows to simply SharePoint 2010 and SharePoint 2013 daily developers tasks. Main goal is to implement validation of built solutions. Silverlight QB Rating Calculator: Silverlight QB Rating Calculator is a Silverlight 5 App that calculates a QB's NFL and NCAA passing efficiency ratings.SimpleAdditionPage: This projects consists of a simple ASP.NET in VB.NET page that allows users to enter 2 numbers, and display their sum.SP SIN Store: SP SIN Store is an extension to SP SIN that allows users to install solutions in SharePoint similar to the WordPress plugins. Project is in Alpha stage.

    Read the article

  • CodePlex Daily Summary for Monday, August 18, 2014

    CodePlex Daily Summary for Monday, August 18, 2014Popular ReleasesMagick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...WallSwitch: WallSwitch 1.2.5: Version 1.2.5 Changes: Added support for sequential order in collage mode. Added option to display multiple images per switch in collage mode. Fixed bug where border width wasn't being loaded properly, and was reverting to default values. Fixed bug where sequential order was repeating images on multiple monitors. Decreased likelihood of random images being repeated.OpenCppCoverage: OpenCppCoverage 0.9.1: - Add Jenkins support. - Command line argument can be placed inside a config file. If you do not have Visual Studio C++ 2013 you need to download redistributable packages: http://www.microsoft.com/en-us/download/details.aspx?id=40784Easy Backup Windows Service: Release 2.0 with CU: Fix log error when "To" directory not exist in fyle system. Force run program as administrator by default. Add 'everyday' schedule element. Update solution to VS 2013.Easy Backup Application: Release 2.0 with CU: Fix log error when "To" directory not exist in fyle system. Fix app location initialization. Force run program as administrator by default. Update solution to VS 2013.TEBookConverter: 1.5: Added: Turkish and French translations Added: A few interface changes Removed: SkinDynamulet: Dynamulet v0.1: DynamoDB Transaction Server v0.1Console parallel nunit tests runner: ConsoleUnitTestsRunner 1.03: bugfixingFluentx: Fluentx v1.5.3: Added few more extension methods.fastJSON: v2.1.2: 2.1.2 - bug fix circular referencesJPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728 JPush REST API Version: v3 JPush Documentation Reference .NET framework: v4.0 or above. Sample: class: JPushClientV3 2014 Augest 15th.SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.Google .Net API: Drive.Sample: Google .NET Client API – Drive.SampleInstructions for the Google .NET Client API – Drive.Sample</h2> http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.SampleBrowse Source, or main file http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samplesProgram.cs <h3>1. 
Checkout Instructions</h3> <p><b>Prerequisites:</b> Install Visual Studio, and <a href="http://mercurial.selenic.com/">Mercurial</a>.</p> ...FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...DNN CMS Platform: 07.03.02: Major Highlights Fixed backwards compatibility issue with 3rd party control panels Fixed issue in the drag and drop functionality of the File Uploader in IE 11 and Safari Fixed issue where users were able to create pages with the same name Fixed issue that affected older versions of DNN that do not include the maxAllowedContentLength during upgrade Fixed issue that stopped some skins from being upgraded to newer versions Fixed issue that randomly showed an unexpected error during us...WordMat: WordMat for Mac: WordMat for Mac has a few limitations compared to the Windows version - Graph is not supported (Gnuplot, GeoGebra and Excel works) - Units are not supported yet (Coming up) The Mac version is yet as tested as the windows version.MFCMAPI: August 2014 Release: Build: 15.0.0.1042 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010/2013 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeEWSEditor: EwsEditor 1.10 Release: • Export and import of items as a full fidelity steam works - without proxy classes! - I used raw EWS POSTs. • Turned off word wrap for EWS request field in EWS POST windows. • Several windows with scrolling texts boxes were limiting content to 32k - I removed this restriction. • Split server timezone info off to separate menu item from the timezone info windows so that the timezone info window could be used without logging into a mailbox. • Lots of updates to the TimeZone window. • UserAgen...New Projectsballmon: ballmonExchange Database Recovery With and Without Log Files is Possible: This segments giving an overview of Exchange Server transaction log files. It describes process how users can recover their database with & without log filesFabs.Net: Ego tatmini ve gelisme amaçli yaptigim bir projedir.JacoChat: JacoChat is a simple chatting interface that uses my personal webserver as a "wall" for people to chat on.ManagedWin32: ManagedWin32 is a library that exposes the Win32 API to .NET applications.Open XML Extensions: The project provides additions to the Open XML SDK and related projects (e.g., PowerTools for Open XML), starting with MemoryStreams for Open XML Documents.orntic: Project for insurace companyTBOX: The Treasure Box Library: TBOX is a mutli-platform c library for unix, windows, mac, ios, android, etc. It includes asio, stream, container, algorithm, xml and other library modules.WeatherTS: Typescript weather application.?????@/????: ??????????????:????,????,????,???????,????????,??????:????????,?????! ?????????: ????????????????????,????????:??、??、???,?????????????????????! 
????-??: ??????????????,????,???????????????。

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings". What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and which are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, Bean collections, whatever), and have a different set of columns. Apart from that, however, they happen to be identical; same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

    - I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the taskflow
    - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before and after use case. Here's a basic pageDef definition for our familiar Departments table.

    <executables>
      <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                id="DepartmentsView1Iterator"/>
    </executables>
    <bindings>
      <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
        <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
          <AttrNames>
            <Item Value="DepartmentId"/>
            <Item Value="DepartmentName"/>
            <Item Value="ManagerId"/>
            <Item Value="LocationId"/>
          </AttrNames>
        </nodeDefinition>
      </tree>
    </bindings>

    Here's the adaptive version:

    <executables>
      <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                id="TableSourceIterator"/>
    </executables>
    <bindings>
      <tree IterBinding="TableSourceIterator" id="GenericView">
        <nodeDefinition Name="GenericViewNode"/>
      </tree>
    </bindings>

    You'll notice three changes here.

    - Most importantly, you'll see that the hard-coded View Object name that formerly populated the iterator Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This, of course, is key: we can pass a parameter to the task flow, telling it exactly what VO to instantiate to populate this table!
    - I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
    - The tree binding itself has been simplified and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!

    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well: first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from being a fixed known list to being a generic metadata-driven set:

    <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
              fetchSize="#{bindings.GenericView.rangeSize}"
              emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
              var="row" rowBandingInterval="0"
              selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
              selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
              rowSelection="single" id="t1">
      <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
        <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                   sortProperty="#{def.name}" id="c1">
          <af:outputText value="#{row[def.name]}" id="ot1"/>
        </af:column>
      </af:forEach>
    </af:table>

    Of course you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, but you can see the value of using ADFBC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can spot the subtle change:

    <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
              id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. So if you have some really common patterns within your app you can wrap them up and reuse them across multiple developments without having to dictate data control names, or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract is to go from March of this year, expiring on December 31st of this year - 10 months. Over which a payment is to be paid on the 30th of each month for a set amount. I've been fulfilling my end of the contract on all points - doing server maintenence, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission). It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being about 2-3 days within the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money. The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday, the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim but you get the idea (I hope?). I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last months work and call it a day. Unfortunately I still haven't gotten last months check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". 
So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this, realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the servers root password - I just built the box last week and haven't had a chance to send the guy the root password. Wasn't in a rush, I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out. Here's the conundrum. I have the root password, they do not. If I give them this password all the leverage I have is gone, out the door and out of my hands. During this argument of why am I not getting paid the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server, I just don't know where to go. I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well he's not holding their stuff hostage" point of view, but what stops them from hiring some one else to just fix the issue at hand? For all I know the guys nephew is a "l33t hax0r" and can figure it out for free. I can give in, document as much as I can and take him to small claims court. This is breach of contract, I'm not getting paid. I have a case, right? ???? Does anyone have any experience in this? What can I do? What are my options? I'm broke, I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to work - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • Beyond Cloud Technology, Enabling A More Agile and Responsive Organization

    - by sxkumar
    This is the second part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain”. In the first part,  I was sharing with you how a broad-based transformation makes cloud more than a technology initiative, I will describe in this section how it requires people (organizational) and process changes as well, and these changes are as critical as is the choice of right tools and technology. People: Most IT organizations have a fairly complex organizational structure. There are different groups, managing different pieces of the puzzle, and yet, they don't always work together. Provisioning a new application therefore may require a request to float endlessly through system administrators, DBAs and middleware admin worlds – resulting in long delays and constant finger pointing.  Cloud users expect end-to-end automation - which requires these silos to be greatly simplified, if not completely eliminated.  Most customers I talk to acknowledge this problem but are quick to admit that such a transformation is hard. As hard as it may be, I am afraid that the status quo is no longer an option. Sticking to an organizational structure that was created ages back will not only impede cloud adoption,  it also risks making the IT skills increasingly irrelevant in a world that is rapidly moving towards converged applications and infrastructure.   Process: Most IT organizations today operate with a mindset that they must fully "control" access to any and all types of IT services. This in turn leads to people clinging on to outdated manual approval processes .  While requiring approvals for scarce resources makes sense, insisting that every single request must be manually approved defeats the very purpose of cloud. Not only this causes delays, thereby at least partially negating the agility benefits, it also results in gross inefficiency. In a cloud environment, self-service access should be governed by policies, quotas that the administrators can define upfront . For a cloud initiative to be successful, IT organizations MUST be ready to empower users by giving them real control rather than insisting on brokering every single interaction between users and the cloud resources. Technology: From a technology perspective, cloud is about consolidation, standardization and automation. A consolidated and standardized infrastructure helps increase utilization and reduces cost. Additionally, it  enables a much higher degree of automation - thereby providing users the required agility while minimizing operational costs.  Obviously, automation is the key to cloud. Unfortunately it hasn’t received as much attention within enterprises as it should have.  Many organizations are just now waking up to the criticality of automation and it still often gets relegated to back burner in favor of other "high priority" projects. However, it is important to understand that without the right type and level of automation, cloud will remain a distant dream for most enterprises. This in turn makes the choice of the cloud management software extremely critical.  For a cloud management software to be effective in an enterprise environment, it must meet the following qualifications: Broad and Deep Solution It should offer a broad and deep solution to enable the kind of broad-based transformation we are talking about.  Its footprint must cover physical and virtual systems, as well as infrastructure, database and application tiers. Too many enterprises choose to equate cloud with virtualization. 
While virtualization is a critical component of a cloud solution, it is just a component and not the whole solution. Similarly, too many people tend to equate cloud with Infrastructure-as-a-Service (IaaS). While it is perfectly reasonable to treat IaaS as a starting point, it is important to realize that it is just the first stepping stone - on its own it can only provide limited business benefits. It is actually the higher level services, such as the (application) platform and business applications, that will bring about a more meaningful transformation to your enterprise. Run and Manage Your Mission Critical Applications Efficiently It should not only be able to run your mission critical applications, it should do so better than before. For enterprises, applications and data are the critical business assets. As such, if you are building a cloud platform that cannot run your ERP application, it isn't truly an "enterprise cloud". Also, be wary of vendors who try to sell you the idea that your applications must be written in a certain way to be able to run on the cloud. That is nothing but a bogus, self-serving argument. For the cloud to be meaningful to enterprises, it should adapt to your applications - not the other way around. Automated, Integrated Set of Cloud Management Capabilities At the root of many of the problems plaguing enterprise IT today is complexity. A complex maze of tools and technology, coupled with archaic processes, results in an environment which is inflexible, inefficient and simply too hard to manage. Management tool consolidation, therefore, is key to the success of your cloud, as tool proliferation adds to complexity, encourages compartmentalization and defeats the very purpose you are building the cloud for. Decision makers ought to be extra cautious about vendors trying to sell them a "suite" of disparate and loosely integrated products as a cloud solution. An effective enterprise cloud management solution needs to provide a tightly integrated set of capabilities for all aspects of cloud lifecycle management. A simple question to ask: will your environment be more or less complex after you implement your cloud? More often than not, the answer will surprise you. At Oracle, we have understood these challenges and have been working hard to create cloud solutions that are relevant and meaningful for enterprises. And we have been doing it for much longer than you may think. Oracle was one of the very first enterprise software companies to make our products available on the Amazon Cloud. As far back as 2007, we created new cloud solutions such as Cloud Database Backup that are helping customers like Amazon save millions every year. Our cloud solution portfolio is also the broadest and deepest in the industry, covering public, private and hybrid clouds as well as infrastructure, platform and application clouds. It is no coincidence, therefore, that the Oracle Cloud today offers the most comprehensive set of public cloud services in the industry. In large part, this has been made possible by our years of investment in creating cloud-enabling technologies. I will dedicate the third and final part of the blog “Clouds, Clouds Everywhere But not a Drop of Rain” to Oracle Cloud Technologies Building Blocks and how they map into our vision of the Enterprise Cloud. Stay tuned.

    Read the article

  • jqgrid with asp.net webmethod and json working with sorting, paging, searching and LINQ

    - by aimlessWonderer
    THIS WORKS! Most topics covering jqgrid and asp.net seem to relate to just receiving JSON, or working in the MVC framework, or utilizing other handlers or web services... but not many dealt with actually passing parameters back to an actual webmethod in the codebehind. Furthermore, scarce are the examples that contain successful implementation the AJAX paging, sorting, or searching along with LINQ to SQL for asp.net jqGrid. Below is a working example that may help others who need help to pass parameters to jqGrid in order to have correct paging, sorting, filtering.. it uses pieces from here and there... ================================================== First, THE JAVASCRIPT <script type="text/javascript"> $(document).ready(function() { var grid = $("#list"); $("#list").jqGrid({ // setup custom parameter names to pass to server prmNames: { search: "isSearch", nd: null, rows: "numRows", page: "page", sort: "sortField", order: "sortOrder" }, // add by default to avoid webmethod parameter conflicts postData: { searchString: '', searchField: '', searchOper: '' }, // setup ajax call to webmethod datatype: function(postdata) { mtype: "GET", $.ajax({ url: 'PageName.aspx/getGridData', type: "POST", contentType: "application/json; charset=utf-8", data: JSON.stringify(postdata), dataType: "json", success: function(data, st) { if (st == "success") { var grid = jQuery("#list")[0]; grid.addJSONData(JSON.parse(data.d)); } }, error: function() { alert("Error with AJAX callback"); } }); }, // this is what jqGrid is looking for in json callback jsonReader: { root: "rows", page: "page", total: "totalpages", records: "totalrecords", cell: "cell", id: "id", //index of the column with the PK in it userdata: "userdata", repeatitems: true }, colNames: ['Id', 'First Name', 'Last Name'], colModel: [ { name: 'id', index: 'id', width: 55, search: false }, { name: 'fname', index: 'fname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn']} }, { name: 'lname', index: 'lname', width: 200, searchoptions: { sopt: ['eq', 'ne', 'cn']} } ], rowNum: 10, rowList: [10, 20, 30], pager: jQuery("#pager"), sortname: "fname", sortorder: "asc", viewrecords: true, caption: "Grid Title Here" }).jqGrid('navGrid', '#pager', { edit: false, add: false, del: false }, {}, // default settings for edit {}, // add {}, // delete { closeOnEscape: true, closeAfterSearch: true}, //search {} ) }); </script> ================================================== Second, THE C# WEBMETHOD [WebMethod] public static string getGridData(int? numRows, int? page, string sortField, string sortOrder, bool isSearch, string searchField, string searchString, string searchOper) { string result = null; MyDataContext db = null; try { //--- retrieve the data db = new MyDataContext("my connection string path"); var query = from u in db.TBL_USERs select u; //--- determine if this is a search filter if (isSearch) { searchOper = getOperator(searchOper); // need to associate correct operator to value sent from jqGrid string whereClause = String.Format("{0} {1} {2}", searchField, searchOper, "@" + searchField); //--- associate value to field parameter Dictionary<string, object> param = new Dictionary<string, object>(); param.Add("@" + searchField, searchString); query = query.Where(whereClause, new object[1] { param }); } //--- setup calculations int pageIndex = page ?? 1; //--- current page int pageSize = numRows ?? 
10; //--- number of rows to show per page int totalRecords = query.Count(); //--- number of total items from query int totalPages = (int)Math.Ceiling((decimal)totalRecords / (decimal)pageSize); //--- number of pages //--- filter dataset for paging and sorting IQueryable<TBL_USER> orderedRecords = query.OrderBy(sortfield); IEnumerable<TBL_USER> sortedRecords = orderedRecords.ToList(); if (sortorder == "desc") sortedRecords= sortedRecords.Reverse(); sortedRecords= sortedRecords .Skip((pageIndex - 1) * pageSize) //--- page the data .Take(pageSize); //--- format json var jsonData = new { totalpages = totalPages, //--- number of pages page = pageIndex, //--- current page totalrecords = totalRecords, //--- total items rows = ( from row in sortedRecords select new { i = row.USER_ID, cell = new string[] { row.USER_ID.ToString(), row.FNAME.ToString(), row.LNAME } } ).ToArray() }; result = Newtonsoft.Json.JsonConvert.SerializeObject(jsonData); } catch (Exception ex) { Debug.WriteLine(ex); } finally { if (db != null) db.Dispose(); } return result; } ================================================== Third, NECESSITIES In order to have dynamic OrderBy clauses in the LINQ, I had to pull in a class to my AppCode folder called 'Dynamic.cs'. You can retrieve the file from downloading here. You will find the file in the "DynamicQuery" folder. That file will give you the ability to utilized dynamic ORDERBY clause since we don't know what column we're filtering by except on the initial load. To serialize the JSON back from the C-sharp to the JS, I incorporated the James Newton-King JSON.net DLL found here : http://json.codeplex.com/releases/view/37810. After downloading, there is a "Newtonsoft.Json.Compact.dll" which you can add in your Bin folder as a reference Here's my USING's block using System; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Web.UI.WebControls; using System.Web.Services; using System.Linq.Dynamic; For the Javascript references, I'm using the following scripts in respective order in case that helps some folks: 1) jquery-1.3.2.min.js ... 2) jquery-ui-1.7.2.custom.min.js ... 3) json.min.js ... 4) i18n/grid.locale-en.js ... 5) jquery.jqGrid.min.js For the CSS, I'm using jqGrid's necessities as well as the jQuery UI Theme: 1) jquery_theme/jquery-ui-1.7.2.custom.css ... 2) ui.jqgrid.css The key to getting the parameters from the JS to the WebMethod without having to parse an unserialized string on the backend or having to setup some JS logic to switch methods for different numbers of parameters was this block postData: { searchString: '', searchField: '', searchOper: '' }, Those parameters will still be set correctly when you actually do a search and then reset to empty when you "reset" or want the grid to not do any filtering Hope this helps some others!!!! Please reply if you find major issues or ways of refactoring or doing better that I haven't considered.
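    For readers who have not used the Dynamic LINQ sample library that this post leans on, here is a minimal C# sketch of just the dynamic sort-and-page step, kept separate from the webmethod above. It assumes the System.Linq.Dynamic class from the "DynamicQuery" folder mentioned above is referenced; the helper name SortAndPage is illustrative, and the sortField/sortOrder parameters mirror the casing declared in the webmethod signature.

    using System.Linq;
    using System.Linq.Dynamic; // the Dynamic LINQ sample class referenced above

    public static class GridPaging
    {
        // sortField/sortOrder come straight from jqGrid via the webmethod; pageIndex is 1-based.
        public static IQueryable<T> SortAndPage<T>(IQueryable<T> query,
            string sortField, string sortOrder, int pageIndex, int pageSize)
        {
            // Dynamic LINQ accepts the ordering as plain text, e.g. "FNAME desc",
            // which is what lets the sort column be decided at runtime.
            var ordered = query.OrderBy(sortField + (sortOrder == "desc" ? " desc" : " asc"));

            // Standard LINQ paging: skip the earlier pages, then take one page.
            return ordered.Skip((pageIndex - 1) * pageSize).Take(pageSize);
        }
    }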

    Read the article

  • Issues using SoapUI with Eclipse

    - by Epitaph
    I get the following in the Error log when using the latest SoapUI 3.5. plugin in Eclipse Galileo on Ubuntu. Also, this seems to continually freeze Eclipse. I have tried a complete uninstall and restarting Eclipse with -clean argument to no avail. Has anyone encountered the same issue? Is there anything I can do to fix this? Could not acquire children from extension: com.eviware.soapui.eclipse.projectContent for Plugin org.eclipse.ui.navigator java.lang.NullPointerException at org.eclipse.ui.internal.navigator.NavigatorContentService.rememberContribution(NavigatorContentService.java:686) at org.eclipse.ui.internal.navigator.extensions.SafeDelegateTreeContentProvider.getElements(SafeDelegateTreeContentProvider.java:97) at org.eclipse.ui.internal.navigator.NavigatorContentServiceContentProvider.getElements(NavigatorContentServiceContentProvider.java:156) at org.eclipse.jface.viewers.StructuredViewer.getRawChildren(StructuredViewer.java:959) at org.eclipse.jface.viewers.ColumnViewer.getRawChildren(ColumnViewer.java:703) at org.eclipse.jface.viewers.AbstractTreeViewer.getRawChildren(AbstractTreeViewer.java:1330) at org.eclipse.jface.viewers.TreeViewer.getRawChildren(TreeViewer.java:390) at org.eclipse.jface.viewers.AbstractTreeViewer.getFilteredChildren(AbstractTreeViewer.java:636) at org.eclipse.jface.viewers.AbstractTreeViewer.getSortedChildren(AbstractTreeViewer.java:602) at org.eclipse.jface.viewers.AbstractTreeViewer.updateChildren(AbstractTreeViewer.java:2578) at org.eclipse.jface.viewers.AbstractTreeViewer.internalRefreshStruct(AbstractTreeViewer.java:1863) at org.eclipse.jface.viewers.TreeViewer.internalRefreshStruct(TreeViewer.java:716) at org.eclipse.jface.viewers.AbstractTreeViewer.internalRefresh(AbstractTreeViewer.java:1838) at org.eclipse.jface.viewers.AbstractTreeViewer.internalRefresh(AbstractTreeViewer.java:1794) at org.eclipse.ui.navigator.CommonViewer.internalRefresh(CommonViewer.java:566) at org.eclipse.jface.viewers.StructuredViewer$8.run(StructuredViewer.java:1484) at org.eclipse.jface.viewers.StructuredViewer.preservingSelection(StructuredViewer.java:1392) at org.eclipse.jface.viewers.TreeViewer.preservingSelection(TreeViewer.java:402) at org.eclipse.jface.viewers.StructuredViewer.preservingSelection(StructuredViewer.java:1353) at org.eclipse.jface.viewers.StructuredViewer.refresh(StructuredViewer.java:1482) at org.eclipse.jface.viewers.ColumnViewer.refresh(ColumnViewer.java:548) at org.eclipse.ui.navigator.CommonViewer.refresh(CommonViewer.java:358) at org.eclipse.ui.navigator.CommonViewer.refresh(CommonViewer.java:515) at org.eclipse.jface.viewers.StructuredViewer.refresh(StructuredViewer.java:1414) at org.eclipse.jface.viewers.StructuredViewer.addFilter(StructuredViewer.java:582) at org.eclipse.ui.internal.navigator.resources.actions.WorkingSetActionProvider.setWorkingSet(WorkingSetActionProvider.java:280) at org.eclipse.ui.internal.navigator.resources.actions.WorkingSetActionProvider$3.propertyChange(WorkingSetActionProvider.java:226) at org.eclipse.ui.internal.navigator.extensions.ExtensionStateModel.firePropertyChangeEvent(ExtensionStateModel.java:135) at org.eclipse.ui.internal.navigator.extensions.ExtensionStateModel.setBooleanProperty(ExtensionStateModel.java:90) at org.eclipse.ui.internal.navigator.resources.actions.WorkingSetActionProvider.restoreState(WorkingSetActionProvider.java:318) at org.eclipse.ui.navigator.NavigatorActionService.initialize(NavigatorActionService.java:372) at 
org.eclipse.ui.navigator.NavigatorActionService.getActionProviderInstance(NavigatorActionService.java:355) at org.eclipse.ui.navigator.NavigatorActionService.fillActionBars(NavigatorActionService.java:253) at org.eclipse.ui.navigator.CommonNavigatorManager.selectionChanged(CommonNavigatorManager.java:239) at org.eclipse.jface.viewers.Viewer$2.run(Viewer.java:162) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.core.runtime.Platform.run(Platform.java:888) at org.eclipse.ui.internal.JFaceUtil$1.run(JFaceUtil.java:48) at org.eclipse.jface.util.SafeRunnable.run(SafeRunnable.java:175) at org.eclipse.jface.viewers.Viewer.fireSelectionChanged(Viewer.java:160) at org.eclipse.jface.viewers.StructuredViewer.updateSelection(StructuredViewer.java:2132) at org.eclipse.jface.viewers.StructuredViewer.setSelection(StructuredViewer.java:1669) at org.eclipse.jface.viewers.TreeViewer.setSelection(TreeViewer.java:1124) at org.eclipse.ui.navigator.CommonViewer.setSelection(CommonViewer.java:380) at org.eclipse.ui.navigator.CommonNavigator.selectReveal(CommonNavigator.java:362) at org.eclipse.ui.internal.navigator.actions.LinkEditorAction$3.run(LinkEditorAction.java:101) at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42) at org.eclipse.ui.internal.navigator.actions.LinkEditorAction$2.runInUIThread(LinkEditorAction.java:89) at org.eclipse.ui.progress.UIJob$1.run(UIJob.java:95) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:134) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3468) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3115) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311)

    Read the article

  • Why does WPF Style to show validation errors in ToolTip work for a TextBox but fails for a ComboBox?

    - by Mike B
    I am using a typical Style to display validation errors as a tooltip from IErrorDataInfo for a textbox as shown below and it works fine. <Style TargetType="{x:Type TextBox}"> <Style.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/> </Trigger> </Style.Triggers> </Style> But when i try to do the same thing for a ComboBox like this it fails <Style TargetType="{x:Type ComboBox}"> <Style.Triggers> <Trigger Property="Validation.HasError" Value="true"> <Setter Property="ToolTip" Value="{Binding RelativeSource={RelativeSource Self}, Path=(Validation.Errors)[0].ErrorContent}"/> </Trigger> </Style.Triggers> </Style> The error I get in the output window is: System.Windows.Data Error: 17 : Cannot get 'Item[]' value (type 'ValidationError') from '(Validation.Errors)' (type 'ReadOnlyObservableCollection`1'). BindingExpression:Path=(0)[0].ErrorContent; DataItem='ComboBox' (Name='ownerComboBox'); target element is 'ComboBox' (Name='ownerComboBox'); target property is 'ToolTip' (type 'Object') ArgumentOutOfRangeException:'System.ArgumentOutOfRangeException: Specified argument was out of the range of valid values.Parameter name: index' Oddly it also attempts to make invalid Database changes when I close the window if I change any ComboBox values (This is also when the binding error occurs)!!! Cannot insert the value NULL into column 'EmpFirstName', table 'OITaskManager.dbo.Employees'; column does not allow nulls. INSERT fails. The statement has been terminated. Simply by commenting the style out everyting works perfectly. How do I fix this? Just in case anyone needs it one of the comboBox' xaml follows: <ComboBox ItemsSource="{Binding Path=Employees}" SelectedValuePath="EmpID" SelectedValue="{Binding Path=SelectedIssue.Employee2.EmpID, Mode=OneWay, ValidatesOnDataErrors=True}" ItemTemplate="{StaticResource LastNameFirstComboBoxTemplate}" Height="28" Name="ownerComboBox" Width="120" Margin="2" SelectionChanged="ownerComboBox_SelectionChanged" /> <DataTemplate x:Key="LastNameFirstComboBoxTemplate"> <TextBlock> <TextBlock.Text> <MultiBinding StringFormat="{}{1}, {0}" > <Binding Path="EmpFirstName" /> <Binding Path="EmpLastName" /> </MultiBinding> </TextBlock.Text> </TextBlock> </DataTemplate> SelectionChanged: (I do plan to implement commanding before long but, as this is my first WPF project I have not gone full MVVM yet. I am trying to take things in small-medium sized bites) // This is done this way to maintain the DataContext Integrity // and avoid an error due to an Object being "Not New" in Linq-to-SQL private void ownerComboBox_SelectionChanged(object sender, SelectionChangedEventArgs e) { Employee currentEmpl = ownerComboBox.SelectedItem as Employee; if (currentEmpl != null && currentEmpl != statusBoardViewModel.SelectedIssue.Employee2) { statusBoardViewModel.SelectedIssue.Employee2 = currentEmpl; } }

    Read the article

  • JAX-WS MarshalException with custom JAX-B bindings: Unable to marshal type "java.lang.String" as an

    - by MoneyMark
    I seem to be having an issue with Jax-WS and Jax-b playing nicely together. I need to consume a web-service, which has a predefined WSDL. When executing the generated client I am receiving the following error: javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException - with linked exception: [com.sun.istack.SAXException2: unable to marshal type "java.lang.String" as an element because it is missing an @XmlRootElement annotation] This started occurring when I used an external custom binding file to map needlessly complex types to java.lang.string. Here is an excerpt from my binding file: <?xml version="1.0" encoding="UTF-8"?> <bindings xmlns="http://java.sun.com/xml/ns/jaxb" version="2.0" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"> <bindings schemaLocation="http://localhost:7777/GESOR/services/RegistryUpdatePort?wsdl#types?schema1" node="/xs:schema"> <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='company_name']"> <property> <baseType name="java.lang.String" /> </property> </bindings> <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='address1']"> <property> <baseType name="java.lang.String" /> </property> </bindings> <bindings node="//xs:element[@name='StwrdCompany']//xs:complexType//xs:sequence//xs:element[@name='address2']"> <property> <baseType name="java.lang.String" /> </property> </bindings> ...more fields </bindings> </bindings> When executing wsimport against the provided WSDL, StwrdCompany is generated with the following variables declared: @XmlRootElement(name = "StwrdCompany") public class StwrdCompany { @XmlElementRef(name = "company_name", type = JAXBElement.class) protected String companyName; @XmlElementRef(name = "address1", type = JAXBElement.class) protected String address1; @XmlElementRef(name = "address2", type = JAXBElement.class) ... more fields ... getters/setters @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "value" }) public static class CompanyName { @XmlValue protected String value; @XmlAttribute protected Boolean updateToNULL; /** * Gets the value of the value property. * * @return * possible object is * {@link String } * */ public String getValue() { return value; } /** * Sets the value of the value property. * * @param value * allowed object is * {@link String } * */ public void setValue(String value) { this.value = value; } /** * Gets the value of the updateToNULL property. * * @return * possible object is * {@link Boolean } * */ public boolean isUpdateToNULL() { if (updateToNULL == null) { return false; } else { return updateToNULL; } } /** * Sets the value of the updateToNULL property. * * @param value * allowed object is * {@link Boolean } * */ public void setUpdateToNULL(Boolean value) { this.updateToNULL = value; } ... more inner classes } } Finally, here is the associated snippet from the WSDL that seems to be causing such grief. 
<xs:element name="StwrdCompany"> <xs:complexType> <xs:sequence> <xs:element maxOccurs="1" minOccurs="0" name="company_name" nillable="true"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute default="false" name="updateToNULL" type="xs:boolean"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> <xs:element maxOccurs="1" minOccurs="0" name="address1" nillable="true"> <xs:complexType> <xs:simpleContent> <xs:extension base="xs:string"> <xs:attribute default="false" name="updateToNULL" type="xs:boolean"/> </xs:extension> </xs:simpleContent> </xs:complexType> </xs:element> ... more fields in the same format <xs:element maxOccurs="1" minOccurs="0" name="p_source_timestamp" nillable="false" type="xs:string"/> </xs:sequence> <xs:attribute name="company_xid" type="xs:string"/> </xs:complexType> </xs:element> The reason for the custom binding is so I can map user input from a pojo into the StwrdCompany object more easily, whether it be direct instantiation or through the use of Dozer for bean mapping. I was unable to successfully map between the objects without the custom binding. Finally, one other thing I tried was setting a globalBinding definition: <globalBindings generateValueClass="false"></globalBindings> This caused the server to through an argument mismatch exception since the Soap Message was using xs:string xml types instead of passing the defined complex types, so I abandoned that idea. Any insight into what is causing the MarshalException or how to go about solving the issue of calling the webservice and mapping these objects more easily, is greatly appreciated. I've been searching for days and I sadly think I am stumped.

    Read the article

  • Using windows command line from Pascal

    - by Jordan
    I'm trying to use some windows command line tools from within a short Pascal program. To make it easier, I'm writing a function called DoShell which takes a command line string as an argument and returns a record type called ShellResult, with one field for the process's exitcode and one field for the process's output text. I'm having major problems with some standard library functions not working as expected. The DOS Exec() function is not actually carrying out the command i pass to it. The Reset() procedure gives me a runtime error RunError(2) unless i set the compiler mode {I-}. In that case i get no runtime error, but the Readln() functions that i use on that file afterwards don't actually read anything, and furthermore the Writeln() functions used after that point in the code execution do nothing as well. Here's the source code of my program so far. I'm using Lazarus 0.9.28.2 beta, with Free Pascal Compiler 2.24 program project1; {$mode objfpc}{$H+} uses Classes, SysUtils, StrUtils, Dos { you can add units after this }; {$IFDEF WINDOWS}{$R project1.rc}{$ENDIF} type ShellResult = record output : AnsiString; exitcode : Integer; end; function DoShell(command: AnsiString): ShellResult; var exitcode: Integer; output: AnsiString; exepath: AnsiString; exeargs: AnsiString; splitat: Integer; F: Text; readbuffer: AnsiString; begin //Initialize variables exitcode := 0; output := ''; exepath := ''; exeargs := ''; splitat := 0; readbuffer := ''; Result.exitcode := 0; Result.output := ''; //Split command for processing splitat := NPos(' ', command, 1); exepath := Copy(command, 1, Pred(splitat)); exeargs := Copy(command, Succ(splitat), Length(command)); //Run command and put output in temporary file Exec(FExpand(exepath), exeargs + ' __output'); exitcode := DosExitCode(); //Get output from file Assign(F, '__output'); Reset(F); Repeat Readln(F, readbuffer); output := output + readbuffer; readbuffer := ''; Until Eof(F); //Set Result Result.exitcode := exitcode; Result.output := output; end; var I : AnsiString; R : ShellResult; begin Writeln('Enter a command line to run.'); Readln(I); R := DoShell(I); Writeln('Command Exit Code:'); Writeln(R.exitcode); Writeln('Command Output:'); Writeln(R.output); end.

    Read the article

  • System.Reflection and InvokeMember, storing type, assembly, and object in a class

    - by Cyclone
    I have tested code to call a method inside of a compiled DLL, and it works. I created a class to store the object, type, and loaded assembly, but something is lost in translation because it is unable to find the member I wish to invoke. If I create a type, object, and assembly, and properly load everything into these and perform InvokeMember on the type, it works just fine. However, when I use the things inside of my class, it throws a MissingMemberException and does not invoke the member, obviously. What am I doing wrong? The member in question is a subroutine which takes one argument, a string. This is quite frustrating. Code being called: Dim MyLoadedAssembly As New LoadedAssembly() MyLoadedAssembly.MyAssembly = Assembly.LoadFrom("display.dll") MyLoadedAssembly.MyObject = MyLoadedAssembly.MyAssembly.CreateInstance("Display.UI.Window") MyLoadedAssembly.MyType = MyLoadedAssembly.MyAssembly.GetType("Display.UI.Window") Dim args() As Object = {"test"} MyLoadedAssembly.InvokeMember("Show", args) Private Class LoadedAssembly Public MyType As Type Public MyObject As Object Public MyAssembly As Assembly Public Function InvokeMember(ByVal name As String, ByVal args() As Object) Return MyType.InvokeMember(name, BindingFlags.Default Or BindingFlags.InvokeMethod Or BindingFlags.GetProperty Or BindingFlags.Instance, Nothing, MyObject, args) End Function End Class Code inside of display.dll: Namespace UI Public Class Window Private wind As New System.Windows.Forms.Form Public FullScreen As Boolean = False Public Overloads Sub Show(ByVal text As String) wind.Show() wind.Text = text End Sub Public Overloads Sub Show() wind.Show() End Sub End Class End Namespace The root namespace for display.dll is Display. Why is my code only working when not within this class? System.MissingMethodException was unhandled Message="Method 'Display.UI.Window.Show' not found." 
Source="mscorlib" StackTrace: at System.RuntimeType.InvokeMember(String name, BindingFlags bindingFlags, Binder binder, Object target, Object[] providedArgs, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParams) at System.Type.InvokeMember(String name, BindingFlags invokeAttr, Binder binder, Object target, Object[] args) at IDE.IDE.LoadedAssembly.InvokeMember(String name, Object[] args) in C:\Documents and Settings\Davey\Desktop\RaptorScript\RaptorScript\RaptorScript\IDE.vb:line 69 at IDE.IDE.IDE_Load(Object sender, EventArgs e) in C:\Documents and Settings\Davey\Desktop\RaptorScript\RaptorScript\RaptorScript\IDE.vb:line 50 at System.EventHandler.Invoke(Object sender, EventArgs e) at System.Windows.Forms.Form.OnLoad(EventArgs e) at System.Windows.Forms.Form.OnCreateControl() at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible) at System.Windows.Forms.Control.CreateControl() at System.Windows.Forms.Control.WmShowWindow(Message& m) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ScrollableControl.WndProc(Message& m) at System.Windows.Forms.ContainerControl.WndProc(Message& m) at System.Windows.Forms.Form.WmShowWindow(Message& m) at System.Windows.Forms.Form.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.SafeNativeMethods.ShowWindow(HandleRef hWnd, Int32 nCmdShow) at System.Windows.Forms.Control.SetVisibleCore(Boolean value) at System.Windows.Forms.Form.SetVisibleCore(Boolean value) at System.Windows.Forms.Control.set_Visible(Boolean value) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(ApplicationContext context) at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine) at IDE.My.MyApplication.Main(String[] Args) in 17d14f5c-a337-4978-8281-53493378c1071.vb:line 81 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

    Read the article

  • Flex actionscript extending DateChooser, events in calendar

    - by Nemi
    ExtendedDateChooser class is great solution for simple event calendar used in my flex project. You can find it if google for "Adding-Calendar-Event-Entries-to-the-Flex-DateChooser-Component" with a link of updated solution in comments of the post. I posted files below. Problem in that calendar is text events are missing when month is changed. Is there updateCompleted event in Actionscript just like in dateChooser flex component? Like in: <mx:DateChooser id="dc" updateCompleted="goThroughDateChooserCalendarLayoutAndSetEventsInCalendarAgain()"</mx> When scroll event is added, which is available in Actionscript, it gets dispatched but after updateDisplayList() is fired, so didn't manage to answer, why are calendar events erased? Any suggestions, what to add in code, maybe override some function? ExtendedDateChooserClass.mxml <?xml version='1.0' encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:mycomp="cyberslingers.controls.*" layout="absolute" creationComplete="init()"> <mx:Script> <![CDATA[ import cyberslingers.controls.ExtendedDateChooser; import mx.rpc.events.ResultEvent; import mx.rpc.events.FaultEvent; import mx.controls.Alert; public var mycal:ExtendedDateChooser = new ExtendedDateChooser(); // collection to hold date, data and label [Bindable] public var dateCollection:XMLList = new XMLList(); private function init():void { eventList.send(); } private function readCollection(event:ResultEvent):void { dateCollection = event.result.calendarevent; //Position and size the calendar mycal.width = 400; mycal.height = 400; //Add the data from the XML file to the calendar mycal.dateCollection = dateCollection; //Add the calendar to the canvas this.addChild(mycal); } private function readFaultHandler(event:FaultEvent):void { Alert.show(event.fault.message, "Could not load data"); } ]]> </mx:Script> <mx:HTTPService id="eventList" url="data.xml" resultFormat="e4x" result="readCollection(event);" fault="readFaultHandler(event);"/> </mx:Application> ExtendedDateChooser.as package cyberslingers.controls { import flash.events.Event; import flash.events.TextEvent; import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.controls.CalendarLayout; import mx.controls.DateChooser; import mx.core.UITextField; import mx.events.FlexEvent; public class ExtendedDateChooser extends DateChooser { public function ExtendedDateChooser() { super(); this.addEventListener(TextEvent.LINK, linkHandler); this.addEventListener(FlexEvent.CREATION_COMPLETE, addEvents); } //datasource public var dateCollection:XMLList = new XMLList(); //-------------------------------------- // Add events //-------------------------------------- /** * Loop through calendar control and add event links * @param e */ private function addEvents(e:Event):void { // loop through all the calendar children for(var i:uint = 0; i < this.numChildren; i++) { var calendarObj:Object = this.getChildAt(i); // find the CalendarLayout object if(calendarObj.hasOwnProperty("className")) { if(calendarObj.className == "CalendarLayout") { var cal:CalendarLayout = CalendarLayout(calendarObj); // loop through all the CalendarLayout children for(var j:uint = 0; j < cal.numChildren; j++) { var dateLabel:Object = cal.getChildAt(j); // find all UITextFields if(dateLabel.hasOwnProperty("text")) { var day:UITextField = UITextField(dateLabel); var dayHTML:String = day.text; day.selectable = true; day.wordWrap = true; day.multiline = true; day.styleName = "EventLabel"; //TODO: passing date as string is not ideal, tough to 
validate //Make sure to add one to month since it is zero based var eventArray:Array = dateHelper((this.displayedMonth+1) + "/" + dateLabel.text + "/" + this.displayedYear); if(eventArray.length > 0) { for(var k:uint = 0; k < eventArray.length; k++) { dayHTML += "<br><A HREF='event:" + eventArray[k].data + "' TARGET=''>" + eventArray[k].label + "</A>"; } day.htmlText = dayHTML; } } } } } } } //-------------------------------------- // Events //-------------------------------------- /** * Handle clicking text link * @param e */ private function linkHandler(event:TextEvent):void { // What do we want to do when user clicks an entry? Alert.show("selected: " + event.text); } //-------------------------------------- // Helpers //-------------------------------------- /** * Build array of events for current date * @param string - current date * */ private function dateHelper(renderedDate:String):Array { var result:Array = new Array(); for(var i:uint = 0; i < dateCollection.length(); i++) { if(dateCollection[i].date == renderedDate) { result.push(dateCollection[i]); } } return result; } } } data.xml <?xml version="1.0" encoding="utf-8"?> <rss> <calendarevent> <date>8/22/2009</date> <data>This is a test 1</data> <label>Stephens Test 1</label> </calendarevent> <calendarevent> <date>8/23/2009</date> <data>This is a test 2</data> <label>Stephens Test 2</label> </calendarevent> </rss>

    Read the article

  • RegLoadAppKey working fine on 32-bit OS, failing on 64-bit OS, even if both processes are 32-bit

    - by James Manning
    I'm using .NET 4 and the new RegistryKey.FromHandle call so I can take the hKey I get from opening a registry file with RegLoadAppKey and operate on it with the existing managed API. I thought at first it was just a matter of a busted DllImport and my call had an invalid type in the params or a missing MarshalAs or whatever, but looking at other registry functions and their DllImport declarations (for instance, on pinvoke.net), I don't see what else to try (I've had hKey returned as both int and IntPtr, both worked on 32-bit OS and fail on 64-bit OS) I've got it down to as simple a repro case as I can - it just tries to create a 'random' subkey then write a value to it. It works fine on my Win7 x86 box and fails on Win7 x64 and 2008 R2 x64, even when it's still a 32-bit process, even run from a 32-bit cmd prompt. EDIT: It also fails in the same way if it's a 64-bit process. on Win7 x86: INFO: Running as Admin in 32-bit process on 32-bit OS Was able to create Microsoft\Windows\CurrentVersion\RunOnceEx\a95b1bbf-7a04-4707-bcca-6aee6afbfab7 and write a value under it on Win7 x64, as 32-bit: INFO: Running as Admin in 32-bit process on 64-bit OS Unhandled Exception: System.UnauthorizedAccessException: Access to the registry key '\Microsoft\Windows\CurrentVersion\RunOnceEx\ce6d5ff6-c3af-47f7-b3dc-c5a1b9a3cd22' is denied. at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str) at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions) at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey) at LoadAppKeyAndModify.Program.Main(String[] args) on Win7 x64, as 64-bit: INFO: Running as Admin in 64-bit process on 64-bit OS Unhandled Exception: System.UnauthorizedAccessException: Access to the registry key '\Microsoft\Windows\CurrentVersion\RunOnceEx\43bc857d-7d07-499c-8070-574d6732c130' is denied. at Microsoft.Win32.RegistryKey.Win32Error(Int32 errorCode, String str) at Microsoft.Win32.RegistryKey.CreateSubKeyInternal(String subkey, RegistryKeyPermissionCheck permissionCheck, Object registrySecurityObj, RegistryOptions registryOptions) at Microsoft.Win32.RegistryKey.CreateSubKey(String subkey, RegistryKeyPermissionCheck permissionCheck) at LoadAppKeyAndModify.Program.Main(String[] args) source: class Program { static void Main(string[] args) { Console.WriteLine("INFO: Running as {0} in {1}-bit process on {2}-bit OS", new WindowsPrincipal(WindowsIdentity.GetCurrent()).IsInRole(WindowsBuiltInRole.Administrator) ? "Admin" : "Normal User", Environment.Is64BitProcess ? 64 : 32, Environment.Is64BitOperatingSystem ? 
64 : 32); if (args.Length != 1) { throw new ApplicationException("Need 1 argument - path to the software hive file on disk"); } string softwareHiveFile = Path.GetFullPath(args[0]); if (File.Exists(softwareHiveFile) == false) { throw new ApplicationException("Specified file does not exist: " + softwareHiveFile); } // pick a random subkey so it doesn't already exist var keyPathToCreate = "Microsoft\\Windows\\CurrentVersion\\RunOnceEx\\" + Guid.NewGuid(); var hKey = RegistryNativeMethods.RegLoadAppKey(softwareHiveFile); using (var safeRegistryHandle = new SafeRegistryHandle(new IntPtr(hKey), true)) using (var appKey = RegistryKey.FromHandle(safeRegistryHandle)) using (var runOnceExKey = appKey.CreateSubKey(keyPathToCreate)) { runOnceExKey.SetValue("foo", "bar"); Console.WriteLine("Was able to create {0} and write a value under it", keyPathToCreate); } } } internal static class RegistryNativeMethods { [Flags] public enum RegSAM { AllAccess = 0x000f003f } private const int REG_PROCESS_APPKEY = 0x00000001; // approximated from pinvoke.net's RegLoadKey and RegOpenKey // NOTE: changed return from long to int so we could do Win32Exception on it [DllImport("advapi32.dll", SetLastError = true)] private static extern int RegLoadAppKey(String hiveFile, out int hKey, RegSAM samDesired, int options, int reserved); public static int RegLoadAppKey(String hiveFile) { int hKey; int rc = RegLoadAppKey(hiveFile, out hKey, RegSAM.AllAccess, REG_PROCESS_APPKEY, 0); if (rc != 0) { throw new Win32Exception(rc, "Failed during RegLoadAppKey of file " + hiveFile); } return hKey; } }

    Read the article

  • asp.net detailsview update method not getting new values

    - by Ali
    Hi all, I am binding a detailsview with objectdatasource which gets the select parameter from the querystring. The detailsview shows the desired record, but when I try to update it, my update method gets the old values for the record (and hence no update). here is my detailsview code: <asp:DetailsView ID="dvUsers" runat="server" Height="50px" Width="125px" AutoGenerateRows="False" DataSourceID="odsUserDetails" onitemupdating="dvUsers_ItemUpdating"> <Fields> <asp:CommandField ShowEditButton="True" /> <asp:BoundField DataField="Username" HeaderText="Username" SortExpression="Username" ReadOnly="true" /> <asp:BoundField DataField="FirstName" HeaderText="First Name" SortExpression="FirstName" /> <asp:BoundField DataField="LastName" HeaderText="Last Name" SortExpression="LastName" /> <asp:BoundField DataField="Email" runat="server" HeaderText="Email" SortExpression="Email" /> <asp:BoundField DataField="IsActive" HeaderText="Is Active" SortExpression="IsActive" /> <asp:BoundField DataField="IsOnline" HeaderText="Is Online" SortExpression="IsOnline" ReadOnly="true" /> <asp:BoundField DataField="LastLoginDate" HeaderText="Last Login" SortExpression="LastLoginDate" ReadOnly="true" /> <asp:BoundField DataField="CreateDate" HeaderText="Member Since" SortExpression="CreateDate" ReadOnly="true" /> <asp:TemplateField HeaderText="Membership Ends" SortExpression="ExpiryDate"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("ExpiryDate") %>'></asp:TextBox> <cc1:CalendarExtender ID="TextBox1_CalendarExtender" runat="server" Enabled="True" TargetControlID="TextBox1"> </cc1:CalendarExtender> </EditItemTemplate> <InsertItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("ExpiryDate") %>'></asp:TextBox> </InsertItemTemplate> <ItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Bind("ExpiryDate") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> </Fields> and here is the objectdatasource code: <asp:ObjectDataSource ID="odsUserDetails" runat="server" SelectMethod="GetAllUserDetailsByUserId" TypeName="QMS_BLL.Membership" UpdateMethod="UpdateUserForClient"> <UpdateParameters> <asp:Parameter Name="User_ID" Type="Int32" /> <asp:Parameter Name="firstName" Type="String" /> <asp:Parameter Name="lastName" Type="String" /> <asp:SessionParameter Name="updatedByUser" SessionField="userId" DefaultValue="1" /> <asp:Parameter Name="expiryDate" Type="DateTime" /> <asp:Parameter Name="Email" Type="String" /> <asp:Parameter Name="isActive" Type="String" /> </UpdateParameters> <SelectParameters> <asp:QueryStringParameter DefaultValue="1" Name="User_ID" QueryStringField="User_ID" Type="Int32" /> </SelectParameters> </asp:ObjectDataSource> Is the OnItemUpdating method still required when you have your custom BLL method called on insertevent? (which is being executed fine in my case but updating with the old values) or am I missing something else? Also I tried to provide an OnItemUpdating method and in there I tried to capture the contents of the textboxes (the new values). I got an exception: "Specified argument was out of the range of valid values. Parameter name: index" when I tried to do: TextBox txtFirstName = (TextBox)dvUsers.Rows[1].Cells[1].Controls[0]; Any help will be most appreciated.
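    One pattern that may help, sketched here in C# under the assumption that the dvUsers_ItemUpdating handler wired up in the markup stays in place: a DetailsView hands the edited values to that handler through e.NewValues (an IOrderedDictionary keyed by the BoundField DataField names), so there is no need to fish TextBox controls out of Rows[1].Cells[1].Controls[0]. Whether this resolves the stale values reaching UpdateUserForClient still depends on the ObjectDataSource parameter names lining up with the bound fields, so treat it as a starting point rather than a confirmed fix.

    // Hedged sketch: reading the edited values inside the existing handler.
    protected void dvUsers_ItemUpdating(object sender, DetailsViewUpdateEventArgs e)
    {
        // e.NewValues is keyed by the bound field names from the markup above.
        string firstName = e.NewValues["FirstName"] as string;
        string lastName  = e.NewValues["LastName"] as string;
        string email     = e.NewValues["Email"] as string;

        // Values can be inspected or adjusted here before the ObjectDataSource
        // builds its call to UpdateUserForClient.
    }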

    Read the article

  • Results Delphi users who wish to use HID USB in windows

    - by Lex Dean
    To Delphi users who wish to use HID USB in Windows: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USB contains a list of keys holding vendor ID and product ID numbers that coincide with the database on the USB web site. These numbers, and the GUID held within each key, give important information needed to drive HID.dll, which is otherwise impossible to use. Control Panel/System/Hardware/Device Manager/Universal Serial Bus Controllers/Mass Storage Devices/Details simply lists the registry data. Access for the programmer has been documented through the Windows API DLLs, with a number of procedures that read the registry. But that is not the problem, even though it looks like the problem! The real issue is information about this part of the registry and how to use it. These keys can be viewed in RegEdit.exe itself. Some parts of the registry, such as the USB keys, have been given a Windows security-system style of protection through Authz.dll to give the USB data read and write protection - even when accessed through the API DLLs. Only Microsoft gives out these details, and we all know Microsoft is no friend of Delphi, while C users have enjoyed this access for over 10 years. Some will make out that such information should never be given out because some idiot may write a stupid virus (true), but the counter-argument is: do Delphi users need to be denied USB access for another ten years? What I do not have is skill in assembly code. I'm looking for someone who can trace how RegEdit.exe gets its access through Authz.dll to reach the USB data. So I'm asking everyone who reads this to petition any friend they have with this skill to get the Authz.dll information we need. In my experience USB.org reply when they have something positive to say, but do not bother when the answer would be even slightly negative. For all simple reasoning, all they had to do was keep a secure key as they have done, and copy the same data into an unsecured key every time it changes, so USB developers could access it without being blocked by Authz.dll. Authz.dll exposes these functions for USB:- AuthzFreeResourceManager AuthzFreeContext AuthzAccessCheck(Flags: DWORD; AuthzClientContext: AUTHZ_CLIENT_CONTEXT_HANDLE; pRequest: PAUTHZ_ACCESS_REQUEST; AuditInfo: AUTHZ_AUDIT_INFO_HANDLE; pSecurityDescriptor: PSECURITY_DESCRIPTOR; OptionalSecurityDescriptorArray: PSECURITY_DESCRIPTOR; OptionalSecurityDescriptorCount: DWORD; //OPTIONAL, Var pReply: AUTHZ_ACCESS_REPLY; pAuthzHandle: PAUTHZ_ACCESS_CHECK_RESULTS_HANDLE): BOOL; AuthzInitializeContextFromSid(Flags: DWORD; UserSid: PSID; AuthzResourceManager: AUTHZ_RESOURCE_MANAGER_HANDLE; pExpirationTime: int64; Identifier: LUID; DynamicGroupArgs: PVOID; pAuthzClientContext: PAUTHZ_CLIENT_CONTEXT_HANDLE): BOOL; AuthzInitializeResourceManager(flags: DWORD; pfnAccessCheck: PFN_AUTHZ_DYNAMIC_ACCESS_CHECK; pfnComputeDynamicGroups: PFN_AUTHZ_COMPUTE_DYNAMIC_GROUPS; pfnFreeDynamicGroups: PFN_AUTHZ_FREE_DYNAMIC_GROUPS; ResourceManagerName: PWideChar; pAuthzResourceManager: PAUTHZ_RESOURCE_MANAGER_HANDLE): BOOL; See further in Authz.h on kolers.com. J Lex Dean.

    Read the article

  • Modern Java alternatives

    - by Ralph
    I'm not sure if stackoverflow is the best forum for this discussion. I have been a Java developer for 14 years and have written an enterprise-level (~500,000 line) Swing application that uses most of the standard library APIs. Recently, I have become disappointed with the progress that the language has made to "modernize" itself, and am looking for an alternative for ongoing development. I have considered moving to the .NET platform, but I have issues with using something the only runs well in Windows (I know about Mono, but that is still far behind Microsoft). I also plan on buying a new Macbook Pro as soon as Apple releases their new rumored Arrandale-based machines and want to develop in an environment that will feel "at home" in Unix/Linux. I have considered using Python or Ruby, but the standard Java library is arguably the largest of any modern language. In JVM-based languages, I looked at Groovy, but am disappointed with its performance. Rumor has it that with the soon-to-be released JDK7, with its InvokeDynamic instruction, this will improve, but I don't know how much. Groovy is also not truly a functional language, although it provides closures and some of the "functional" features on collections. It does not embrace immutability. I have narrowed my search down to two JVM-based alternatives: Scala and Clojure. Each has its strengths and weaknesses. I am looking for the stackoverflow readerships' opinions. I am not an expert at either of these languages; I have read 2 1/2 books on Scala and am currently reading Stu Halloway's book on Clojure. Scala is strongly statically typed. I know the dynamic language folks claim that static typing is a crutch for not doing unit testing, but it does provide a mechanism for compile-time location of a whole class of errors. Scala is more concise than Java, but not as much as Clojure. Scala's inter-operation with Java seems to be better than Clojure's, in that most Java operations are easier to do in Scala than in Clojure. For example, I can find no way in Clojure to create a non-static initialization block in a class derived from a Java superclass. For example, I like the Apache commons CLI library for command line argument parsing. In Java and Scala, I can create a new Options object and add Option items to it in an initialization block as follows (Java code): final Options options = new Options() { { addOption(new Option("?", "help", false, "Show this usage information"); // other options } }; I can't figure out how to the same thing in Clojure (except by using (doit...)), although that may reflect my lack of knowledge of the language. Clojure's collections are optimized for immutability. They rarely require copy-on-write semantics. I don't know if Scala's immutable collections are implemented using similar algorithms, but Rich Hickey (Clojure's inventor) goes out of his way to explain how that language's data structures are efficient. Clojure was designed from the beginning for concurrency (as was Scala) and with modern multi-core processors, concurrency takes on more importance, but I occasionally need to write simple non-concurrent utilities, and Scala code probably runs a little faster for these applications since it discourages, but does not prohibit, "simple" mutability. One could argue that one-off utilities do not have to be super-fast, but sometimes they do tasks that take hours or days to complete. I know that there is no right answer to this "question", but I thought I would open it up for discussion. 
If anyone has a suggestion for another JVM-based language that can be used for enterprise level development, please list it. Also, it is not my intent to start a flame war. Thanks, Ralph

    Read the article

  • Boost::Interprocess Container Container Resizing No Default Constructor

    - by CuppM
    Hi, After combing through the Boost::Interprocess documentation and Google searches, I think I've found the reason/workaround to my issue. Everything I've found, as I understand it, seems to be hinting at this, but doesn't come out and say "do this because...". But if anyone can verify this I would appreciate it. I'm writing a series of classes that represent a large lookup of information that is stored in memory for fast performance in a parallelized application. Because of the size of data and multiple processes that run at a time on one machine, we're using Boost::Interprocess for shared memory to have a single copy of the structures. I looked at the Boost::Interprocess documentation and examples, and they typedef classes for shared memory strings, string vectors, int vector vectors, etc. And when they "use" them in their examples, they just construct them passing the allocator and maybe insert one item that they've constructed elsewhere. Like on this page: http://www.boost.org/doc/libs/1_42_0/doc/html/interprocess/allocators_containers.html So following their examples, I created a header file with typedefs for shared memory classes: namespace shm { namespace bip = boost::interprocess; // General/Utility Types typedef bip::managed_shared_memory::segment_manager segment_manager_t; typedef bip::allocator<void, segment_manager_t> void_allocator; // Integer Types typedef bip::allocator<int, segment_manager_t> int_allocator; typedef bip::vector<int, int_allocator> int_vector; // String Types typedef bip::allocator<char, segment_manager_t> char_allocator; typedef bip::basic_string<char, std::char_traits<char>, char_allocator> string; typedef bip::allocator<string, segment_manager_t> string_allocator; typedef bip::vector<string, string_allocator> string_vector; typedef bip::allocator<string_vector, segment_manager_t> string_vector_allocator; typedef bip::vector<string_vector, string_vector_allocator> string_vector_vector; } Then for one of my lookup table classes, it's defined something like this: class Details { public: Details(const shm::void_allocator & alloc) : m_Ids(alloc), m_Labels(alloc), m_Values(alloc) { } ~Details() {} int Read(BinaryReader & br); private: shm::int_vector m_Ids; shm::string_vector m_Labels; shm::string_vector_vector m_Values; }; int Details::Read(BinaryReader & br) { int num = br.ReadInt(); m_Ids.resize(num); m_Labels.resize(num); m_Values.resize(num); for (int i = 0; i < num; i++) { m_Ids[i] = br.ReadInt(); m_Labels[i] = br.ReadString().c_str(); int count = br.ReadInt(); m_Value[i].resize(count); for (int j = 0; j < count; j++) { m_Value[i][j] = br.ReadString().c_str(); } } } But when I compile it, I get the error: 'boost::interprocess::allocator<T,SegmentManager>::allocator' : no appropriate default constructor available And it's due to the resize() calls on the vector objects. Because the allocator types do not have a empty constructor (they take a const segment_manager_t &) and it's trying to create a default object for each location. So in order for it to work, I have to get an allocator object and pass a default value object on resize. 
Like this: int Details::Read(BinaryReader & br) { shm::void_allocator alloc(m_Ids.get_allocator()); int num = br.ReadInt(); m_Ids.resize(num); m_Labels.resize(num, shm::string(alloc)); m_Values.resize(num, shm::string_vector(alloc)); for (int i = 0; i < num; i++) { m_Ids[i] = br.ReadInt(); m_Labels[i] = br.ReadString().c_str(); int count = br.ReadInt(); m_Value[i].resize(count, shm::string(alloc)); for (int j = 0; j < count; j++) { m_Value[i][j] = br.ReadString().c_str(); } } } Is this the best/correct way of doing it? Or am I missing something. Thanks!

    Read the article

  • A little bit of Ajax goes a long way

    - by Holland
    ..except when you're having problems. My problem is this: I have a hierarchical list of categories stored in a database which I wish to output in a dropdown list. The hierarchy comes into place when the subcategories are to be displayed, which are dependent on a parent id (which equals out to the first seven or so main categories listed). My thoughts are relatively simple: when the user clicks the dynamically allocated list of main categories, they are clicking on an option tag. For each option tag, an id (i.e., the parent) is listed in the value attribute, as well as an argument which is sent to a Javascript function which then uses AJAX to get the data via PHP and sends it to my 'javascript.php' file. The file then does magic, and populates the subcategory list, which is dependent on the main category selected. I believe I have the idea down, it's just that I'm implementing the solution improperly, for some reason. Here's what I have so far: from javascript.php <script type="text/javascript" src=<?=JPATH_BASE.DS.'includes'.DS.'jquery.js';?>> var ajax = { ajax.sendAjaxData = function(method, url, dataTitle, ajaxData) { $.ajax({ type: 'post', url: "C:/wamp/www/patention/components/com_adsmanagar/views/edit/tmpl/javascript.php", data: { 'data' : ajaxData }, success: function(result){ // we have the response alert("Your request was successful." + result); }, error: function(e){ alert('Error: ' + e); } }); } ajax.printSubCategoriesOnClick = function(parent) { alert("hello world!"); ajax.sendAjaxData('post', 'javascript.php', 'data' parent); <?php $categories = $this->formData->getCategories($_POST['data']); ?> ajax.printSubCategories(<?=$categories;?>); } ajax.printSubCategories = function(categories) { var select = document.getElementById("subcategories"); for (var i = 0; i < categories.length; i++) { var opt = document.createElement("option"); opt.text = categories['name']; opt.value = categories['id']; } } } </script> the function used to populate the form data function populateCategories($parent, FormData $formData) { $categories = $formData->getCategories($parent); echo "<pre>"; print_r($categories); echo "</pre>"; foreach($categories as $section => $row){ ?> <option value=<?=$row['id'];?> onclick="ajax.printSubCategoriesOnClick(<?=$row['id']?>)"> <? echo $row['name']; ?> </option> <?php } } The problem is that when I try to do a print_r on my $_POST variable, nothing shows up. I also receive an "undefined index" error (which refers to the key I set for the data type). I'm thinking that for some reason, the data is being sent to my form.php file, or my default.php file which includes both the form.php and javascript.php files via a function. Is there something specific that I'm missing here? Just looking up basic AJAX syntax (via jQuery) hasn't helped out, unfortunately.

    Read the article

  • Simple Observation in Django: How Can I Correctly Modify The `attrs` sent to __new__ of a Django Mod

    - by DGGenuine
    Hello, I'm a strong proponent of the observer pattern, and this is what I'd like to be able to do in my Django models.py:

        class AModel(Model):
            __metaclass__ = SomethingMagical

            @post_save(AnotherModel)
            @classmethod
            def observe_another_model_saved(klass, sender, instance, created, **kwargs):
                pass

            @pre_init('YetAnotherModel')
            @classmethod
            def observe_yet_another_model_initializing(klass, sender, *args, **kwargs):
                pass

            @post_delete('DifferentApp.SomeModel')
            @classmethod
            def observe_some_model_deleted(klass, sender, **kwargs):
                pass

    This would connect a signal with sender = the decorator's argument and receiver = the decorated method. Right now my signal connection code all exists in __init__.py, which is okay, but a little unmaintainable. I want this code all in one place, the models.py file. Thanks to helpful feedback from the community I'm very close (I think). (I'm using a metaclass solution instead of the class decorator solution in the previous question/answer because you can't set attributes on classmethods, which I need.)

    I am having a strange error I don't understand. At the end of my post are the contents of a models.py that you can pop into a fresh project/application to see the error. Set your database to sqlite and add the application to installed apps. This is the error:

        Validating models...
        Unhandled exception in thread started by
        Traceback (most recent call last):
          File "/Library/Python/2.6/site-packages//lib/python2.6/site-packages/django/core/management/commands/runserver.py", line 48, in inner_run
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 253, in validate
            raise CommandError("One or more models did not validate:\n%s" % error_text)
        django.core.management.base.CommandError: One or more models did not validate:
        local.myothermodel: 'my_model' has a relation with model MyModel, which has either not been installed or is abstract.

    I've indicated a few different things you can comment in/out to fix the error. First, if you don't modify the attrs sent to the metaclass's __new__, then the error does not arise. (Note even if you copy the dictionary element by element into a new dictionary, it still fails; only using the exact attrs dictionary works.) Second, if you reference the first model by class rather than by string, the error also doesn't arise regardless of what you do in __new__.

    I appreciate your help. I'll be githubbing the solution if and when it works. Maybe other people would enjoy a simplified way to use Django signals to observe application happenings.

        #models.py
        from django.db import models
        from django.db.models.base import ModelBase
        from django.db.models import signals
        import pdb

        class UnconnectedMethodWrapper(object):
            sender = None
            method = None
            signal = None
            def __init__(self, signal, sender, method):
                self.signal = signal
                self.sender = sender
                self.method = method

        def post_save(sender):
            return _make_decorator(signals.post_save, sender)

        def _make_decorator(signal, sender):
            def decorator(view):
                return UnconnectedMethodWrapper(signal, sender, view)
            return decorator

        class ConnectableModel(ModelBase):
            """
            A meta class for any class that will have static or class methods
            that need to be connected to signals.
            """
            def __new__(cls, name, bases, attrs):
                unconnecteds = {}

                ## NO WORK
                newattrs = {}
                for name, attr in attrs.iteritems():
                    if isinstance(attr, UnconnectedMethodWrapper):
                        unconnecteds[name] = attr
                        newattrs[name] = attr.method #replace the UnconnectedMethodWrapper with the method it wrapped.
                    else:
                        newattrs[name] = attr

                ## NO WORK
                # newattrs = {}
                # for name, attr in attrs.iteritems():
                #     newattrs[name] = attr

                ## WORKS
                # newattrs = attrs

                new = super(ConnectableModel, cls).__new__(cls, name, bases, newattrs)

                for name, unconnected in unconnecteds.iteritems():
                    _connect_signal(unconnected.signal, unconnected.sender, getattr(new, name), new._meta.app_label)

                return new

        def _connect_signal(signal, sender, receiver, default_app_label):
            # full implementation also accepts basestring as sender and will look up model accordingly
            signal.connect(sender=sender, receiver=receiver)

        class MyModel(models.Model):
            __metaclass__ = ConnectableModel

            @post_save('In my application this string matters')
            @classmethod
            def observe_it(klass, sender, instance, created, **kwargs):
                pass

            @classmethod
            def normal_class_method(klass):
                pass

        class MyOtherModel(models.Model):
            ## WORKS
            # my_model = models.ForeignKey(MyModel)

            ## NO WORK
            my_model = models.ForeignKey('MyModel')
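    For comparison only, here is a minimal sketch (not the poster's code, and not tied to any particular Django version) of the same observer idea using a class decorator instead of a metaclass, so the attrs dict handed to ModelBase.__new__ is never touched. The helper names observes, ObservedMethod, and connect_observers are invented for the illustration; it wraps the plain functions before classmethod is applied, which sidesteps the "can't set attributes on classmethods" issue.

        # Sketch: mark plain functions with the signal/sender they observe, then let a
        # class decorator turn them into classmethods and register them as receivers.
        from django.db.models import signals


        class ObservedMethod(object):
            """Holds the (signal, sender, function) triple until connection time."""
            def __init__(self, signal, sender, func):
                self.signal = signal
                self.sender = sender
                self.func = func


        def observes(signal, sender):
            """Decorator factory: mark a plain function as a signal observer."""
            def decorator(func):
                return ObservedMethod(signal, sender, func)
            return decorator


        def connect_observers(cls):
            """Class decorator: rebind each ObservedMethod as a classmethod and
            connect it. weak=False keeps the bound-method receiver alive."""
            for name, attr in list(vars(cls).items()):
                if isinstance(attr, ObservedMethod):
                    setattr(cls, name, classmethod(attr.func))
                    attr.signal.connect(getattr(cls, name), sender=attr.sender, weak=False)
            return cls


        # Usage sketch:
        #
        # @connect_observers
        # class AModel(models.Model):
        #     @observes(signals.post_save, AnotherModel)
        #     def observe_another_model_saved(cls, sender, instance, created, **kwargs):
        #         ...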

    Read the article

  • Cocoa nextEventMatchingMask not receiving NSMouseMoved event

    - by Jonny
    Hello, I created a local event loop and showed a borderless window (derived from NSPanel), and I found that in the event loop no NSMouseMoved events are received, although I can receive mouse button down/up events. What should I do to get the NSMouseMoved events? I found that making the float window the key window lets it receive NSMouseMoved events, but I don't want to change the key window. And it appears this is possible, because after clicking the test app icon in the system Dock I can receive the mouse-moved events, and the key window/main window are unchanged. Here's my test code (create a Cocoa App project named FloatWindowTest, and put a button linked to the onClick: IBAction). Thanks in advance! -Jonny

        #import "FloatWindowTestAppDelegate.h"

        @interface FloatWindow : NSPanel
        @end

        @interface FloatWindowContentView : NSView
        @end

        @implementation FloatWindowTestAppDelegate

        @synthesize window;

        - (void)delayedAction:(id)sender
        {
            // What does this function do?
            // 1. create a float window
            // 2. create a local event loop
            // 3. print out the events got from nextEventMatchingMask.
            // 4. send it to float window.
            //
            // What is the problem?
            // In the local event loop, although the event mask has NSMouseMovedMask set,
            // there are no mouse moved messages received.
            FloatWindow* floatWindow = [[FloatWindow alloc] init];
            NSEvent* event = [NSApp currentEvent];
            NSPoint screenOrigin = [[self window] convertBaseToScreen:[event locationInWindow]];
            [floatWindow setFrameTopLeftPoint:screenOrigin];
            [floatWindow orderFront:nil];

            // Making the float window the key window will work, however
            // changing the active window is not anticipated.
            //[floatWindow makeKeyAndOrderFront:nil];

            BOOL done = NO;
            while (!done) {
                NSAutoreleasePool* pool = [NSAutoreleasePool new];
                NSUInteger eventMask = NSLeftMouseDownMask|
                                       NSLeftMouseUpMask|
                                       NSMouseMovedMask|
                                       NSMouseEnteredMask|
                                       NSMouseExitedMask|
                                       NSLeftMouseDraggedMask;
                NSEvent* event = [NSApp nextEventMatchingMask:eventMask
                                                    untilDate:[NSDate distantFuture]
                                                       inMode:NSDefaultRunLoopMode
                                                      dequeue:YES];
                // why can I not get the NSMouseMoved event??
                NSLog(@"new event %@", [event description]);
                [floatWindow sendEvent:event];
                [pool drain];
            }
            [floatWindow release];
            return;
        }

        -(IBAction)onClick:(id)sender
        {
            // Tried to postpone the local event loop until after returning from
            // the button's mouse tracking loop, but that does not fix this problem.
            [[NSRunLoop currentRunLoop] performSelector:@selector(delayedAction:)
                                                 target:self
                                               argument:nil
                                                  order:0
                                                  modes:[NSArray arrayWithObject:NSDefaultRunLoopMode]];
        }
        @end

        @implementation FloatWindow

        - (id)init
        {
            NSRect contentRect = NSMakeRect(200,300,200,300);
            self = [super initWithContentRect:contentRect
                                    styleMask:NSTitledWindowMask
                                      backing:NSBackingStoreBuffered
                                        defer:YES];
            if (self) {
                [self setLevel:NSFloatingWindowLevel];
                NSRect frameRect = [self frameRectForContentRect:contentRect];
                NSView* view = [[[FloatWindowContentView alloc] initWithFrame:frameRect] autorelease];
                [self setContentView:view];
                [self setAcceptsMouseMovedEvents:YES];
                [self setIgnoresMouseEvents:NO];
            }
            return self;
        }

        - (BOOL)becomesKeyOnlyIfNeeded
        {
            return YES;
        }

        - (void)becomeMainWindow
        {
            NSLog(@"becomeMainWindow");
            [super becomeMainWindow];
        }

        - (void)becomeKeyWindow
        {
            NSLog(@"becomeKeyWindow");
            [super becomeKeyWindow];
        }
        @end

        @implementation FloatWindowContentView

        - (BOOL)acceptsFirstMouse:(NSEvent *)theEvent
        {
            return YES;
        }

        - (BOOL)acceptsFirstResponder
        {
            return YES;
        }

        - (id)initWithFrame:(NSRect)frameRect
        {
            self = [super initWithFrame:frameRect];
            if (self) {
                NSTrackingArea* area;
                area = [[NSTrackingArea alloc] initWithRect:frameRect
                                                    options:NSTrackingActiveAlways|
                                                            NSTrackingMouseMoved|
                                                            NSTrackingMouseEnteredAndExited
                                                      owner:self
                                                   userInfo:nil];
                [self addTrackingArea:area];
                [area release];
            }
            return self;
        }

        - (void)drawRect:(NSRect)rect
        {
            [[NSColor redColor] set];
            NSRectFill([self bounds]);
        }

        - (BOOL)becomeFirstResponder
        {
            NSLog(@"becomeFirstResponder");
            return [super becomeFirstResponder];
        }
        @end

    Read the article

  • Google App Engine - Secure Cookies

    - by tponthieux
    I'd been searching for a way to do cookie-based authentication/sessions in Google App Engine because I don't like the idea of memcache-based sessions, and I also don't like the idea of forcing users to create Google accounts just to use a website. I stumbled across someone's posting that mentioned some signed cookie functions from the Tornado framework and it looks like what I need. What I have in mind is storing a user's id in a tamper-proof cookie, and maybe using a decorator for the request handlers to test the authentication status of the user; as a side benefit, the user id will be available to the request handler for datastore work and such. The concept would be similar to forms authentication in ASP.NET.

    This code comes from the web.py module of the Tornado framework. According to the docstrings, it "Signs and timestamps a cookie so it cannot be forged" and "Returns the given signed cookie if it validates, or None." I've tried to use it in an App Engine project, but I don't understand the nuances of trying to get these methods to work in the context of the request handler. Can someone show me the right way to do this without losing the functionality that the FriendFeed developers put into it? The set_secure_cookie and get_secure_cookie portions are the most important, but it would be nice to be able to use the other methods as well.

        #!/usr/bin/env python

        import Cookie
        import base64
        import time
        import hashlib
        import hmac
        import datetime
        import re
        import calendar
        import email.utils
        import logging

        def _utf8(s):
            if isinstance(s, unicode):
                return s.encode("utf-8")
            assert isinstance(s, str)
            return s

        def _unicode(s):
            if isinstance(s, str):
                try:
                    return s.decode("utf-8")
                except UnicodeDecodeError:
                    raise HTTPError(400, "Non-utf8 argument")
            assert isinstance(s, unicode)
            return s

        def _time_independent_equals(a, b):
            if len(a) != len(b):
                return False
            result = 0
            for x, y in zip(a, b):
                result |= ord(x) ^ ord(y)
            return result == 0

        def cookies(self):
            """A dictionary of Cookie.Morsel objects."""
            if not hasattr(self, "_cookies"):
                self._cookies = Cookie.BaseCookie()
                if "Cookie" in self.request.headers:
                    try:
                        self._cookies.load(self.request.headers["Cookie"])
                    except:
                        self.clear_all_cookies()
            return self._cookies

        def _cookie_signature(self, *parts):
            self.require_setting("cookie_secret", "secure cookies")
            hash = hmac.new(self.application.settings["cookie_secret"],
                            digestmod=hashlib.sha1)
            for part in parts:
                hash.update(part)
            return hash.hexdigest()

        def get_cookie(self, name, default=None):
            """Gets the value of the cookie with the given name, else default."""
            if name in self.cookies:
                return self.cookies[name].value
            return default

        def set_cookie(self, name, value, domain=None, expires=None, path="/",
                       expires_days=None):
            """Sets the given cookie name/value with the given options."""
            name = _utf8(name)
            value = _utf8(value)
            if re.search(r"[\x00-\x20]", name + value):
                # Don't let us accidentally inject bad stuff
                raise ValueError("Invalid cookie %r:%r" % (name, value))
            if not hasattr(self, "_new_cookies"):
                self._new_cookies = []
            new_cookie = Cookie.BaseCookie()
            self._new_cookies.append(new_cookie)
            new_cookie[name] = value
            if domain:
                new_cookie[name]["domain"] = domain
            if expires_days is not None and not expires:
                expires = datetime.datetime.utcnow() + datetime.timedelta(
                    days=expires_days)
            if expires:
                timestamp = calendar.timegm(expires.utctimetuple())
                new_cookie[name]["expires"] = email.utils.formatdate(
                    timestamp, localtime=False, usegmt=True)
            if path:
                new_cookie[name]["path"] = path

        def clear_cookie(self, name, path="/", domain=None):
            """Deletes the cookie with the given name."""
            expires = datetime.datetime.utcnow() - datetime.timedelta(days=365)
            self.set_cookie(name, value="", path=path, expires=expires,
                            domain=domain)

        def clear_all_cookies(self):
            """Deletes all the cookies the user sent with this request."""
            for name in self.cookies.iterkeys():
                self.clear_cookie(name)

        def set_secure_cookie(self, name, value, expires_days=30, **kwargs):
            """Signs and timestamps a cookie so it cannot be forged"""
            timestamp = str(int(time.time()))
            value = base64.b64encode(value)
            signature = self._cookie_signature(name, value, timestamp)
            value = "|".join([value, timestamp, signature])
            self.set_cookie(name, value, expires_days=expires_days, **kwargs)

        def get_secure_cookie(self, name, include_name=True, value=None):
            """Returns the given signed cookie if it validates, or None"""
            if value is None:
                value = self.get_cookie(name)
            if not value:
                return None
            parts = value.split("|")
            if len(parts) != 3:
                return None
            if include_name:
                signature = self._cookie_signature(name, parts[0], parts[1])
            else:
                signature = self._cookie_signature(parts[0], parts[1])
            if not _time_independent_equals(parts[2], signature):
                logging.warning("Invalid cookie signature %r", value)
                return None
            timestamp = int(parts[1])
            if timestamp < time.time() - 31 * 86400:
                logging.warning("Expired cookie %r", value)
                return None
            try:
                return base64.b64decode(parts[0])
            except:
                return None

    An example cookie value looks like this:

        uid=1234|1234567890|d32b9e9c67274fa062e2599fd659cc14

    Parts:
    1. uid is the name of the key
    2. 1234 is your value in clear
    3. 1234567890 is the timestamp
    4. d32b9e9c67274fa062e2599fd659cc14 is the signature made from the value and the timestamp
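    To make that cookie format concrete, here is a small standalone sketch of the same sign-and-timestamp scheme outside any request handler (Python 2 style, to match the code above; the COOKIE_SECRET constant and the helper names are invented for the example):

        # Sketch: build and verify "b64(value)|timestamp|signature" strings by hand.
        import base64
        import hashlib
        import hmac
        import time

        COOKIE_SECRET = "change-me"  # hypothetical secret; keep it out of source control

        def _signature(*parts):
            digest = hmac.new(COOKIE_SECRET, digestmod=hashlib.sha1)
            for part in parts:
                digest.update(part)
            return digest.hexdigest()

        def make_secure_value(name, value):
            """Return 'b64(value)|timestamp|signature'."""
            timestamp = str(int(time.time()))
            encoded = base64.b64encode(value)
            return "|".join([encoded, timestamp, _signature(name, encoded, timestamp)])

        def read_secure_value(name, cookie_value, max_age_days=31):
            """Return the original value if the signature and age check out, else None."""
            parts = cookie_value.split("|")
            if len(parts) != 3:
                return None
            # Real code should use a constant-time compare such as the
            # _time_independent_equals helper from the question.
            if _signature(name, parts[0], parts[1]) != parts[2]:
                return None  # tampered
            if int(parts[1]) < time.time() - max_age_days * 86400:
                return None  # expired
            return base64.b64decode(parts[0])

        # e.g. make_secure_value("uid", "1234") -> "MTIzNA==|<timestamp>|<hex signature>"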

    Read the article

  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

        [root@fs-2 ~]# tail -18 /var/log/messages
        May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
        May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
        May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
        May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
        May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
        May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
        May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
        May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
        May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
        May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple of things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

        [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
        -bash: echo: write error: Invalid argument
        [root@fs-2 ~]#
        [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
        [snip]
        0 new device(s) found.
        0 device(s) removed.
        [root@fs-2 ~]#
        [root@fs-2 ~]# ls /dev/sda
        ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting?

    The operating system in question is RHEL 5.3:

        [root@fs-2 ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS.

    Here is the lspci output:

        [root@fs-2 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)

    Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers suggesting I look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us.

    Here is the lspci output:

        [root@www-1 ~]# lspci
        00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
        00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
        00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
        00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
        00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
        00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
        00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
        00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
        00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
        00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
        00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
        00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
        00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
        00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
        03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
        04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
        09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
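    As an illustration only (not part of the original question): the "Invalid argument" above is consistent with the sysfs scan attribute expecting three space-separated fields ("channel target lun", with "-" as a wildcard), i.e. "- - -" rather than "---". A minimal sketch of that rescan, assuming root privileges and a /sys/class/scsi_host layout like the one shown:

        # Sketch: ask every SCSI host to rescan by writing the three-field wildcard
        # "- - -" (any channel, any target, any LUN) to its scan attribute.
        import glob

        def rescan_all_scsi_hosts():
            for path in glob.glob("/sys/class/scsi_host/host*/scan"):
                scan = open(path, "w")
                try:
                    scan.write("- - -\n")
                finally:
                    scan.close()

        if __name__ == "__main__":
            rescan_all_scsi_hosts()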

    Read the article

< Previous Page | 282 283 284 285 286 287 288 289 290 291 292 293  | Next Page >