
  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further.

    [Photo: Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference]

    In Anne's book, Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on.

    That user communities are a rich source of user assistance is not a surprise, but the extent of the available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members!

    The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions."

    Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include:

    * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help
    * The Microsoft Development Network Community Center
    * The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs
    * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches
    * The efforts of Ubuntu, Mozilla, and the FLOSS community generally

    [Image: Adobe Writer Bob Bringhurst's Blog]

    Oracle is not without a user community conversation of its own. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too.

    But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined.
    It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. There is less emphasis on formal titles. Anne mentions that her own title at OpenStack is "Content Stacker"; others use titles such as "Content Curator" or "Community Lead". However, the role remains essentially one about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish."

    Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization!

    To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

    Anne emphasized that success with the community model depends on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

    The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities.
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.


  • Announcing Windows Azure Mobile Services

    - by ScottGu
    I’m excited to announce a new capability we are adding to Windows Azure today: Windows Azure Mobile Services.

    Windows Azure Mobile Services makes it incredibly easy to connect a scalable cloud backend to your client and mobile applications. It allows you to easily store structured data in the cloud that can span both devices and users, integrate it with user authentication, as well as send out updates to clients via push notifications.

    Today’s release enables you to add these capabilities to any Windows 8 app in literally minutes, and provides a super productive way for you to quickly build out your app ideas. We’ll also be adding support to enable these same scenarios for Windows Phone, iOS, and Android devices soon. Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services. Or watch this video of me showing how to do it step by step.

    Getting Started

    If you don’t already have a Windows Azure account, you can sign up for a no-obligation Free Trial. Once you are signed up, click the “preview features” section under the “account” tab of the www.windowsazure.com website and enable your account to support the “Mobile Services” preview. Instructions on how to enable this can be found here.

    Once you have the mobile services preview enabled, log into the Windows Azure Portal, click the “New” button and choose the new “Mobile Services” icon to create your first mobile backend. Once created, you’ll see a quick-start page like below with instructions on how to connect your mobile service to an existing Windows 8 client app you have already started working on, or how to create and connect a brand-new Windows 8 client app with it. Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that stores data in Windows Azure.

    Storing Data in the Cloud

    Storing data in the cloud with Windows Azure Mobile Services is incredibly easy. When you create a Windows Azure Mobile Service, we automatically associate it with a SQL Database inside Windows Azure. The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it (using secure REST end-points utilizing a JSON-based OData format) – without you having to write or deploy any custom server code. Built-in management support is provided within the Windows Azure portal for creating new tables, browsing data, setting indexes, and controlling access permissions.

    This makes it incredibly easy to connect client applications to the cloud, and enables client developers who don’t have a server-code background to be productive from the very beginning. They can instead focus on building the client app experience, and leverage Windows Azure Mobile Services to provide the cloud backend services they require.

    Below is an example of client-side Windows 8 C#/XAML code that could be used to query data from a Windows Azure Mobile Service. Client-side C# developers can write queries like this using LINQ and strongly typed POCO objects, which are then translated into HTTP REST queries that run against a Windows Azure Mobile Service.
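    The original post showed the client-side query code as a screenshot, which is not reproduced here. As a stand-in, the following is a minimal sketch of what such a query looked like with the Windows 8 managed client SDK of the time - the TodoItem shape, service URL, and application key are placeholder assumptions, not values from the post:

        // Requires the Windows Azure Mobile Services managed client SDK
        using Microsoft.WindowsAzure.MobileServices;

        // A plain POCO class mapped to a "TodoItem" table in the mobile service
        public class TodoItem
        {
            public int Id { get; set; }
            public string Text { get; set; }
            public bool Complete { get; set; }
        }

        // (Inside your XAML page's code-behind class.)
        // Placeholder URL and key - copy yours from the portal's quick-start page
        private static readonly MobileServiceClient MobileService =
            new MobileServiceClient("https://your-service.azure-mobile.net/", "your-application-key");

        private async Task RefreshTodoItems()
        {
            // A LINQ query over a strongly typed table; the SDK translates it
            // into an HTTP REST query against the mobile service
            var items = await MobileService.GetTable<TodoItem>()
                                           .Where(item => item.Complete == false)
                                           .ToListAsync();
            // Bind 'items' to the client UI here
        }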
    Developers don’t have to write or deploy any custom server-side code in order to enable client-side code like the sketch above to execute and asynchronously populate their client UI. Because Mobile Services is part of Windows Azure, developers can later choose to augment or extend their initial solution and add custom server functionality and more advanced logic if they want. This provides maximum flexibility, and enables developers to grow and extend their solutions to meet any needs.

    User Authentication and Push Notifications

    Windows Azure Mobile Services also makes it incredibly easy to integrate user authentication/authorization and push notifications within your applications. You can use these capabilities to enable authentication and fine-grained access control permissions on the data you store in the cloud, as well as to trigger push notifications to users/devices when the data changes. Windows Azure Mobile Services supports the concept of “server scripts” (small chunks of server-side script that execute in response to actions) that make it really easy to enable these scenarios.

    Below are some tutorials that walk through common authentication/authorization/push scenarios you can do with Windows Azure Mobile Services and Windows 8 apps (a client-side login sketch also follows at the end of this post):

    * Enabling User Authentication
    * Authorizing Users
    * Get Started with Push Notifications
    * Push Notifications to multiple Users

    Manage and Monitor your Mobile Service

    Just like with every other service in Windows Azure, you can monitor usage and metrics of your mobile service backend using the “Dashboard” tab within the Windows Azure Portal. The dashboard tab provides a built-in monitoring view of the API calls, bandwidth, and server CPU cycles of your Windows Azure Mobile Service. You can also use the “Logs” tab within the portal to review error messages. This makes it easy to monitor and track how your application is doing.

    Scale Up as Your Business Grows

    Windows Azure Mobile Services now allows every Windows Azure customer to create and run up to 10 Mobile Services in a free, shared/multi-tenant hosting environment (where your mobile backend will be one of multiple apps running on a shared set of server resources). This provides an easy way to get started on projects at no cost beyond the database you connect your Windows Azure Mobile Service to (note: each Windows Azure free trial account also includes a 1GB SQL Database that you can use with any number of apps or Windows Azure Mobile Services).

    If your client application becomes popular, you can click the “Scale” tab of your Mobile Service and switch from “Shared” to “Reserved” mode. Doing so allows you to isolate your apps so that you are the only customer within a virtual machine. This allows you to elastically scale the amount of resources your apps use – allowing you to scale up (or scale down) your capacity as your traffic grows. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale your resources up and down to match only what you need. This enables a super flexible model that is ideal for new mobile app scenarios, as well as for startups that are just getting going.

    Summary

    I’ve only scratched the surface of what you can do with Windows Azure Mobile Services – there are a lot more features to explore. With Windows Azure Mobile Services you’ll be able to build mobile app experiences faster than ever, and enable even better user experiences – by connecting your client apps to the cloud.
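    As referenced in the authentication section above, here is a hedged sketch of the client-side login call from the same managed SDK - the provider choice and the MobileService instance from the earlier sketch are assumptions, not code from the original post:

        // Prompts the user to sign in (here via Microsoft Account); the other
        // providers supported by the service follow the same pattern
        private async Task AuthenticateAsync()
        {
            MobileServiceUser user = await MobileService.LoginAsync(
                MobileServiceAuthenticationProvider.MicrosoftAccount);
            // user.UserId is what server scripts see when authorizing table operations
        }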
    Visit the Windows Azure Mobile Services development center to learn more, and build your first Windows 8 app connected with Windows Azure today. And read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple Windows 8 “Todo List” app that is cloud enabled using Windows Azure Mobile Services.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • CodePlex Daily Summary for Thursday, April 15, 2010

    New Projects

    * Application Logging Repository (ALR): The ALR is a light-weight logging framework that allows applications to log events and exceptions to a central repository.
    * Arkane.FileProperties.DSS: Arkane.FileProperties.Dss is a library for parsing the file header of a .DSS file (as used by Olympus digital dictaphone systems) to obtain time, v...
    * B in conTrol project: This project enables controlling log-in and locking your workstation automatically, identifying you by Bluetooth.
    * DarkBook: DarkBook is a personal library project.
    * Direct2D for Microsoft .Net: Direct2D, DirectWrite and Windows Imaging wrappers for .Net. This library allows access to the Direct2D, DirectWrite and Windows Imaging Windows API f...
    * DJ Ware: DJ Ware is an extensible music player with plugin support and innovative features to organize and explore music files. It is developed with C#, WPF...
    * gpsMe: gpsMe is a Windows Mobile 6.x mapping solution allowing to place the user on a personalized map. The screen requirements are VGA or WVGA but, you ...
    * jErrorLog: jErrorLog is an error logging component for use in DotNet 2.0 or later applications. It can log error messages to any of the following: database, e...
    * KEMET_API: Java library (open source). This library is a help to study Egyptian hieroglyphs.
    * Meadow: A web site project for a Swedish floorball team called Slackers. Home page built with ASP.NET 2.0, ASP.NET AJAX and SQL Server 2005.
    * Mustang Math: Mustang Math makes it easier for young children to practice basic math facts on the computer. No keyboard or mouse required - just say the answer!...
    * Net.Formats.oEmbed: oEmbed format implementation in C#. oEmbed is a format for allowing an embedded representation of a URL on third party sites. The simple API allows...
    * Normlize O/R Mapper: Open source O/RM tool that participates with traditional inheritance object models as well as Hibernate/nHibernate style class shells. As I have t...
    * N-Twill Twitter Client for VB.NET: A Twitter client project built with the TwitterVB2 library in VB.NET 2008.
    * SIQM: Spatial Information Quality Management Toolset
    * TIMETABLEASY Web: Under development
    * TweetSharp: TweetSharp is a complete .NET library for micro-blogging platforms that allows you to write short and sweet expressions that fly to Twitter, Yammer...
    * UISandbox: UISandbox is sample C# source code showing how to deal with plugins requiring a sandbox, when those plugins must interact with WPF application inte...
    * WinForm SharePoint Web Part Manager: The SharePoint Web Part Manager is a WinForm tool using the SharePoint object model that enables developers and power users to add, update, delete,...
    * WoW Character Viewer: View your World of Warcraft character (or anyone else's character), using this application. Written using Visual Basic Express 2008, then ported t...
    * Xrns2XMod: Xrns2XMod converts from Renoise format (xrns) to mod or xm, which are more compatible formats playable from xmplay or vlc.

    New Releases

    * Arkane.FileProperties.DSS: 1.0 stable release: Executables and merge module for 1.0. (See documentation.)
    * Bluetooth Radar: Version 2.0: Add IrDA reference for Bluetooth sending using Obex; add project icon; add Bluetooth detection mode (auto close application if there is no blueto...
    * BUtil: BUtil 5.0 Alpha: Backup tasks adding.... in progress
    * Chronos WPF: Chronos v1.0 RC 1: Chronos v1.0 RC 1. Development will be feature frozen after this release, only bug fixes will be allowed. Updated nRoute assembly to v0.4 (http:...
    * clipShow: Version 2.5: Release that addresses the canonical syntax issues in search discovered by Tschachim (thanks again!). Also, the play list and play all menu items s...
    * DarkBook: DarkBook alpha: Hi, here comes the alpha version of DarkBook. It has all the functions already but is still in development. I hope it's helpful for you, at least it...
    * DirectQ: Release 1.8.3a: Improvements to 1.8.2, which will shortly be removed. This replaces the original 1.8.3 release from earlier today, which had some late-breaking ...
    * Effect Custom Tool for Visual Studio: Effect Custom Tool v1.1: Effect Custom Tool for Visual Studio is a Visual Studio 2008 extension that helps you generate C# classes from effect (*.fx) files for use with XNA...
    * Folder Bookmarks: Folder Bookmarks 1.4.3: This is the latest version of Folder Bookmarks (1.4.3), with general improvements. It has an installer - it will create a directory 'CPascoe' in My...
    * gpsMe: gpsMe v0.3: Required hardware: Windows Mobile 6, .Net Compact Framework 3.5, integrated GPS device, VGA or WVGA screen (normally works on others)
    * IST435: Lab 4 - Enterprise Level CMS with DotNetNuke: This is the "starter kit" that you must base your Lab 4 on. This lab must be completed in-class.
    * Mouse Jiggler: MouseJiggle-1.1: 1.1 release of Mouse Jiggler, now with x64 compatibility and the ability to start jiggling on run with the --jiggle or -j command-line switch.
    * Mustang Math: MustangMath.exe: This is a quick and dirty "0.1" prototype to demonstrate the speech recognition idea. It starts asking you questions automatically on launch and k...
    * MvcContrib: a Codeplex Foundation project: 2.0.36.0 for MVC2 (RTW): Please see the Change Log for a complete list of changes. MVC BootCamp. Description of the releases: MvcContrib.Release.zip, MvcContrib.dll, MvcC...
    * Nito.LINQ: Beta (v0.3): New features for this release: several new supported platforms (see below); PDBs that are source-indexed to the appropriate CodePlex changeset; ...
    * OpenIdPortableArea: 0.1.0.2 OpenIdPortableArea: OpenIdPortableArea.Release: DotNetOpenAuth.dll, DotNetOpenAuth.xml, MvcContrib.dll, MvcContrib.xml, OpenIdPortableArea.dll, OpenIdPortableAre...
    * PokeIn Comet Ajax Library: PokeIn Sample with Library v0.2: New version of the PokeIn library with sample, v0.2. There are new features in this release and no bug detected yet.
    * Project Tru Tiên: Elements-test V1-fix (v2): This is the EL test fixed following the V1 fix; consider this the V2 fix of EL test. In this fix, EL additionally fixes Quests; Quests are adjusted correctly t...
    * Rule 18 - Love your clipboard: Rule 18: This is the third public beta for the first version of Rule 18. This version has been updated to support Visual Studio 2010 RTM and .NET 4.0 RTM. ...
    * SevenZipSharp: SevenZipSharp 0.62: Added: extraction from SFX archives. Now it is possible to unrar RAR self-extractors, unzip ZIP self-extractors, etc. Extraction from DOC, XLS, (...
    * SharePoint Labs: SPLab3001A-FRA-Level200: This SharePoint Lab will teach the persistence object layer that SharePoint uses to centrally store configuration data and o...
    * TTXPathNavigator: TTXPathNavigator for VS2010: Version for Visual Studio 2010
    * turing machine simulator: SDS: SDS document
    * VecDraw: VecDraw_0.2.25: Alpha release for test purposes
    * WinForm SharePoint Web Part Manager: Beta 1: First release of the WinForm SharePoint web part manager tool
    * Xrns2XMod: Xrns2XMod 0.5.1: Mod and XM conversion format - no sample data conversion at the moment
    * Zip Solution: ZipSolution 5.3: Features: 1. Added WaitMsec for Visual Studio support with getting access to files in post build event; 2. Added ShowTextInToolbars to app.config ...

    Most Popular Projects

    * Rawr
    * WBFS Manager
    * AJAX Control Toolkit
    * Microsoft SQL Server Product Samples: Database
    * Silverlight Toolkit
    * Windows Presentation Foundation (WPF)
    * ASP.NET
    * Microsoft SQL Server Community & Samples
    * PHPExcel
    * patterns & practices – Enterprise Library

    Most Active Projects

    * Rawr
    * patterns & practices – Enterprise Library
    * GMap.NET - Great Maps for Windows Forms & Presentation
    * Farseer Physics Engine
    * Ionics Isapi Rewrite Filter
    * NB_Store - Free DotNetNuke Ecommerce Catalog Module
    * BlogEngine.NET
    * jQuery Library for SharePoint Web Services
    * DotRas
    * Facebook Developer Toolkit


  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger.

    The VS 2010 debugger has a ton of great new capabilities. Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release. I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    Below is a screen-shot of the breakpoints window within Visual Studio 2010. This lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base): The first and last breakpoint in the list above break into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage): Below is the dialog that appears when I select the “Edit labels” command. We can use it to create a new string label for our breakpoints or select an existing one we have already defined. In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover: When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label:

    Filtering/Sorting Breakpoints by Label

    We can use the “Search” combobox to quickly filter/sort breakpoints by label. Below we are only showing those breakpoints with the “Lifetime Management” label:

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group. We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click:

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window: Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down).
    I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible: You can “pin” any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well. Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”. Note that these last two variables are sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger: Clicking the comment button at the bottom of this UI expands the DataTip - and allows you to optionally add a comment with it: This makes it really easy to attach and track debugging notes.

    Pinned DataTips are Usable across both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session - any pinned DataTips will still be there, along with any comments you associate with them. Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shut down Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)

    How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn’t running. On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up: Hovering your mouse over a pinned DataTip will cause it to display on the screen. Below you can see what happens when I hover over the first pin in the editor - it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them: This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.
    Summary

    Visual Studio 2010 includes a bunch of great new debugger features – both big and small. Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions). I’ll be covering some of the “big big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Creating vCard action result

    - by DigiMortal
    I added support for vCards to one of my ASP.NET MVC applications. I worked out vCard support as a very simple solution that fits perfectly into ASP.NET MVC applications. In this posting I will show you how to send vCards out as a response to an ASP.NET MVC request. We need three things: a vCard class, a vCard action result, and a controller method to test the vCard action result. Everything is very simple; let's get hands on.

    vCard class

    As a first thing we need a vCard class. Last year I introduced a vCard class that also supports images. Let's take this class because it is easy to use and some dirty work is already done for us. NB! Take a look at the ASP.NET example in the blog posting referred to above. We need it later when we close the topic. Now think about how useful blogging and information sharing with others can be. With this class publicly available I saved pretty much time now. :)

    vCardResult

    As we have a vCard, it is now time to write the action result that we can use in our controllers. Here's the code.

        public class vCardResult : ActionResult
        {
            private vCard _card;

            protected vCardResult() { }

            public vCardResult(vCard card)
            {
                _card = card;
            }

            public override void ExecuteResult(ControllerContext context)
            {
                var response = context.HttpContext.Response;
                response.ContentType = "text/vcard";
                // quoting the file name keeps browsers happy when it contains a space
                response.AddHeader("Content-Disposition",
                    "attachment; filename=\"" + _card.FirstName + " " + _card.LastName + ".vcf\"");

                var cardString = _card.ToString();
                var inputEncoding = Encoding.Default;
                // Outlook of this era expected a legacy code page, hence the conversion
                var outputEncoding = Encoding.GetEncoding("windows-1257");
                var cardBytes = inputEncoding.GetBytes(cardString);

                var outputBytes = Encoding.Convert(inputEncoding, outputEncoding, cardBytes);

                response.OutputStream.Write(outputBytes, 0, outputBytes.Length);
            }
        }

    And we are done. Some notes:

    * The vCard is sent to the browser as a downloadable file (the user can save it or open it with Outlook or any other e-mail client that supports vCards),
    * The file name is made of the first and last name of the contact,
    * The encoding is important because Outlook may not understand vCards otherwise (I don't know if this problem is solved in Outlook 2010).

    Using vCardResult in controller

    Now let's take a look at a simple controller method that accepts a person ID and returns a vCardResult.

        public class ContactsController : Controller
        {
            // ... other controller methods ...

            public vCardResult vCard(int id)
            {
                var person = _partyRepository.GetPersonById(id);
                var card = new vCard
                {
                    FirstName = person.FirstName,
                    LastName = person.LastName,
                    StreetAddress = person.StreetAddress,
                    City = person.City,
                    CountryName = person.Country.Name,
                    Mobile = person.Mobile,
                    Phone = person.Phone,
                    Email = person.Email,
                };

                return new vCardResult(card);
            }
        }

    Now you can run Visual Studio and check out how your vCard moves from your web application to your e-mail client.

    Conclusion

    We took old code that worked well with ASP.NET Forms and we divided it into an action result and a controller method that uses vCard as a bridge between our controller and the action result. All functionality is located where it should be, and we did nothing complex. We wrote only a couple of lines of very easy code to achieve our goal.
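    As a quick usage note of my own (hypothetical, not from the original posting): a view can link to this action like any other controller action. For example, with Razor syntax and a model that exposes the person's Id:

        @* Hypothetical view snippet: renders a download link for the contact's vCard *@
        @Html.ActionLink("Download vCard", "vCard", "Contacts", new { id = Model.Id }, null)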
Do you understand now why I love ASP.NET MVC? :)


  • Announcing: Improvements to the Windows Azure Portal

    - by ScottGu
    Earlier today we released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    * Service Bus Management and Monitoring
    * Support for Managing Co-administrators
    * Import/Export support for SQL Databases
    * Virtual Machine Experience Enhancements
    * Improved Cloud Service Status Notifications
    * Media Services Monitoring Support
    * Storage Container Creation and Access Control Support

    All of these improvements are now live in production and available to start using immediately. Below are more details on them:

    Service Bus Management and Monitoring

    The new Windows Azure Management Portal now supports Service Bus management and monitoring. Service Bus provides rich messaging infrastructure that can sit between applications (or between cloud and on-premise environments) and allow them to communicate in a loosely coupled way for improved scale and resiliency. With the new Service Bus experience, you can now create and manage Service Bus Namespaces, Queues, Topics, Relays and Subscriptions. You can also get rich monitoring for Service Bus Queues, Topics and Subscriptions.

    To create a Service Bus namespace, you can now select the “Service Bus” tab in the Windows Azure portal and then simply select the CREATE command: Doing so will bring up a new “Create a Namespace” dialog that allows you to name and create a new Service Bus Namespace: Once created, you can obtain security credentials associated with the Namespace via the ACCESS KEY command. This gives you the ability to obtain the connection string associated with the service namespace. You can copy and paste these values into any application that requires these credentials (a short code sketch later in this post shows one way to use them):

    It is also now easy to create Service Bus Queues and Topics via the NEW experience in the portal drawer. Simply click the NEW command and navigate to the “App Services” category to create a new Service Bus entity: Once you provision a new Queue or Topic it can be managed in the portal. Clicking on a namespace will display all queues and topics within it: Clicking on an item in the list will allow you to drill down into a dashboard view that allows you to monitor the activity and traffic within it, as well as perform operations on it. For example, below is a view of an “orders” queue – note how we now surface both the incoming and outgoing message flow rate, as well as the total queue length and queue size: To monitor pub/sub subscriptions you can use the ADD METRICS command within a topic and select a specific subscription to monitor.

    Support for Managing Co-Administrators

    You can now add co-administrators for your Windows Azure subscription using the new Windows Azure Portal. This allows you to share management of your Windows Azure services with other users. Subscription co-administrators share the same administrative rights and permissions that the service administrator has - except a co-administrator cannot change or view billing details about the account, nor remove the service administrator from a subscription.

    In the SETTINGS section, click on the ADMINISTRATORS tab, and select the ADD button to add a co-administrator to your subscription: To add a co-administrator, you specify the email address for a Microsoft account (formerly Windows Live ID) or an organizational account, and choose the subscription you want to add them to: You can later update the subscriptions that the co-administrator has access to by clicking on the EDIT button, and then selecting or deselecting the subscriptions to which they belong.
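    Returning to the Service Bus section above, as promised: once you have copied a namespace connection string from the ACCESS KEY dialog, sending and receiving with a queue takes only a few lines using the Microsoft.ServiceBus.Messaging client from the Azure SDK of this era. This is a hedged sketch - the connection string value is a placeholder, and the "orders" queue name is borrowed from the screenshot described earlier:

        // Requires the Windows Azure Service Bus client library (Microsoft.ServiceBus.dll)
        using Microsoft.ServiceBus.Messaging;

        class OrdersQueueSketch
        {
            static void Main()
            {
                // Placeholder: paste the connection string obtained via the ACCESS KEY command
                var connectionString = "Endpoint=sb://your-namespace.servicebus.windows.net/;...";
                var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

                // Send a message to the queue
                client.Send(new BrokeredMessage("New order received"));

                // Receive it back and mark it complete so it is removed from the queue
                BrokeredMessage message = client.Receive();
                if (message != null)
                {
                    System.Console.WriteLine(message.GetBody<string>());
                    message.Complete();
                }
            }
        }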
    Import/Export Support for SQL Databases

    The Windows Azure administration portal now supports importing and exporting SQL Databases to/from Blob Storage. Databases can be imported/exported to blob storage using the same BACPAC file format that is supported with SQL Server 2012. Among other benefits, this makes it easy to copy and migrate databases between on-premise and cloud environments.

    SQL Databases now have an EXPORT command in the bottom drawer that, when pressed, will prompt you to save your database to a Windows Azure storage container: The UI allows you to choose an existing storage account or create a new one, as well as the name of the BACPAC file to persist in blob storage: You can also now import and create a new SQL Database by using the NEW command. This will prompt you to select the storage container and file to import the database from: The Windows Azure Portal enables you to monitor the progress of import and export operations. If you choose to log out of the portal, you can come back later and check on the status of all of the operations in the new history tab of the SQL Database server – this shows your entire import and export history and the status (success/fail) of each:

    Enhancements to the Virtual Machine Experience

    One of the common pain-points we have heard from customers using the preview of our new Virtual Machine support has been the inability to delete the associated VHDs when a VM instance (or VM drive) gets deleted. Prior to today’s release the VHDs would continue to be in your storage account and accumulate storage charges. You can now navigate to the Disks tab within the Virtual Machine extension, select a VM disk to delete, and click the DELETE DISK command: When you click the DELETE DISK button you have the option to delete the disk + associated .VHD file (completely clearing it from storage). Alternatively you can delete the disk but still retain a .VHD copy of it in storage.

    Improved Cloud Service Status Notifications

    The Windows Azure portal now exposes more information about the health status of role instances. If any of the instances are in a non-running state, the status at the top of the dashboard will summarize the status (and update automatically as the role health changes): Clicking the instance hyperlink within this status summary view will navigate you to a detailed role instance view, and allow you to get more detailed health status of each of the instances. The portal has been updated to provide more specific status information within this detailed view – giving you better visibility into the health of your app:

    Monitoring Support for Media Services

    Windows Azure Media Services allows you to create media processing jobs (for example: encoding media files) in your Windows Azure Media Services account. In the Windows Azure Portal, you can now monitor the number of encoding jobs that are queued up for processing as well as active, failed and queued tasks for encoding jobs. On your media services account dashboard, you can visualize the monitoring data for the last 6 hours, 24 hours or 7 days.

    Storage Container Creation and Access Control Support

    You can now create Windows Azure Storage containers from within the Windows Azure Portal.
    After selecting a storage account, you can navigate to the CONTAINERS tab and click the ADD CONTAINER command: This will display a dialog that lets you name the new container and control access to it: You can also update the access setting as well as the container metadata of existing containers by selecting one and then using the new EDIT CONTAINER command: This will then bring up the edit container dialog that allows you to change and save its settings: In addition to creating and editing containers, you can click on them within the portal to drill in and view the blobs within them.

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    We’ll have even more new features and enhancements coming later this month – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5, and support .NET 4.5 with Websites). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we’ve released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    * Localization Support for 6 languages
    * Operation Log Support
    * Support for SQL Database Metrics
    * Virtual Machine Enhancements (quick create Windows + Linux VMs)
    * Web Site Enhancements (support for creating sites in all regions, private GitHub repo deployment)
    * Cloud Service Improvements (deploy from storage account, configuration support of dedicated cache)
    * Media Service Enhancements (upload, encode, publish, stream all from within the portal)
    * Virtual Networking Usability Enhancements
    * Custom CNAME support with Storage Accounts

    All of these improvements are now live in production and available to start using immediately. Below are more details on them:

    Localization Support

    The Windows Azure Portal now supports 6 languages – English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the Avatar bar on the top right corner of the Portal: Selecting a different language will automatically refresh the UI within the portal in the selected language:

    Operation Log Support

    The Windows Azure Portal now supports the ability for administrators to review the “operation logs” of the services they manage – making it easy to see exactly what management operations were performed on them. You can query for these by selecting the “Settings” tab within the Portal and then choosing the “Operation Logs” tab within it. This displays a filter UI that enables you to query for operations by date and time: As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts. You can click on any operation in the list and click the “Details” button in the command bar to retrieve detailed status about it. This now makes it possible to retrieve details about every management operation performed. In future updates you’ll see us extend the operation log capability to apply to all Windows Azure Services – which will enable great post-mortem and audit support.

    Support for SQL Database Metrics

    You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new “Dashboard” view provided on each SQL Database resource: Additionally, if the database is added as a “linked resource” to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

    Enhancements to Virtual Machines

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines, starting with an integrated Quick Create experience for Windows and Linux VMs. Creating a new Windows or Linux VM is now easy using the new “Quick Create” experience in the Portal: In addition to Windows VM templates you can also now select Linux image templates in the quick create UI: This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

    Enhancements to Web Sites

    Prior to this past month’s release, users were forced to choose a single geographical region when creating their first site. After that, subsequent sites could only be created in that same region.
    This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region: One of the new regions we’ve recently opened up is the “East Asia” region. This allows you to now deploy sites to North America, Europe and Asia simultaneously.

    This past week we also enabled Git based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previous to this you could only enable this with public repositories).

    Enhancements to Cloud Services Experience

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Cloud Services. The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog, and choose to upload from a Windows Azure Storage Account: To upload an application package from storage, click the “FROM STORAGE” button and select the application package and configuration file to use from the new blob storage explorer in the portal.

    If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab for your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role. The portal now allows you to add or remove named caches and change the settings for the named caches – all from within the Portal and without needing to redeploy your application.

    Enhancements to Media Services

    You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal. This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the “Content” tab. All of the media content within your media service account will be listed here: Clicking the “upload” button within the portal now allows you to upload a media file directly from your computer: This will cause the video file you chose from your local file-system to be uploaded into Windows Azure. Once uploaded, you can select the file within the content tab of the Portal and click the “Encode” button to transcode it into different streaming formats: The portal includes a number of pre-set encoding formats that you can easily convert media content into: Once you select an encoding and click the ok button, Windows Azure Media Services will kick off an encoding job that will happen in the cloud (no need for you to stand up or configure a custom encoding server). When it’s finished, you can select the video in the “Content” tab and then click PUBLISH in the command bar to set up an origin streaming end-point to it: Once the media file is published you can point apps against the public URL and play the content using Windows Azure Media Services – no need to set up or run your own streaming server.
    You can also now select the file and click the “Play” button in the command bar to play it using the streaming endpoint directly within the Portal: This makes it incredibly easy to try out and use Windows Azure Media Services and test an end-to-end workflow without having to write any code. Once you test things out you can of course automate it using script or code – providing you with an incredibly powerful cloud media platform that you can use.

    Enhancements to Virtual Network Experience

    Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard to create a VNET / subnet. This makes creating virtual networks really simple: The portal also now has a “Register DNS Server” task that makes it easy to register DNS servers and associate them with a virtual network.

    Enhancements to Storage Experience

    The portal now lets you register custom domain names for your Windows Azure Storage Accounts. To enable this, select a storage resource and then go to the CONFIGURE tab for a storage account, and then click MANAGE DOMAIN on the command bar: Clicking “Manage Domain” will bring up a dialog that allows you to register any CNAME you want:

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store – which makes it incredibly easy to try and purchase developer services from a variety of partners. It is an incredibly awesome new capability – and something I’ll be doing a dedicated post about shortly.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Day 3 - XNA: Hacking around with images

    - by dapostolov
    Yay! Today I'm going to get into some code! My mind has been on this all day! I find it amusing how I practice, daily, to be "in the moment" or "present" and the excitement and anticipation of this project seems to snatch it away from me frequently. WELL!!! (Shakes Excitedly) Let's do this =)! Let's code!

    For these next few days it is my intention to better understand image rendering using XNA; after said prototypes are complete I should (fingers crossed) be able to dive into my game code using the design document I hammered out the other night. On a personal note, I think the toughest thing right now is finding the time to do this project. Each night, after my little ones go to bed, I can only really afford a couple hours of work on this project. However, I hope to utilise this time as best as I can because this is the first time in a while I've found a project that I've been passionate about. A friend recently asked me if I intend to go 3D or extend the game design. Yes. For now I'm keeping it simple.

    Lastly, just as a note, as I was doing some further research into image rendering this morning I came across some other XNA content and lessons learned. I believe this content could have probably been posted in the first couple of posts; however, I will share the new content as I learn it at the end of each day. Maybe I'll take some time later to fix the posts, but for now...

    Installation and Deployment - Lessons Learned

    I had installed the XNA studio (Day 1) and the site instructions were pretty easy to follow. However, I had a small difficulty with my development environment. You see, I run a virtual desktop development environment. Even though I was able to code and compile all the tutorials, the game failed to run... because I lacked a 3D capable card; it was not detected on the virtual box...

    First Lesson: The XNA runtime needs to "see" the 3D card! No sweat, I copied the files over to my parent box and executed the program. ERROR. Hmm...

    Second Lesson (which I should have probably known but I let the excitement get the better of me): you need the XNA runtime on the client PC to run the game, oh, and don't forget the .Net Runtime!

    Sprite, it ain't just a Soft Drink...

    With these prototypes I intend to understand and perform the following tasks:

    * learn game development terminology
    * how to place and position (rotate) a static image on the screen
    * how to layer static images on the screen
    * understand image scaling
    * can we reuse images?
    * understand how framerate is handled in XNA
    * how to display text, basic shapes, and colors on the screen
    * how to interact with an image (collision of user input?)
    * how to animate an image and understand basic animation techniques
    * how to detect colliding images or screen edges
    * how to manipulate the image, let's say colors, stretching
    * how to focus on a segment of an image... like only displaying a frame on a film reel
    * what's the best way to manage images (compression, storage, location, prevent artwork theft, etc.)

    Well, let's start with this "prototype" task list for now... Today, let's get an image on the screen and maybe I can mark a few of the tasks as completed...

    C# Prototype1

    1. New Visual Studio Project
    2. Select the XNA Game Studio 3.1 Project Type
    3. Select the Windows Game 3.1 Template
    4. Type Prototype1 in the Name textbox provided
    5. Press OK.

    At this point code has auto-magically been created. Feel free to press the F5 key to run your first XNA program. You should have a blue screen in front of you.
    Without getting into the nitty gritty right now, the generated code basically clears the window content with the lovely CornflowerBlue color. Something to notice: when you move your mouse into the window... nothing. ooooo spoooky. Let's put an image on that screen!

    Step A - Get an Image into the solution

    Under "Content" in your Solution Explorer, right click and add a new folder and name it "Sprites". Copy a small image in there; I copied a "Royalty Free" wizard hat from a quick google search and named it wizards_hat.jpg (rightfully so!)

    Step B - Add the sprite and position fields

    Now, open/edit Game1.cs and locate the following line:

        SpriteBatch spriteBatch;

    Under this line type the following:

        SpriteBatch spriteBatch; // the line you are looking for...
        Texture2D sprite;
        Vector2 position;

    Step C - Load the image asset

    Locate the "LoadContent" method and duplicate the following:

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);

            // your image name goes here...
            sprite = Content.Load<Texture2D>("Sprites\\wizards_hat");
            position = new Vector2(200, 100);

            base.LoadContent();
        }

    Step D - Draw the image

    Locate the "Draw" method and duplicate the following:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
            spriteBatch.Draw(sprite, position, Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }

    Step E - Compile and Run

    Engage! (F5) - Debug! Your image should now display on a cornflower blue window about 200 pixels from the left and 100 pixels from the top. Awesome! =)

    Pretty cool how we only coded a few lines to display an image, but believe me, there is plenty going on behind the scenes. However, for now, I'm going to call it a night here. Blogging all this progress certainly takes time... However, tomorrow night I'm going to detail what we just did, plus start checking off points on that list! I'm wondering right now if I should add pictures / code to this post... let me know if you want them =)

    Best Regards,

    D.

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it:

        PM> Install-Package EntityFramework -Pre

    This week's alpha release includes a bunch of great improvements in the following areas:

    - Async language support is now available for queries and updates when running on .NET 4.5.
    - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
    - Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
    - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
    - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
    - Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

    Below are some additional details about a few of the improvements above:

    Async Support

    .NET 4.5 introduced the Task-Based Asynchronous Pattern, which uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications, as database calls made through EF can now be processed asynchronously, avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

        public class HomeController : Controller
        {
            LocationContext db = new LocationContext();

            public async Task<ActionResult> Index()
            {
                var locations = await db.Locations.ToListAsync();

                return View(locations);
            }
        }

    Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application.

    Custom Conventions

    When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with "ID" and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in "Key" instead of "Id".
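    A convention along those lines might look roughly like the following sketch. (Hedged: the lightweight convention API was still settling during the alpha releases, so the exact method names may differ; "PersonKey" on a Person class is a made-up example.)

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Treat any property named "<ClassName>Key" (e.g. PersonKey on Person)
            // as the entity's primary key, instead of the default Id-based convention.
            modelBuilder.Properties()
                .Where(p => p.Name == p.DeclaringType.Name + "Key")
                .Configure(c => c.IsKey());
        }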
    Custom conventions allow the default conventions to be overridden, or new conventions to be added, so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));
        }

    But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property, simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

    Community Involvement

    I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions, and these are included in EF6 alpha 2. Two examples of community contributions are:

    - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
    - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations
                .AddFromAssembly(typeof(LocationContext).Assembly);
        }

    This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and the Code First configuration classes, and is also a very convenient shortcut for large models.

    Other upcoming features coming in EF 6

    Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures, common in cloud environments and with databases such as Windows Azure SQL Database. Another often-requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

    Summary

    EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion, or file a bug on the CodePlex site.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Linux HA cluster w/Xen, Heartbeat, Pacemaker. domU does not failover to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the domU is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes, one for root and the other for swap. I am using IPMI cards for STONITH devices, and a dedicated ethernet link for heartbeat communications.

    The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        crm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
            meta target-role="Started" \
            operations $id="Xen-Util-operations" \
            op start interval="0" timeout="60" start-delay="0" \
            op stop interval="0" timeout="120" \
            params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
            params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
            params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
            dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
            cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w',
                 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It seems to want to try: an "xm list" will show it for a few seconds, and if you "xm console xen-util" it will give a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". Now, when node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running, and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine. No problems. So, I know it works in that respect. Any ideas? Thanks!

    Read the article

  • Communication software wanted: email, sms, IM, phone calls [closed]

    - by user63835
    I am searching for a software solution that integrates / unifies my communication. I use email, instant messaging, SMS, and phone. I would like to get all emails, SMS, instant messaging dialogs, and meta-data about phone calls into one application. Important is that I can access all past communication with one application. There should be a global address book to map the communication data to persons or organizations. I want all the communication data in one place so I can access and back it up easily.

    The software solution is not required to be a multi-user or server application. It is just for one user (me), but server or multi-user applications are not excluded. I may run it on server hardware. It should run on Linux (Lubuntu / Ubuntu preferred). Free and open source software is preferred.

    It would be nice if I could perform new communication (like writing a new email, SMS, etc.) with one application, but that is not a must-have requirement. I could also work with different applications dedicated to different types of communication, like an IM application for IM and an email application for email, if all the communication data from the specialized applications is delivered to one single place where I can access and back it up.

    I have an Android phone and currently I am using Google contacts as the address book. In the long term this may change, to get back control over my data.

    I did some Internet searching but did not find a nice solution yet. If I am looking for unified messaging and unified communication, am I on the right track? The current Thunderbird version has IM functionality integrated; I have not tried it yet. For SMS it may be possible to use an app to send every SMS (incoming and outgoing) as an email, but I am not sure if those SMS-emails can be mapped to an address book contact. I don't remember exactly, but isn't there a Google Android app (I think Google Voice) integrating SMS into Google services? In Germany this function has not been released yet. Maybe a groupware solution would meet the requirements, but I don't have much experience with groupware.

    As communication possibilities keep growing, I am wondering why there seems to be such a big gap in solutions. I can't believe I am the only one who would like a solution that better integrates all the communication channels. If you know a software solution that (even partly) meets these requirements, I would be glad if you told me about it. Thanks in advance.

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine: another instance was installed on the second server without problem, and MySQL (running on the same machine as Redmine) was configured as master-master replication. Because they are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun 5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============
        Online: [ node1 node2 ]
        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms

        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures

    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd is run with the below configuration file (on node1), /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than its user manual says I should expect. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    (/ is a logical volume on the disk I'm testing with, mounted without DRBD.)

    iperf:

        [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec

    According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s.

    So, with the raw drbd device, I get:

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get:

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global { usage-count yes; }
        common { protocol C; }
        syncer { al-extents 1801; rate 33M; }

    data_mirror.res:

        resource data_mirror {
            device /dev/drbd1;
            disk /dev/sdb1;
            meta-disk internal;
            on cluster1 { address 192.168.33.10:7789; }
            on cluster2 { address 192.168.33.12:7789; }
        }

    For the hardware I have two identical machines:

    - 6 GB RAM
    - Quad-core AMD Phenom 3.2 GHz
    - Motherboard SATA controller
    - 7200 RPM, 64 MB cache, 1 TB WD drive

    The network is 1 Gb, connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edit: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:

    - avg ~450 Mbits writing to ext4
    - avg ~800 Mbits writing to raw device

    It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.

    Read the article

  • Windows 7 Bluescreen: IRQ_NOT_LESS_OR_EQUAL | athrxusb.sys

    - by wretrOvian
    I'd left my system on last night, and found the bluescreen in the morning. This has been happening occasionally over the past few days. Details:

        ==================================================
        Dump File : 022710-18236-01.dmp
        Crash Time : 2/27/2010 8:46:44 AM
        Bug Check String : DRIVER_IRQL_NOT_LESS_OR_EQUAL
        Bug Check Code : 0x000000d1
        Parameter 1 : 00000000`00001001
        Parameter 2 : 00000000`00000002
        Parameter 3 : 00000000`00000000
        Parameter 4 : fffff880`06b5c0e1
        Caused By Driver : athrxusb.sys
        Caused By Address : athrxusb.sys+760e1
        File Description :
        Product Name :
        Company :
        File Version :
        Processor : x64
        Computer Name :
        Full Path : C:\Windows\minidump\022710-18236-01.dmp
        Processors Count : 2
        Major Version : 15
        Minor Version : 7600
        ==================================================

    HijackThis ("[...]" indicates removed text; full log posted to pastebin):

        Logfile of Trend Micro HijackThis v2.0.2
        Scan saved at 8:49:15 AM, on 2/27/2010
        Platform: Unknown Windows (WinNT 6.01.3504)
        MSIE: Internet Explorer v8.00 (8.00.7600.16385)
        Boot mode: Normal

        Running processes:
        C:\Windows\DAODx.exe
        C:\Program Files (x86)\Asus\EPU\EPU.exe
        C:\Program Files\Asus\TurboV\TurboV.exe
        C:\Program Files (x86)\PowerISO\PWRISOVM.EXE
        C:\Program Files (x86)\OpenOffice.org 3\program\soffice.exe
        C:\Program Files (x86)\OpenOffice.org 3\program\soffice.bin
        D:\Downloads\HijackThis.exe
        C:\Program Files (x86)\uTorrent\uTorrent.exe

        R1 - HKCU\Software\Microsoft\Internet Explorer\[...]
        [...]
        O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll
        O4 - HKLM\..\Run: [HDAudDeck] C:\Program Files (x86)\VIA\VIAudioi\VDeck\VDeck.exe -r
        O4 - HKLM\..\Run: [StartCCC] "C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static\CLIStart.exe" MSRun
        O4 - HKLM\..\Run: [TurboV] "C:\Program Files\Asus\TurboV\TurboV.exe"
        O4 - HKLM\..\Run: [PWRISOVM.EXE] C:\Program Files (x86)\PowerISO\PWRISOVM.EXE
        O4 - HKLM\..\Run: [googletalk] C:\Program Files (x86)\Google\Google Talk\googletalk.exe /autostart
        O4 - HKLM\..\Run: [AdobeCS4ServiceManager] "C:\Program Files (x86)\Common Files\Adobe\CS4ServiceManager\CS4ServiceManager.exe" -launchedbylogin
        O4 - HKCU\..\Run: [uTorrent] "C:\Program Files (x86)\uTorrent\uTorrent.exe"
        O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE')
        O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE')
        O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE')
        O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE')
        O4 - Startup: OpenOffice.org 3.1.lnk = C:\Program Files (x86)\OpenOffice.org 3\program\quickstart.exe
        O13 - Gopher Prefix:
        O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing)
        O23 - Service: AMD External Events Utility - Unknown owner - C:\Windows\system32\atiesrxx.exe (file missing)
        O23 - Service: Asus System Control Service (AsSysCtrlService) - Unknown owner - C:\Program Files (x86)\Asus\AsSysCtrlService\1.00.02\AsSysCtrlService.exe
        O23 - Service: DeviceVM Meta Data Export Service (DvmMDES) - DeviceVM - C:\Asus.SYS\config\DVMExportService.exe
        O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing)
        O23 - Service: ESET HTTP Server (EhttpSrv) - ESET - C:\Program Files\ESET\ESET NOD32 Antivirus\EHttpSrv.exe
        O23 - Service: ESET Service (ekrn) - ESET - C:\Program Files\ESET\ESET NOD32 Antivirus\x86\ekrn.exe
        O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing)
        O23 - Service: FLEXnet Licens

    Read the article

  • Tomcat can't talk to MySql after outage

    - by gav
    I missed a payment for my server and they suspended my account for a day or so. When they brought the server back up all my data was intact, but for some reason Tomcat can't make a JDBC connection to my MySQL server. They both run on the same machine, and hence I have a bind address of 127.0.0.1. It's strange because I have reset the machine of my own accord before without issue, but clearly something has been reset in the downtime. I followed this guide originally (just the bits which don't concern S3; I am not on Amazon infrastructure) and everything worked as expected. I'm very new to being a sysadmin and I'm not sure what to try. How would you go about diagnosing this issue? The stack trace I get is as follows:

        INFO: Deploying web application archive myapp-1.1.war
        2010-05-26 22:07:22,221 [main] ERROR context.ContextLoader - Context initialization failed
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'messageSource': Initialization of bean failed; nested exception is
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'transactionManager': Cannot resolve reference to bean 'sessionFactory' while setting bean property 'sessionFactory'; nested exception is
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory': Cannot resolve reference to bean 'hibernateProperties' while setting bean property 'hibernateProperties'; nested exception is
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'hibernateProperties': Cannot resolve reference to bean 'dialectDetector' while setting bean property 'properties' with key [hibernate.dialect]; nested exception is
        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dialectDetector': Invocation of init method failed; nested exception is
        org.springframework.jdbc.support.MetaDataAccessException: Could not get Connection for extracting meta data; nested exception is
        org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is
        org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure

        The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:519)
            at org.codehaus.groovy.grails.commons.spring.ReloadAwareAutowireCapableBeanFactory.doCreateBean(ReloadAwareAutowireCapableBeanFactory.java:129)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
            at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
            at org.springframework.context.support.AbstractApplicationContext.initMessageSource(AbstractApplicationContext.java:714)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:404)
            at org.codehaus.groovy.grails.commons.spring.GrailsWebApplicationContext.refresh(GrailsWebApplicationContext.java:153)
            ...

    I get this error for a number of 'beans'. If I type mysql at my command prompt then I can easily log in with the same credentials as my Grails app, which uses GORM and Hibernate to persist objects to the DB. I might not have given enough info to start with, but I'm really interested to learn and will certainly provide it if asked; I just really don't know where to start on this one. Thanks, Gav
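    One detail worth spelling out, since it narrows the search: the mysql command-line client connects to localhost over the local Unix socket by default, while JDBC always connects over TCP, so a working CLI login does not prove that mysqld is listening on 127.0.0.1:3306. A quick hedged check with standard tools (youruser is a placeholder; nothing here is specific to this setup) would be:

        # is mysqld listening on TCP at all?
        netstat -tlnp | grep 3306

        # force the CLI over TCP instead of the socket, mimicking what JDBC does
        mysql -h 127.0.0.1 -P 3306 -u youruser -p

        # look for options that disable or restrict TCP listening
        grep -E 'skip-networking|bind-address' /etc/my.cnf

    If the second command fails the same way the driver does, those my.cnf options (or a firewall rule) are likely places to look.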

    Read the article

  • CommunicationException when shutting down JBoss 4.2.2

    - by Brian
    I have deployed an application using JBoss 4.2.2 on a 64-bit RHEL5 server. Since there are other JBoss servers, I had to change some port configurations so that there would be no conflicts when starting the server. So right now I'm using ports-01 from the sample-bindings.xml file that came in the docs/examples/binding-manager/samples directory. In addition, below is a list of all the files I've edited to reflect the new ports:

    JBOSS_HOME/server/default/deploy/jboss-web.deployer/server.xml:
    - Changed Connector port 8080 to 8180
    - Changed AJP 1.3 Connector port 8009 to 8109

    JBOSS_HOME/server/default/deploy/jbossws.beans/META-INF/jboss-beans.xml:
    - Changed 8080 to 8180

    JBOSS_HOME/server/default/conf/jboss-service.xml:
    - Changed 8083 to 8183
    - Changed 1099 to 1299
    - Changed 1098 to 1298
    - Changed 4444 to 4644
    - Changed 4445 to 4645
    - Changed 4446 to 4646
    - Changed 4447 to 4647

    JBOSS_HOME/server/default/conf/jboss-minimal.xml:
    - Changed 1099 to 1299
    - Changed 1098 to 1298

    When I start the server (binding to localhost) everything is fine and I'm able to access the application. But when I try to shut down the server I get the following error:

        Exception in thread "main" javax.naming.CommunicationException: Could not obtain connection to any of these urls: localhost [Root exception is javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]]
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1562)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:634)
            at org.jnp.interfaces.NamingContext.lookup(NamingContext.java:627)
            at javax.naming.InitialContext.lookup(InitialContext.java:392)
            at org.jboss.Shutdown.main(Shutdown.java:214)
        Caused by: javax.naming.CommunicationException: Failed to connect to server localhost:1099 [Root exception is javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:274)
            at org.jnp.interfaces.NamingContext.checkRef(NamingContext.java:1533)
            ... 4 more
        Caused by: javax.naming.ServiceUnavailableException: Failed to connect to server localhost:1099 [Root exception is java.net.ConnectException: Connection refused]
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:248)
            ... 5 more
        Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:525)
            at java.net.Socket.connect(Socket.java:475)
            at java.net.Socket.<init>(Socket.java:372)
            at java.net.Socket.<init>(Socket.java:273)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:84)
            at org.jnp.interfaces.TimedSocketFactory.createSocket(TimedSocketFactory.java:77)
            at org.jnp.interfaces.NamingContext.getServer(NamingContext.java:244)
            ... 5 more

    Is there any other file that I need to change the 1099 to 1299 in, or am I missing some other step?
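    The error hints at one thing worth double-checking: the shutdown client does its JNDI lookup against port 1099 by default, regardless of what the server itself now binds, so after moving the naming service you have to point shutdown.sh at the new port explicitly. Something along these lines (the flags are from JBoss 4.x's shutdown script; verify against ./shutdown.sh --help on your install):

        # tell the shutdown client where the (moved) naming service lives
        ./shutdown.sh -s jnp://localhost:1299 -S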

    Read the article

  • How do I configure a site in IIS 7 for SSL with a wildcard certificate?

    - by michielvoo
    We have a Windows 2008 server with IIS 7 to test sites we develop for our clients. Each site has a binding on a subdomain:

    - clienta.example.com
    - clientb.example.com
    - clientc.example.com

    (* Using example.com to protect the innocent.)

    For one of these sites we now have to test if it works over https, so I have created a wildcard certificate request with *.example.com as the common name. I have received the certificate (issued by PositiveSSL SA) and completed the request. The certificate is now installed in IIS. Now I have added an https binding to the second site with the following settings:

    - Type: https
    - IP address: All Unassigned
    - Port: 443
    - Host name: clientb.example.com
    - SSL certificate: *.example.com

    Browsing the site over regular http works fine. When I try to browse the site over https I get the following errors (depending on the browser used):

    Chrome:

        This webpage is not available
        Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error.

    Firefox:

        Unable to connect
        Firefox can't establish a connection to the server at clientb.example.com

    (Firebug says: Status: Aborted.)

    Internet Explorer:

        Internet Explorer cannot display the webpage

    I have checked Failed Request Tracing, and according to the log the request was completed with status 200. I have run the SSL Diagnostics Tool with the following result:

        System time: Fri, 04 Mar 2011 14:04:35 GMT
        Connecting to 192.168.2.95:443
        Connected
        Handshake: 115 bytes sent
        Handshake: 3877 bytes received
        Handshake: 326 bytes sent
        Handshake: 59 bytes received
        Handshake succeeded
        Verifying server certificate, it might take a while...
        Server certificate name: *.example.com
        Server certificate subject: OU=Domain Control Validated, OU=PositiveSSL Wildcard, CN=*.example.com
        Server certificate issuer: C=GB, S=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=PositiveSSL CA
        Server certificate validity: From 2-3-2011 1:00:00 To 2-3-2012 0:59:59
        HTTPS request:
        GET / HTTP/1.0
        User-Agent: SSLDiag
        Accept:*/*
        HTTPS: 85 bytes of encrypted data sent
        HTTPS: 533 bytes of encrypted data received
        Status: HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        Content-Type: text/html; charset=us-ascii
        Server: Microsoft-HTTPAPI/2.0
        Date: Fri, 04 Mar 2011 14:04:35 GMT
        Connection: close
        Content-Length: 315

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <HTML><HEAD><TITLE>Not Found</TITLE>
        <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
        <BODY><h2>Not Found</h2>
        <hr><p>HTTP Error 404. The requested resource is not found.</p>
        </BODY></HTML>

        HTTPS: server disconnected
        Final handshake: 37 bytes sent successfully

    Q: What can I do to make this work?

    Read the article

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like serverfault does). I've looked at twiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features. Now, how do we support both products in the same KB?

    The naive method is to copy all articles from 1.0, update them for 2.0, and add and remove articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on meta-data like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tag - an example search might be "version_1 printing" - which straight away gives a list of articles with all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with drupal, but I was hoping for something that worked out of the box.

    Read the article

  • One user sometimes gets an unknown certificate error opening Outlook

    - by Chris
    Let me clarify a little. This isn't an unknown certificate error so much as a certificate error where I can't figure out where the certificate comes from. This happens on a Win 7 Enterprise machine connecting to Exchange 2010 with Outlook 2010. The error he gets is that the root is not trusted because it's a self-signed cert. Take a look at this screenshot, because even if I had generated this myself I wouldn't have put "SomeOrganizationalUnit" or "SomeCity" or "SomeState", etc. (The red block covers our domain name.) I'm a little concerned this is a symptom of a security breach. Exchange 2010 has three certificates installed, but none of them are this certificate. They all have different expiration dates (one is expired) and different meta-data.

    Edit: There are two scenarios in which I see the certificate warning, and one of them I can reliably repeat:

    1. When the user leaves his computer on overnight, Outlook pops the Security Warning window. I don't know what time this happens.
    2. Using Outlook Anywhere, if I connect to Exchange externally via a cellular USB modem, the Security Warning window appears every time I close and reopen Outlook.

    Whether I say Yes or No does not make a difference to whether or not I can connect to Exchange and send/receive email. In other words, I can always connect to Exchange. I've checked my two Exchange servers and my Cisco router for a certificate that matches this one, and I can't find it.

    Edit 2: Here is a screenshot of the Security Alert window. (I've been calling it Security Warning... my mistake.)

    Edit 3: I stopped seeing this error several weeks ago, but I can't tie it to any single event (because I just sort of realized that the warning had stopped showing up); however, I think I found the source of the certificate. Last week I found out that the certificate on our website DomainA.com was invalid. I knew that our web admin had installed a valid certificate, so when I looked into the problem I found out I was being presented with the invalid certificate that this posting is in regard to. The Exchange server's domain is mail.DomainA.com, so I can only guess that Outlook was passing this invalid certificate through as it did some kind of check on DomainA.com. This issue is still a mystery, because the certificate warning stopped appearing several weeks ago whereas the invalid certificate issue on the website was only fixed last week. It ended up being a problem with the website control panel: the valid certificate was installed but not being served for some reason, and instead the self-signed cert was being served.

    Read the article

  • Can't find windows 2000 domain after PDC Change

    - by Mark A Kruger
    This is a Windows 2000 domain issue. I had an old Win2000 PDC that was beginning to fail. So, trying to be pre-emptive, I installed a new BDC, then "demoted" the old PDC and took it off the network. Now it appears that no member server can "find" the domain anymore. No logins work (for services, or RDP, or anything). What I've tried (based on googling):

    - Verified sysvol is shared on all servers.
    - Used nslookup to verify that DCs are being found.
    - netdiag /fix
    - Metadata cleanup routines.
    - Verified no firewall issues (port 389, etc.).
    - Seized all roles to the new PDC (I did that as part of the original promotion).
    - LMHOSTS file and NetBIOS settings.

    At the moment it seems like I can get the DCs returned but cannot contact them. I'm at a loss. My latest attempt was to remove a member server from the domain and try to re-add it. When I do that I get this message:

        The query was for the SRV record for _ldap._tcp.dc._msdcs.cfwebtools.com
        The following domain controllers were identified by the query:
        db-dev1.cfwebtools.com
        file-prod1.cfwebtools.com
        cfwt-pdc2.cfwebtools.com
        However no domain controllers could be contacted.

    It then goes on to ask if I've checked my A record and made sure they are running. Is there a way to force this domain to be seen? I also shared sysvol (or double-checked it) and restarted the dfsr service.

    More information: I went looking at sysvol and found it was not shared on two of these servers. Only one of them (db-dev1) has a "good", or at least populated, sysvol store. So I tried doing a "d2" (non-authoritative) recovery of my PDC against that good sysvol, but it never syncs - or at least it does not seem to sync. I'm guessing that if I could get sysvol and netlogon to kick in and replicate, that would fix my issue. I think these DCs aren't responding because they are waiting for replication, which is broken somehow. Would taking down all the DCs except for db-dev1 fix the issue, at least temporarily? I know I can't just copy the sysvol stuff over to the other two, can I?

    Read the article

  • JBoss naming service port conflict

    - by Kramer
    I am having trouble getting JBoss started. I am running JBoss 5.1.0 on Mac OS X (yes, I know it is an old version, but that's what the application is certified on for now). I am using Apple's JVM 1.6.0_37. I get the following error when trying to start JBoss (there are some more exceptions, but these are the first few):

        Error installing to Start: name=jboss:service=Naming state=Create mode=Manual requiredState=Installed
        java.rmi.server.ExportException: Port already in use: 1098; nested exception is:
            java.net.BindException: Can't assign requested address
            at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:310)
            at sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:218)
        Caused by: java.net.BindException: Can't assign requested address
            at java.net.PlainSocketImpl.socketBind(Native Method)
            at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
        16:57:15,596 ERROR [AbstractKernelController] Error installing to Real: name=vfsfile:/Users/home/server/jboss-5.1.0.GA/server/myserver/conf/jboss-service.xml state=PreReal mode=Manual requiredState=Real
        org.jboss.deployers.spi.DeploymentException: Error deploying: jboss:service=Naming
            at org.jboss.deployers.spi.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:49)
            at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:118)
            at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:46)
        Caused by: java.rmi.server.ExportException: Port already in use: 1098; nested exception is:
            java.net.BindException: Can't assign requested address
            at sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:310)
        Caused by: java.net.BindException: Can't assign requested address
            at java.net.PlainSocketImpl.socketBind(Native Method)

    Now I know what you are thinking: that I am running something that conflicts with that port. But I have used lsof and there is nothing listed on that port. I have tried changing the port in conf/bindingservice.beans/META-INF/bindings-jboss-beans.xml:

        <bean class="org.jboss.services.binding.ServiceBindingMetadata">
            <property name="serviceName">jboss:service=Naming</property>
            <property name="bindingName">RmiPort</property>
            <property name="port">5098</property>
            <property name="description">Socket Naming service uses to receive RMI requests from client proxies</property>
        </bean>

    Unfortunately, I then get the same errors with the new port number. I also installed a network monitoring tool on my box, and it doesn't look like any ports are being opened when I start JBoss, but it is possible that the tool might be missing a port that is opened and then closed quickly. Any ideas what could be the problem or how to fix it?

    Read the article

  • How to find the process(es) which are hogging the machine

    - by Aaron Digulla
    Scenario: all of a sudden, my computer feels sluggish. The mouse moves, but windows take ages to open, etc. uptime says the load is 7.69 and rising. What is the fastest way to find out which process(es) are the cause of the load?

    Now, "top" and similar tools aren't the answer, because they show either CPU or memory usage, but not both at the same time. What I need is the single command which I might be able to type as it happens - something that will figure out any of:

    - System is trying to swap 8 GB of RAM to disk because process X ...
    - process X seeks all over the disk
    - process X uses 400% CPU

    So what I'm looking for is iostat, htop/atop, and similar tools rolled into one, with an output like this:

        1235 cp        - disk thrashing
        87   chrome    - uses 2 GB of RAM
        137  nfs_bench - uses 95% of the network bandwidth

    I don't want a tool that gives me some numbers which I can analyze, but a tool that tells me exactly which process causes the current load. Assume that the user in front of the keyboard barely knows how to write "process", but is quickly overwhelmed when it comes to "resident size", "virtual memory", or "process life cycle".

    My argument goes like this: a user notices a problem. There can be thousands of reasons... well, almost :-) The user wants to know the source of the problem. The current solutions give me lots of numbers, and I need to know what these numbers mean. What I'm looking for is a meta tool. 99% of the data is irrelevant to the problem. So what the tool should do is look for processes which hog some resource and list only those, along with "this process needs a lot of CPU, this produces many IRQs, this process allocates a lot of RAM (and it's still growing)". This will be a relatively short list. It will be much simpler for someone new to this to locate the culprit from this list than from the output of, say, htop, which gives me about 5000 numbers but requires me to fold multi-threaded processes myself (I have 50 lines which say VIRT 2750M but only 16 GB of RAM; the machine ought to swap itself to death, but of course this is a misinterpretation of the data that can happen quickly).
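    For what it's worth, a rough approximation of that short list can be assembled from standard tools. This is not the single meta command being asked for, just a hedged starting point (iotop and dstat may need to be installed separately):

        # top CPU and memory consumers in one table (GNU ps)
        ps -eo pid,user,comm,%cpu,%mem --sort=-%cpu | head -n 10

        # per-process disk I/O, only processes actually doing I/O (needs root)
        iotop -o -b -n 1

        # flag the single busiest process per resource, sampled every 5 seconds
        dstat --top-cpu --top-mem --top-bio 5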

    Read the article

  • Is there a way to replicate a very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files), and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant!

    Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible - think hot spare, disaster recovery, etc.

    My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough meta-data to do this intelligently. The best thing that I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression". After reading the background information on how this works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach to this I'm looking at is the following (sketched below); say it's 5 AM and the cron job finishes:

    1. The new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5.

    Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
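    For concreteness, the 5 AM approach above might look roughly like this as a script. The host and path names are made up, and the cp -al variant (hard-linking instead of copying) is an optional refinement that makes step 2 nearly free when most files are unchanged:

        #!/bin/sh
        # Hypothetical sketch of steps 1-3; adjust hosts and paths to taste.
        PREV=Version.4
        NEW=Version.5
        REMOTE=backup-host
        BASE=/srv/share

        # Step 2: seed the new folder on the remote from the previous one.
        # 'cp -al' creates hard links, so unchanged files cost no copy time or space.
        ssh "$REMOTE" "cp -al $BASE/$PREV $BASE/$NEW"

        # Step 3: push only the differences from the local copy. rsync writes
        # changed files to a temp name and renames them, which breaks the hard
        # link rather than corrupting the previous snapshot.
        rsync -a --delete "$BASE/$NEW/" "$REMOTE:$BASE/$NEW/"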

    Read the article

  • Setting Up My Server to Do DNS On OpenSuse 11.3

    - by adaykin
    Hello, I am attempting to use my server as a DNS server. I am having trouble getting the domain set up. Here is what I have so far.

    /var/lib/named/master/andydaykin.com:

        $TTL 2d
        @   IN SOA  andydaykin.com. root.andydaykin.com. (
                    2011011000 ; serial
                    0          ; refresh
                    0          ; retry
                    0          ; expiry
                    0 )        ; minimum
        andydaykin.com.     IN NS  ns1.andydaykin.com.
        andydaykin.com.     IN SOA ns1.andydaykin.com. hostmaster.andydaykin.com. (
        @.andydaykin.com.   IN NS  ns1.andydaykin.com.
        ns1.andydaykin.com. IN A   204.12.227.33
        www.andydaykin.com. IN A   204.12.227.33

    /etc/resolve.conf:

        search andydaykin.com
        nameserver 204.12.227.33

    /etc/named.conf:

        options {
            # The directory statement defines the name server's working directory
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            statistics-file "/var/log/named.stats";
            listen-on port 53 { 127.0.0.1; };
            listen-on-v6 { any; };
            notify no;
            disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
            include "/etc/named.d/forwarders.conf";
        };
        zone "." in {
            type hint;
            file "root.hint";
        };
        zone "localhost" in {
            type master;
            file "localhost.zone";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "127.0.0.zone";
        };
        # Include the meta include file generated by createNamedConfInclude.
        # This includes all files as configured in NAMED_CONF_INCLUDE_FILES
        # from /etc/sysconfig/named
        include "/etc/named.conf.include";
        zone "andydaykin.com" in {
            file "master/andydaykin.com";
            type master;
            allow-transfer { any; };
        };
        logging {
            category default { log_syslog; };
            channel log_syslog { syslog; };
        };

    What am I doing wrong?
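    For comparison, a minimal well-formed zone file for these names might look like the sketch below (same names and IP as above; the serial and timer values are just conventional placeholders, not anything this server requires). In the file as posted, the first SOA's timers are all zero, a second stray SOA record opens a parenthesis that never closes, and "@.andydaykin.com." is not a valid owner name, so lining the two versions up may help:

        $TTL 2d
        @       IN SOA  ns1.andydaykin.com. hostmaster.andydaykin.com. (
                        2011011001 ; serial
                        3h         ; refresh
                        1h         ; retry
                        1w         ; expiry
                        1d )       ; minimum
                IN NS   ns1.andydaykin.com.
        ns1     IN A    204.12.227.33
        www     IN A    204.12.227.33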

    Read the article

  • trying to validate user input in php

    - by user225269
    I'm trying to validate user input in PHP. This code checks whether the values are null; if a value is null, it requires the user to input it. Once all the text boxes in the HTML form that came before this are filled in, the code shows the submit button, and that submit button saves the inputted data into the MySQL database. The problem is that the values that get saved are zero, zero, and zero. What might be the cause of this?

        <html>
        <head>
        <title>Admission Information Sheet</title>
        <meta http-equiv="Content-Type" content="text/html; Western (ISO-8859-1)">
        <meta name="author" content=" ">
        <title>
        <style>
        input { font-size: 16px;}
        </style>
        <?php include('header.php'); ?>
        <div id="main_content">
        </div>
        <?php include('footer.php'); ?>
        <table border="1" width="900" border="0" align="left" cellpadding="0" cellspacing="1" bgcolor="#CCCCCC">
        <tr>
        <form name="form1.1" method="POST" action="aisaction.php">
        <?php
        $NURSE = $_POST[nurse];
        $TELNUM = $_POST[telnum];
        $HOSPNUM = $_POST[hnum];
        $ROOMNUM = $_POST[rnum];
        $LASTNAME = $_POST[lname];
        $FIRSTNAME = $_POST[fname];
        $MIDNAME = $_POST[mname];
        $AD = $_POST[ad];
        $ADATE = $_POST[adate];
        $ADTIME = $_POST[adtime];
        $CSTAT = $_POST[cs];
        $AGE = $_POST[age];
        $BDAY = $_POST[bday];
        $SEX = $_POST[sex];
        ?>
        <td>
        <table width="100%" border="0" cellpadding="2" cellspacing="1" bgcolor="#FFFFFF">
        <tr>
        <td colspan="12" style="background:#9ACD32; color:white; border:white 1px solid; text-align: center"><strong><font size="3">ADMISSION INFORMATION SHEET</strong></td>
        </tr>
        <tr>
        </td><br>
        <td width="54"><font size="3">Hospital #</td>
        <td width="3">:</td>
        <td width="168"><input type="display" name="hnum" disabled="true" value= "<?php print "$HOSPNUM";?>"><br>
        <font color="red">
        <?php if(empty($HOSPNUM)) print "* Hospital Number required!<br>"; ?>
        </td>
        <td width="41"><font size="3">Room #</td>
        <td width="3">:</td>
        <td width="168"><input type="display" name="rnum" disabled="true" value= "<?php print "$ROOMNUM";?>"><br>
        <font color="red">
        <?php if(empty($ROOMNUM)) print "* Room Number required!<br>"; ?>
        </td>
        <td width="67"><font size="3">Admission Date</td>
        <td width="3">:</td>
        <td width="168"><input type="display" name="adate" disabled="true" value= "<?php print "$ADATE";?>"><br>
        <font color="red">
        <?php if(empty($ADATE)) print "* Admission Date required!<br>"; ?>
        </td>
        </tr>
        <tr>
        <td><font size="3">Last Name</td>
        <td>:</td>
        <td><input type="display" name="lname" disabled="true" value= "<?php print "$LASTNAME";?>"><br>
        <font color="red">
        <?php if(empty($LASTNAME)) print "* Last Name required!<br>"; ?>
        </td>
        <td><font size="3">First Name</td>
        <td>:</td>
        <td><input type="display" name="fname" disabled="true" value= "<?php print "$FIRSTNAME";?>"><br>
        <font color="red">
        <?php if(empty($FIRSTNAME)) print "* First Name required!<br>"; ?>
        </td>
        <td><font size="3">Middle Name</td>
        <td>:</td>
        <td><input type="display" name="mname" disabled="true" value= "<?php print "$MIDNAME";?>"><br>
        <font color="red">
        <?php if(empty($MIDNAME)) print "* Middle Name required!<br>"; ?>
        </td>
        <td><font size="3">Admit time</td>
        <td>:</td>
        <td><input type="display" name="mname" disabled="true" value= "<?php print "$ADTIME";?>"><br>
        <font color="red">
        <?php if(empty($ADTIME)) print "* Adtime required!<br>"; ?>
        </td>
        </tr>
        <tr>
        <td><font size="3">Civil Status</td>
        <td>:</td>
        <td><input type="display" name="cs" disabled="true" value= "<?php print "$CSTAT";?>"><br>
        <font color="red">
        <?php if(empty($CSTAT)) print "* Civil Status required!<br>"; ?>
        </td>
        <td><font size="3">Age</td>
        <td>:</td>
        <td><input type="display" name="age" disabled="true" value= "<?php print "$AGE";?>"><br>
        <font color="red">
        <?php if(empty($AGE)) print "* Age required!<br>"; ?>
        </td>
        <td><font size="3">Birthday</td>
        <td>:</td>
        <td><input type="display" name="bday" disabled="true" value= "<?php print "$BDAY";?>"><br>
        <font color="red">
        <?php if(empty($BDAY)) print "* Birthday required!<br>"; ?>
        </td>
        </tr>
        <tr>
        <td><font size="3">Address</td>
        <td>:</td>
        <td><input type="display" name="address" disabled="true" value= "<?php print "$AD";?>"><br>
        <font color="red">
        <?php if(empty($AD)) print "* Address required!<br>"; ?>
        </td>
        <td><font size="3">Telephone #</td>
        <td>:</td>
        <td><input type="display" name="telnum" disabled="true" value= "<?php print "$TELNUM";?>"></td>
        <td width="23"><font size="3">Sex</td>
        <td width="3">:</td>
        <td width="174"><input type="display" name="sex" disabled="true" value= "<?php print "$SEX";?>"><br>
        <font color="red">
        <?php if(empty($SEX)) print "* Gender required!<br>"; ?>
        </td>
        </tr>
        <tr>
        <td><font size="3">Pls. Check</td>
        <td>:</td>
        <td><input name="stats1" type="checkbox" id="SSS" value="SSS">SSS</td>
        <td><font size="3"></td>
        <td>:</td>
        <td><input name="stats1" type="checkbox" id="nonmed" value="NonMedicare">Non Medicare</td>
        <td><font size="3"></td>
        <td>:</td>
        <td><input name="stats1" type="checkbox" id="sh" value="stockholder">Stockholder</td>
        </tr>
        <tr>
        <td><font size="3"></td>
        <td></td>
        <td><input name="stats1" type="checkbox" id="gsis" value="GSIS">GSIS</td>
        <td><font size="3"></td>
        <td></td>
        <td><input name="stats1" type="checkbox" id="senior" value="seniorcitizen">Senior-Citizen</td>
        <tr>
        <td><font size="3"></td>
        <td></td>
        <td><input name="stats1" type="checkbox" id="dep" value="dependent">Dependent</td>
        <td><font size="3"></td>
        <td></td>
        <td><input name="stats1" type="checkbox" id="emp" value="employee">Employee</td>
        </tr>
        <tr>
        <td><font size="3">Attending Nurse</td>
        <td>:</td>
        <td><input type="display" name="nurse" disabled="true" value= "<?php print "$NURSE";?>"><br>
        <font color="red">
        <?php if(empty($NURSE)) print "* Admitting/Attending Nurse required!<br>"; ?>
        </td>
        </tr>
        <tr>
        <td>&nbsp;</td>
        <td>&nbsp;</td>
        <td><input type="button" value="Back" onClick="history.go(-1);return true;">
        <?php
        $val1 = $_POST['NURSE'];
        if($_POST['NURSE'] !="") {
        ?>
        <form action="aisaction.php" method="POST" target="_window">
        <input type="hidden" name="submit" value="yes">
        <input type="submit" value="submit">
        </form>
        <?php } ?>
        </td>
        </td>
        </tr>
        </table>
        </td>
        </form>
        </tr>
        </table>
        </head>
        </html>

    Read the article

< Previous Page | 98 99 100 101 102 103 104 105 106 107 108 109  | Next Page >