Search Results

Search found 912 results on 37 pages for 'massive'.


  • The Partner Perspective from Oracle OpenWorld 2012 - IDC’s Darren Bibby report

    - by Richard Lefebvre
    Below is IDC’s Darren Bibby report on ‘The Partner Perspective from Oracle OpenWorld 2012’. If you missed the 2012 edition, I trust this will give you the willingness to attend next year one! October 26, 2012 I attended my fourth Oracle OpenWorld earlier in October. I always go in with the lens of, "What's in it for partners this year?" Although it's primarily thought of as a customer event - and yes, the bulk of the almost 50,000 attendees are customers - this year's conference was clearly the largest and most important partner event Oracle has ever run. Oracle PartnerNetwork (OPN) Exchange There were more partner attendees than ever, with Oracle citing somewhere around 5000. But the format for partners this year was different. And it was better. Traditionally, Oracle hosts a one-day only Partner Forum on the Sunday before the customer-focused conference begins. This year, the partner content still began on the Sunday, but the worldwide alliances and channels group created an exclusive track throughout the week, just for partners. It featured content specifically targeted towards partners, and was anchored at a nearby hotel. This was a great move for Oracle. The Oracle PartnerNetwork (OPN) team has been in a tricky position for years in that they have enough partners that they need a landmark event in the year, but perhaps not enough to justify a separate, worldwide, large, partner-only event. Coinciding a four day event with Oracle OpenWorld, where anybody who's anybody in the Oracle world attends anyway, is a good solution. The channels leadership team can build from this success for an even better conference next year. It's expected that they will follow a similar strategy. Cloud Announcements for Partners As for the content, it was primarily about the Cloud. For customers, for VARs, for ISVs, for everyone. There were five key Cloud related announcements for partners at the event: Cloud Builder Specialization. This is one of the first broader Specializations that isn't focused on one unique product. It is a designation for partners that offer design and implementation services for private cloud solutions. As such, it will surely be something that nearly every partner will consider, and many will pursue. New Specializations for Cloud Services. Unlike the broad, almost "strategy-level" Specialization above, there are a group of new product-based "merit badges" for many of the new Cloud offerings. Think about a Specialization for the Cloud version of HCM, for instance. Each of these particular specializations will also have Rapid Start implementation methodologies that allow a partner to offer a fixed scope and fixed price bid to customers. Based on the learnings from Oracle Consulting, this means a partner might be able to deliver Cloud HCM in six weeks for a fixed price. In the end, this means more consistent experiences for Oracle customers. Cloud Resale Program. For those partners who achieve one of these Cloud Specializations, it will mean they can actually resell the subscription-based Cloud product. This is important because it has been somewhat of a rarity in the emerging Cloud channel for partners to be able to "take the paper", take the revenue, do the billing, be first line of support etc. This is an important step for Oracle and one the partners will be happy to see. Cloud Referral Program. 
For those partners who are not as engaged with these specific Cloud products that the Specializations revolve around, there is a new referral program that provides an incentive to recommend Oracle Cloud products. This one-two punch of referral and resale programs is similar in many ways to other vendors who allow more committed partners to resell, while more casual partners can collect fees. It's the model that seems to work. The key to allow a company to resell a subscription product - something that is inherently delivered directly between the vendor and customer - is trust. Achieving a specialization is a good bar to have to meet. Platform as a Service for ISVs. Leveraging some of the overall announcements made by CEO Larry Ellison around a cloud version of its famous database, Oracle also outlined a new ability for ISVs to build cloud services on its new PaaS offering. Details were less available for this announcement, though it's an expected and fitting play for ISVs comfortable with Oracle technology who can now more easily build out cloud applications. There wasn't much talk of an app store to go along with this, but surely it's in the works. Specializations And "The Gap" Coming back to Specializations, Oracle PartnerNetwork (OPN) has 4600 partners worldwide that hold 20,000 Specializations. These are impressive numbers just three years into the new OPN framework. The actual number of Specializations has also grown significantly, up to 111 today and soon around 125 or so with the new Cloud designations. Oracle may need to look at grouping some of these and creating higher level, broader designations that partners could achieve by earning several Specializations in that group. At 125 and growing, this is a lot. On the top of the pyramid, Hitachi Ltd. successfully became the eleventh partner to make it to the highly prestigious Diamond level. Partner programs partially exist in order to recognize capable partners. And it's more than abundantly clear that the Diamond level does this. But I think Oracle has a gap. Specializations show capability in a very specific product area, and all sizes of partners can achieve these. The next level at which to show a level of expertise is the Advanced Specialization. However, this is a massive step up from the regular Specialization. The advanced level requires 50 people to have certification in that particular product area. Most other industry programs have similar higher level statuses, but none are even close to that number. Whereas a customer who sees an Oracle partner with an advanced specialization can be very sure of capability, there is a gap in that there are hundreds or even thousands of 20-50 person solution providers who are top notch in their area of expertise. They will never get to Advanced due to numbers alone. These boutique partners don't really have a way of showing off their talents in the current program. Advanced may not need to be so high to really show that a company has deep expertise. Overall it was a very successful Oracle OpenWorld for Oracle partners of all sizes. There was progress made on making it a bigger and more relevant event. And also on catching up and maybe even leading in some cases with cloud opportunities for partners.

    Read the article

  • Windows Azure Evolution – Welcome to VS2012

    - by Shaun
    When Microsoft released the first preview version of Windows 8 and Visual Studio 2012, many people in the community asked whether the Windows Azure tools were available for it. The answer was “NO”. At that point the Windows Azure tools only supported Visual Studio 2010, with support promised once Visual Studio 2012 was finally released. Now, along with the newly published Windows Azure platform, we have the latest Windows Azure SDK 1.7, which is compatible with Visual Studio 2012 RC. You can retrieve the latest version of the Windows Azure SDK through the Web Platform Installer, which I think is the easiest and simplest way to download and install it, since besides the SDK itself it also needs some other components. To download the latest Windows Azure SDK from the Web Platform Installer, just go to the Windows Azure website, click Develop, then .NET, and click the blue “install” button. Then you need to select which version of Visual Studio you want to use, Visual Studio 2010 or Visual Studio 2012 RC. After selecting the version you will download an EXE file. This file will lead you through installing the Web Platform Installer 4.0 (if you haven’t installed it already) and the latest Windows Azure SDK. You can see the version name is June 2012, 1.7. Finally the WebPI will detect the dependent components you need to download and begin to install them. If you want to challenge yourself you can download the components and install them manually. The standalone installers are listed on this page with instructions on how to install them and the necessary prerequisites.   Once you have finished the installation you can open Visual Studio 2012 RC and, as usual, it needs to be run as administrator. If you click the New Project link from the start page and navigate to the Cloud category you will find that there is no project template available. Is there anything wrong? No: if you change the target framework from the default .NET 4.5 to .NET 4 you will see the Azure project template. This is because, currently, the Windows Azure instances do not support .NET 4.5. After clicking OK you will see the role creation window, which is similar to what you have seen before. But there are some new role templates in this SDK. Firstly, an ASP.NET MVC 4 web role is available, which means you can create ASP.NET MVC 4 applications for internet, intranet, mobile and Web API on the cloud. Then there are two new worker role templates, “Cache Worker Role” and “Worker Role with Service Bus Queue”. “Worker Role with Service Bus Queue” is a worker role which has the necessary references added to access the Windows Azure Service Bus Queue. It also has some basic sample code in the worker role class that reads messages from the queue when the role starts. The “Cache Worker Role” is a worker role which has the in-memory distributed cache feature enabled by default. This feature is different from Windows Azure Caching: it allows role instances to use their memory as an in-memory distributed cache cluster. Using this feature you can have one or more worker roles acting as dedicated cache clusters. Alternatively, you can use part of your web role’s and worker role’s memory as the cache cluster as well. Let’s just create an ASP.NET MVC 4 Web Role and press F5 to run it under the local emulator. If you have been working with Azure for a while you will know that you normally need to set up the local storage emulator before running locally on a fresh Azure SDK installation. 
    But in this version, when we start our Azure project, Visual Studio checks whether the storage emulator has been initialized. If not, it runs the initializer automatically. As you can see, in this version the storage emulator relies on the SQL Server 2012 LocalDB feature. It creates the emulator database and tables in the default local database. You can set the storage emulator to use a standard SQL Server default instance by using the command “dsinit /instance:.”. The “dsinit” tool is now located at %PROGRAM FILES%\Microsoft SDKs\Windows Azure\Emulator\devstore. After Visual Studio has compiled and deployed the package, our website should be shown in the browser. This is the MVC 4 Web Role home page on my Windows 8 machine in IE10. Another thing you might notice is that, in this version, the compute emulator uses IIS Express to host the web roles instead of the full IIS. You can add breakpoints in the code and debug, and you can use the local storage emulator to test your code for accessing the storage service. All of this is the same as what you are doing now on SDK 1.6. You can switch to IIS to run your web role in the local emulator: just open the Windows Azure project property window and, on the Web page, select “Use IIS Web Server”. For more information about this please have a look at Nuno’s blog post. In the role property page in Visual Studio there are no massive changes. You can configure your role settings such as the endpoints, certificates, local storage, etc. One thing that was added is the Caching tab. Here you can specify whether to enable the caching feature, and how much memory you want to use as the cache cluster. I will introduce more details about it in future posts. The publish and package features are also unchanged. You can publish your project to Azure directly through Visual Studio 2012, or you can create the package and upload it manually. Below is the SDK version of my deployment, which is 1.7.30602.1703, in the developer portal.   Summary In this post I introduced the new Windows Azure SDK 1.7, especially how it works with the latest Visual Studio 2012 RC. There are no significant changes in the Visual Studio tooling in this version, but there are some small enhancements such as ASP.NET MVC 4, the Cache Worker Role, and the use of SQL Server 2012 LocalDB and IIS Express.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach? We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMWare ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive. We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing. But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach: We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well. We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share. Sites will no longer be able to access their content if there's ever an Active Directory problem. In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
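    For reference, the scheduled sync job described above usually boils down to a mirrored robocopy run plus a separate config sync; the paths, share name and log file below are assumptions for illustration only, not the actual WEB1/WEB2 layout:

        rem Run as a scheduled task on WEB1: mirror site content to WEB2
        rem /MIR mirrors the tree (including deletions); /R and /W keep retries short
        robocopy D:\Sites \\WEB2\D$\Sites /MIR /R:2 /W:5 /LOG:D:\Logs\content-sync.log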

    Read the article

  • Wireless AAA for a small, bandwidth-limited hotel.

    - by Anthony Hiscox
    We (the tech I work with and myself) live in a remote northern town where Internet access is somewhat of a luxury, and bandwidth is quite limited. Here, overage charges ranging from few hundreds, to few thousands of dollars a month, is not uncommon. I myself incur regular monthly charges just through my regular Internet usage at home (I am allowed 10G for $60CAD!) As part of my work, I have found myself involved with several hotels that are feeling this. I know that I can come up with something to solve this problem, but I am relatively new to system administration and I don't want my dreams to overcome reality. So, I pass these ideas on to you, those with much more experience than I, in hopes you will share some of your thoughts and concerns. This system must be cost effective, yes the charges are high here, but the trust in technology is the lowest I've ever seen. Must be capable of helping client reduce their usage (squid) Allow a limited (throughput and total usage) amount of free Internet, as this is often franchise policy. Allow a user to track their bandwidth usage Allow (optional) higher speed and/or usage for an additional charge. This fee can be obtained at the front desk on checkout and should not require the use of PayPal or Credit Card. Unfortunately some franchises have ridiculous policies that require the use of a third party remote service to authenticate guests to your network. This means WPA is out, and it also means that I do not auth before Internet usage, that will be their job. However, I do require the ABILITY to perform authentication for Internet access if a hotel does not have this policy. I will still have to track bandwidth (under a guest account by default) and provide the same limiting, however the guest often will require a complete 'unlimited' access, in terms of existence, not throughput. Provide firewalling capabilities for hotels that have nothing, Office, and Guest network segregation (some of these guys are running their office on the guest network, with no encryption, and a simple TOS to get on!) Prevent guests from connecting to other guests, however provide a means to allow this to happen. IE. Each guest connects to a page and allows the other guest, this writes a iptables rule (with python-netfilter) and allows two rooms to play a game, for instance. My thoughts on how to implement this. One decent box (we'll call it a router now) with a lot of ram, and 3 NIC's: Internet Office Guests (AP's + In Room Ethernet) Router Firewall Rules Guest can talk to router only, through which they are routed to where they need to go, including Internet services. Office can be used to bridge Office to Internet if an existing solution is not in place, otherwise, it simply works for a network accessible web (webmin+python-webmin?) interface. Router Software: OpenVZ provides virtualization for a few services I don't really trust. Squid, FreeRADIUS and Apache. The only service directly accessible to guests is Apache. Apache has mod_wsgi and django, because I can write quickly using django and my needs are low. It also potentially has the FreeRADIUS mod, but there seems to be some caveats with this. Firewall rules are handled on the router with iptables. Webmin (or a custom django app maybe) provides abstracted control over any features that the staff may need to access. Python, if you haven't guessed it's the language I feel most comfortable in, and I use it for almost everything. 
    And finally, has this been done, is it an overly massive project not worth taking on for one guy, and/or are there some tools I'm missing that could make my life easier? For the record, I am fairly good with Python, but not very familiar with many other languages (I can struggle through PHP; it's a cosmetic issue there). I am also an avid Linux user, and comfortable with config files and the command line. Thank you for your time, I look forward to reading your responses. Edit: My apologies if this is not a Q&A in the sense that some were expecting; I'm just looking for ideas and to make sure I'm not trying to do something that's already been done. I'm looking at pfSense now as a possible start for what I need.
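    As a rough sketch of the guest-pairing idea above (two rooms opting in to reach each other), assuming the router simply shells out to iptables rather than using python-netfilter, and assuming guests are identified by the IP addresses handed out on the guest network:

        import subprocess

        def _pair_rules(ip_a, ip_b):
            # One ACCEPT rule per direction; all other guest-to-guest traffic is
            # assumed to be dropped by a later catch-all rule in FORWARD.
            for src, dst in ((ip_a, ip_b), (ip_b, ip_a)):
                yield ["iptables", "-I", "FORWARD", "-s", src, "-d", dst, "-j", "ACCEPT"]

        def allow_pair(ip_a, ip_b):
            """Let two guest devices talk to each other (e.g. for a LAN game)."""
            for rule in _pair_rules(ip_a, ip_b):
                subprocess.check_call(rule)

        def revoke_pair(ip_a, ip_b):
            """Remove the pairing again, e.g. on checkout."""
            for rule in _pair_rules(ip_a, ip_b):
                rule[rule.index("-I")] = "-D"  # delete instead of insert
                subprocess.check_call(rule)

        # The Django view behind the opt-in page would call something like
        # allow_pair("10.20.0.15", "10.20.0.23") once both guests consent.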

    Read the article

  • Blogging tips for SQL Server professionals

    - by jamiet
    For some time now I have been intending to put some material together relating my blogging experiences since I began blogging in 2004 and that led to me submitting a session for SQLBits recently where I intended to do just that. That didn’t get enough votes to allow me to present however so instead I resolved to write a blog post about it and Simon Sabin’s recent post Blogging – how do you do it? has prompted me to get around to completing it. So, here I present a compendium of tips that I’ve picked up from authoring a fair few blog posts over the past 6 years. Feedburner Feedburner.com is a service that can consume your blog’s default RSS feed and provide another, replacement, feed that has exactly the same content. You can then supply that replacement feed on your blog site for other people to consume in their RSS readers. Why would you want to do this? Well, two reasons actually: It makes your blog portable. If you ever want to move your blog to a different URL you don’t have to tell your subscribers to move to a different feed. The feedburner feed is a pointer to your blog content rather than being a copy of it. Feedburner will collect stats telling you how many people are subscribed to your feed, which RSS readers they use, stuff like that. Here’s a sample screenshot for http://sqlblog.com/blogs/jamie_thomson/: It also tells you what your most viewed posts are: Web stats like these are notoriously inaccurate but then again the method of measurement here is not important, what IS important is that it gives you a trustworthy ranking of your blog posts and (in my opinion) knowing which are your most popular posts is more important than knowing exactly how many views each post has had. This is just the tip of the iceberg of what Feedburner provides and I recommend every new blogger to try it! Monitor subscribers using Google Reader If for some reason Feedburner is not to your taste or (more likely) you already have an established RSS feed that you do not want to change then Google provide another way in which you can monitor your readership in the shape of their online RSS reader, Google Reader. It provides, for every RSS feed, a collection of stats including the number of Google Reader users that have subscribed to that RSS feed. This is really valuable information and in fact I have been recording this statistic for mine and a number of other blogs for a few years now and as such I can produce the following chart that indicates how readership is trending for those blogs over time: [Good news for my fellow SQLBlog bloggers.] As Stephen Few readily points out, its not the numbers that are important but the trend. Search Engine Optimisation (SEO) SEO (or “How do I get my blog to show up in Google”) is a massive area of expertise which I don’t want (and am unable) to cover in much detail here but there are some simple rules of thumb that will help: Tags – If your blog engine offers the ability to add tags to your blog post, use them. Invariably those tags go into the meta section of the page HTML and search engines lap that stuff up. For example, from my recent post Microsoft publish Visual Studio 2010 Database Project Guidance: Title – Search engines take notice of web page titles as well so make them specific and descriptive (e.g. “Configuring dtsConfig connection strings”) rather than esoteric and meaningless in a vain attempt to be humorous (e.g. “Last night a DJ saved my ETL batch”)! 
Title(2) – Make your title even more search engine friendly by mentioning high level subject areas, not dissimilar to Twitter hashtags. For example, if you look at all of my posts related to SSIS you will notice that nearly all contain the word “SSIS” in the title even if I had to shoehorn it in there by putting it in square brackets or similar. Another tip: if you ARE putting words into your titles in this artificial manner then put them at the end so that they’re not that prominent in search engine results; they’re there for the search engines to consume, not for human beings. Images – Always add titles and alternate text (ALT attribute) to images in your blog post. If you use Windows 7 or Windows Vista then you can use Live Writer (which Simon recommended), which makes this easy for you. Headings – If you want to highlight section headings use heading tags (e.g. <H1>, <H2>, <H3> etc…) rather than just formatting the text appropriately – again, Live Writer makes this easy. These tags give your blog posts structure that is understood by search engines and RSS readers alike. (I believe it makes them more amenable to CSS as well – though that’s not something I know too much about). If you check the HTML source for the blog post you’re reading right now you’ll be able to scan through and see where I have used heading tags. Microsoft provide a free tool called the SEO Toolkit that will analyse your blog site (for free) and tell you what things you should change to improve SEO. Go read more and download for free at Search Engine Optimization Toolkit. Did I mention that it was free? Miscellaneous Tips If you are including code in your blog post then ensure it is formatted correctly. Use SQL Server Central’s T-SQL prettifier for formatting T-SQL code. Use images and videos. Personally speaking there’s nothing I like less when reading a blog than paragraph after paragraph of text. Images make your blog more appealing which means people are more likely to read what you have written. Be original. Don’t plagiarise other people’s content and don’t simply rewrite the contents of Books Online. Every time you publish a blog post tweet a link to it. Include hashtags in your tweet that are more likely to grab people’s attention. That’s probably enough for now - I hope this blog post proves useful to someone out there. If you would appreciate a related session at a forthcoming SQLBits conference then please let me know. This will likely be my last blog post for 2010 so I would like to take this opportunity to thank everyone that has commented on, linked to or read any of my blog posts in that time. 2011 is shaping up to be a very interesting year for SQL Server observers, with the impending release of SQL Server code-named Denali, and I promise I’ll have lots more content on that as the year progresses. Happy New Year. @Jamiet

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising, and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and pre-requisites The major improvement came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre you are requested to provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, armed with this information, I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to get to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I was coming from Flic En Flac towards the north, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean apparently lives in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared against the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was a straightforward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations for our visit: no photography, no touching the buttons, and following his instructions throughout the whole visit. Of course! Inside the data centre Next, he explained to us the multi-factor authentication system, which uses a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and besides co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. It was very impressive to learn about the considerations that went into choosing the right location and setting up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room. 
    And wow, the whole setup is cleverly planned and laid out in the architecture: from the false floors and ceilings that provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions used to analyse and track changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented as two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed as a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE & LION/LION2 out of Mauritius and through several other providers for onward connectivity, so the data centre has redundant internet connectivity. Meanwhile, Arvin managed to join our little group of geeks and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens monitoring all kinds of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. First of all, it should be noted that there is lots of space for full-sized racks and therefore for co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are becoming more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started a conversation about additional services that could be a catalyst for the local market and attract more small and medium companies to include the data centre in their evaluations of online activities. To date, Emtel does not provide virtualised server environments, but there may be plans to cover this field in the future as well. Emtel is first and foremost a mobile operator and internet connectivity provider; entering the market for managed and virtualised server infrastructure, including cloud storage and computing capacity, is rather new for them, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... 
    And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of it: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius More details are thoroughly described in Emtel's brochure about their data centre. Check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, especially Arvin Lockee and Kamlesh Bokhoree, for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • CodePlex Daily Summary for Sunday, June 06, 2010

    CodePlex Daily Summary for Sunday, June 06, 2010New ProjectsActive Worlds Dot Net Wrapper (Based on AwSdk): Active Worlds Dot Net Wrapper (Based on AwSdk)Combina: Smart calculator for large combinatorial calculations.Concurrent Cache: ConcurrentCache is a smart output cache library extending OutputCacheProvider. It consists of in memory, cache files and compressed files modes and...Decay: Personal use. For learningFazTalk: FazTalk is a suite of tools and products that are designed to improve collaboration and workflow interactions. FazTalk takes an innovative approach...grouped: A peer to peer text editor, written in C# [update] I wrote this little thing a while back and even forgot about it, I stopped coding for more tha...HitchARide MVC 2 Sample: An MVC 2 sample written as part of the Microsoft 2010 London Web Camp based on the wireframes at http://schematics.earthware.co.uk/hitcharide. Not...Inspiration.Web: Description: A simple (but entertaining) ASP.NET MVC (C#) project to suggest random code names for projects. Intended audience: People who ne...NetFileBrowser - TinyMCE: tinyMCE file plugin with asp.netOil Slick Live Feeds: All live feeds from BP's Remotely Operated VehiclesParticle Lexer: Parser and Tokenizer libraryPdf Form Tool: Pdf Form Tool demonstrates how the iTextSharp library could be used to fill PDF forms. The input data is provided as a csv file. The application ...Planning Poker Windows Mobile 7: This project is a Planning Poker application for Windows Mobile 7 (and later?). RandomRat: RandomRat is a program for generating random sets that meet specific criteriaScience.NET: A scientific library written in managed code. It supports advanced mathematics (algebra system, sequences, statistics, combinatorics...), data stru...Spider Compiler: Spider Compiler parses the input of a spider programming source file and compiles it (with help of csc.exe; the C#-Compiler) to an exe-file. This p...Sununpro: sunun's project for study by team foundation server.TFS Buddy: An application that manipulates your I-Buddy whenever something happens in your Team Foundation ServerValveSoft: ValveSysWiiMote Physics: WiiMote Physics is an application that allows you to retrieve data from your WiiMote or Balance Board and display it in real-time. It has a number...WinGet: WinGet is a download manager for Windows. You can drag links onto the WinGet Widget and it will download a file on the selected folder. It is dev...XProject.NET: A project management and team collaboration platformNew Releases.NET DiscUtils: Version 0.9 Preview: This release is still under development. New features available in this release: Support for accessing short file names stored in WIM files Incr...Active Worlds Dot Net Wrapper (Based on AwSdk): Active World Dot Net Wrapper (0.0.1.85): Based on AwSdk 85AwSdk UnOfficial Wrapper Howto Use: C# using AwWrapper; VB.Net Import AwWrapperAjaxControlToolkit additional extenders: ZhecheAjaxControls for .NET3.5: Used AJAX Control Toolkit Release Notes - April 12th 2010 Release Version 40412. Fixed deadlock in long operation canceling Some other fixesAnyCAD: AnyCAD.v1.2.ENU.Install: http://www.anycad.net Parametric Modeling *3D: Sphere, Box, Cylinder, Cone •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extr...Community Forums NNTP bridge: Community Forums NNTP Bridge V29: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Concurrent Cache: 1.0: This is the first release for the ConcurrentCache library.Configuration Section Designer: 2.0.0: This is the first Beta Release for VS 2010 supportDoxygen Browser Addin for VS: Doxygen Browser Addin - v0.1.4 Beta: Support for Visual Studio 2010 improved the logging of errors (Event Logs) Fixed some issues/bugs Hot key for navigation "Control + F1, Contr...Folder Bookmarks: Folder Bookmarks 1.6.2: The latest version of Folder Bookmarks (1.6.2), with new UI changes. Once you have extracted the file, do not delete any files/folders. They are n...HERB.IQ: Beta 0.1 Source code release 5: Beta 0.1 Source code release 5Inspiration.Web: Initial release (deployment package): Initial release (deployment package)NetFileBrowser - TinyMCE: Demo Project: Demo ProjectNetFileBrowser - TinyMCE: NetFileBrowser: NetImageBrowserNLog - Advanced .NET Logging: Nightly Build 2010.06.05.001: Changes since the last build:2010-06-04 23:29:42 Jarek Kowalski Massive update to documentation generator. 2010-05-28 15:41:42 Jarek Kowalski upda...Oil Slick Live Feeds: Oil Slick Live Feeds 0.1: A the first release, with feeds from the MS Skandi, Boa Deep C, Enterprise and Q4000. They are live streams from the ROV's monitoring the damaged...Pcap.Net: Pcap.Net 0.7.0 (46671): Pcap.Net - June 2010 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes...sqwarea: Sqwarea 0.0.289.0 (alpha): API supportTFS Buddy: TFS Buddy First release (Beta 1): This is the first release of the TFS Buddy.Visual Studio DSite: Looping Animation (Visual C++ 2008): A solider firing a bullet that loops and displays an explosion everytime it hits the edge of the form.WiiMote Physics: WiiMote Physics v4.0: v4.0.0.1 Recovered from existing compiled assembly after hard drive failure Now requires .NET 4.0 (it seems to make it run faster) Added new c...WinGet: Alpha 1: First Alpha of WinGet. It includes all the planned features but it contains many bugs. Packaged using 7-Zip and ClickOnce.Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSIonics Isapi Rewrite FilterStyleCopsmark C# LibraryFarseer Physics Enginepatterns & practices: Composite WPF and Silverlight

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur. Starting with Oracle Data Integrator 11.1.1.6.0, ODI offers a solution to process Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data. So, what is Big Data? Big Data includes: structured data (data from relational data stores and XML data stores), semi-structured data (data from weblogs) and unstructured data (data from text blobs and images). Traditionally, business decisions are based on the information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data it is important for decision systems to include them in the analysis to achieve better decision making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has challenges. The challenges of processing Big Data include: the volume of data, the velocity of data (the high rate at which data is generated) and the variety of data. In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open source framework: a highly scalable, fault-tolerant system for storing and processing large amounts of data. Hadoop provides two key services: distributed and reliable storage called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce. Innovations in Hadoop and its related technology continue to evolve rapidly, so it is highly recommended to follow information on the web to keep up to date. Oracle's vision is to provide a comprehensive solution to address the challenges posed by Big Data. Oracle is providing the necessary hardware, software and tools for processing Big Data. The Oracle solution includes: Big Data Appliance, Oracle NoSQL Database, the Cloudera distribution for Hadoop, Oracle R Enterprise (R is a statistical package which is very popular among data scientists), the ODI solution for Big Data, and Oracle Loader for Hadoop for loading data from Hadoop to Oracle. Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html ODI Solution for Big Data: ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of Big Data processing in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data. This includes the capability to load data into Hadoop, process data in Hadoop and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs). 
    IKM File to Hive (LOAD DATA) - Load unstructured data from a file (local file system or HDFS) into Hive
    IKM Hive Control Append - Transform and validate structured data on Hive
    IKM Hive Transform - Transform unstructured data on Hive
    IKM File/Hive to Oracle (OLH) - Load processed data in Hive to Oracle
    RKM Hive - Reverse engineer Hive tables to generate models
    Using the Loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a corresponding partitioned Hive table schema. Using the Hive Control Append KM you can validate and transform data in Hive. In the below example, two source Hive tables are joined and mapped to a target Hive table. The Hive Transform KM facilitates processing of semi-structured data in Hive. In the below example, the data from a weblog is processed using a Perl script and mapped to a target Hive table. Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product. ODI greatly enhances OLH capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and KM options. ODI seamlessly invokes OLH when executing the scenario. In the below example, an HDFS file is mapped to a table in Oracle. Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster. The ODI Solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN. 
You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview
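    To make the weblog example above concrete, here is a rough sketch of the kind of Hive DDL and load statement that a file-to-Hive mapping corresponds to conceptually; the table name, columns and partition value are assumptions for illustration only, not output taken from ODI:

        -- Assumed weblog table, partitioned by date as described above
        CREATE TABLE weblogs (
          host    STRING,
          request STRING,
          status  INT,
          bytes   INT
        )
        PARTITIONED BY (log_date STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

        -- Load one day's log file from HDFS into the matching partition
        LOAD DATA INPATH '/data/weblogs/2012-05-01.log'
        INTO TABLE weblogs
        PARTITION (log_date = '2012-05-01');

    The point of IKM File to Hive (LOAD DATA) is that ODI generates and runs this kind of load for you from the interface mapping, so the partitioning scheme stays declared in one place.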

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there’s a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don’t know Andy as well, but I am sure he will do a great job and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted. Not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again?  Yes I would. My first election experience has been a learning experience in itself. So I accept the result and look forward to applying next year. Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic) who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed – when the candidate list was released -- that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels that only Brits and Americans partake in such elections. This is a real shame, and I can’t help thinking why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts. He did raise some valid points. I don’t know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference please see Bill Graziano’ s article here to see how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table. 1.       Use the PASS mission statement to define a tactical objective that engages community leaders into the election process. If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website. “Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning” PASS fulfil this mission statement regularly. Whether you attend SQL Saturday, SQLRally, SQLPASS and BA conference itself. The biggest value of PASS is the ability to bring our profession together. And the 24 hour hop allows you to learn from the comfort of your own office/home. This mission should be extended to define a tactical objectives that bring greater networking and knowledge sharing between PASS Chapter leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could perform a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process 2.       
Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful as we have seen the massive growth of events across the world. For that, I congratulate the board for this success. Perhaps the time is now right to look at solidifying this success, through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective Countries or Regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been a big growth in PASS Chapters and SQL Server Events that was approaching saturation point. The introduction of the Community Engagement Day -- channelled through the SQLBits conference -- has enabled Chapter Leaders to collaborate, connect and share with PASS, Sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASSHQ. This brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have found out; and/or encouraged, Chapter Leaders to put themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASSHQ providing support in providing a room/Lync access for the event to take place. It would be really good if a PASSHQ representative could attend in person as well.   3.       Restrict candidates to serve only a limited number of terms. A frequent comment I saw on social networking was that the elections can be seen by some as a popularity conference. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BOD, other candidates may be encouraged to be more actively involved within the PASS election process. I don’t think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets that stated that more active community members did not apply for the Nom Com. I struggled to understand how the individuals of the tweets measured “more active”. It just also further solidified the subjective nature of elections. In the absence of how candidates are put forward for the elections. Then a restriction of terms enables the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote as to the viability of such a solution. For example, the questions for the vote could be: Should individuals in the Nom Com and BoD be limited to a certain number of terms?  Yes/No. What is the maximum number of terms a candidate could serve?   It would be simple to execute such a vote, and the community will have an opportunity to have a say in an important aspect of the PASS organisation. And is the change is successful, then add it as a byelaw.   So there are some of my thoughts. I am not saying they are right or wrong. 
But I do hope that there is a concerted effort to encourage more candidates from other reaches of the Globe to become involved with future elections.   It would be good to hear your thoughts   Thanks   Chris

    Read the article

  • rsyslogd not monitoring all files

    - by Tom O'Connor
    So.. I've installed Logstash, and instead of using the logstash shipper (because it needs the JVM and is generally massive), I'm using rsyslogd with the following configuration. # Use traditional timestamp format $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat $IncludeConfig /etc/rsyslog.d/*.conf # Provides kernel logging support (previously done by rklogd) $ModLoad imklog # Provides support for local system logging (e.g. via logger command) $ModLoad imuxsock # Log all kernel messages to the console. # Logging much else clutters up the screen. #kern.* /dev/console # Log anything (except mail) of level info or higher. # Don't log private authentication messages! *.info;mail.none;authpriv.none;cron.none;local6.none /var/log/messages # The authpriv file has restricted access. authpriv.* /var/log/secure # Log all the mail messages in one place. mail.* -/var/log/maillog # Log cron stuff cron.* /var/log/cron # Everybody gets emergency messages *.emerg * # Save news errors of level crit and higher in a special file. uucp,news.crit /var/log/spooler # Save boot messages also to boot.log local7.* /var/log/boot.log In /etc/rsyslog.d/logstash.conf there are 28 file monitor blocks using imfile $ModLoad imfile # Load the imfile input module $ModLoad imklog # for reading kernel log messages $ModLoad imuxsock # for reading local syslog messages $InputFileName /var/log/rabbitmq/startup_err $InputFileTag rmq-err: $InputFileStateFile state-rmq-err $InputFileFacility local6 $InputRunFileMonitor .... $InputFileName /var/log/some.other.custom.log $InputFileTag cust-log: $InputFileStateFile state-cust-log $InputFileFacility local6 $InputRunFileMonitor .... *.* @@10.90.0.110:5514 There are 28 InputFileMonitor blocks, each monitoring a different custom application logfile.. 
If I run [root@secret-gm02 ~]# lsof|grep rsyslog rsyslogd 5380 root cwd DIR 253,0 4096 2 / rsyslogd 5380 root rtd DIR 253,0 4096 2 / rsyslogd 5380 root txt REG 253,0 278976 1015955 /sbin/rsyslogd rsyslogd 5380 root mem REG 253,0 58400 1868123 /lib64/libgcc_s-4.1.2-20080825.so.1 rsyslogd 5380 root mem REG 253,0 144776 1867778 /lib64/ld-2.5.so rsyslogd 5380 root mem REG 253,0 1718232 1867780 /lib64/libc-2.5.so rsyslogd 5380 root mem REG 253,0 23360 1867787 /lib64/libdl-2.5.so rsyslogd 5380 root mem REG 253,0 145872 1867797 /lib64/libpthread-2.5.so rsyslogd 5380 root mem REG 253,0 85544 1867815 /lib64/libz.so.1.2.3 rsyslogd 5380 root mem REG 253,0 53448 1867801 /lib64/librt-2.5.so rsyslogd 5380 root mem REG 253,0 92816 1868016 /lib64/libresolv-2.5.so rsyslogd 5380 root mem REG 253,0 20384 1867990 /lib64/rsyslog/lmnsd_ptcp.so rsyslogd 5380 root mem REG 253,0 53880 1867802 /lib64/libnss_files-2.5.so rsyslogd 5380 root mem REG 253,0 23736 1867800 /lib64/libnss_dns-2.5.so rsyslogd 5380 root mem REG 253,0 20768 1867988 /lib64/rsyslog/lmnet.so rsyslogd 5380 root mem REG 253,0 11488 1867982 /lib64/rsyslog/imfile.so rsyslogd 5380 root mem REG 253,0 24040 1867983 /lib64/rsyslog/imklog.so rsyslogd 5380 root mem REG 253,0 11536 1867987 /lib64/rsyslog/imuxsock.so rsyslogd 5380 root mem REG 253,0 13152 1867989 /lib64/rsyslog/lmnetstrms.so rsyslogd 5380 root mem REG 253,0 8400 1867992 /lib64/rsyslog/lmtcpclt.so rsyslogd 5380 root 0r REG 0,3 0 4026531848 /proc/kmsg rsyslogd 5380 root 1u IPv4 1200589517 0t0 TCP 10.10.10.90 t:40629->10.10.10.90:5514 (ESTABLISHED) rsyslogd 5380 root 2u IPv4 1200589527 0t0 UDP *:45801 rsyslogd 5380 root 3w REG 253,3 17999744 2621483 /var/log/messages rsyslogd 5380 root 4w REG 253,3 13383 2621484 /var/log/secure rsyslogd 5380 root 5w REG 253,3 7180 2621493 /var/log/maillog rsyslogd 5380 root 6w REG 253,3 43321 2621529 /var/log/cron rsyslogd 5380 root 7w REG 253,3 0 2621494 /var/log/spooler rsyslogd 5380 root 8w REG 253,3 0 2621495 /var/log/boot.log rsyslogd 5380 root 9r REG 253,3 1064271998 2621464 /var/log/custom-application.monolog.log rsyslogd 5380 root 10u unix 0xffff81081fad2e40 0t0 1200589511 /dev/log You can see that there are nowhere near 28 logfiles actually being read. I really had to get one file monitored, so I moved it to the top, and it picked it up, but I'd like to be able to monitor all 28+ files, and not have to worry. OS is Centos 5.5 Kernel 2.6.18-308.el5 rsyslogd 3.22.1, compiled with: FEATURE_REGEXP: Yes FEATURE_LARGEFILE: Yes FEATURE_NETZIP (message compression): Yes GSSAPI Kerberos 5 support: Yes FEATURE_DEBUG (debug build, slow code): No Atomic operations supported: Yes Runtime Instrumentation (slow code): No Questions: Why is rsyslogd only monitoring a very small subset of the files? How can I fix this so that all the files are monitored?
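A quick way to narrow this down is to run rsyslogd in the foreground with debug output and see how many imfile monitors it actually registers at startup; it is also worth removing the duplicate $ModLoad imklog / $ModLoad imuxsock lines first, since they appear both in the main config and in logstash.conf and a module only needs to be loaded once. A minimal diagnostic sketch, assuming the stock CentOS 5 init script name and the standard rsyslogd -d (debug) / -n (no fork) flags:

# Stop the background daemon, then run rsyslogd in the foreground with debug
# output and grep for the imfile monitors it registers while reading the config.
service rsyslog stop
rsyslogd -dn 2>&1 | tee /tmp/rsyslog-debug.log | grep -i imfile | head -40
# Ctrl-C the foreground instance when done, then bring the service back:
service rsyslog start

If the debug output shows only a handful of monitors being registered, the rsyslog 3.22 imfile implementation itself is the likely culprit; the v5+ packages from the rsyslog repositories reworked imfile considerably, so upgrading may be the simplest fix.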

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory. The Problem: Now, these last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied: Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move it over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but that may have been wrong). I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyway; frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain or 2) is that an unnecessary step? The SSL Certificate Strikes Back When I thought I had the proper SSL certificate assigned, for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain of "mail.domain.com". Is this an issue that will be resolved by problem #2 or is it a separate one entirely? Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time, the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete. 
So I attempted to manually uninstall it following several guides online only to now be stuck unable to launch the installer and have to get my system wiped AGAIN for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" just can't mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>
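One way to take the guesswork out of the certificate mismatch is to look at what the server actually presents on each endpoint, since Outlook (via Autodiscover and RPC over HTTPS) may be hitting a different binding than the browser does. A diagnostic sketch using OpenSSL from any machine that has it (mail.domain.com stands in for the real hostname, as in the question above):

# Show the certificate presented on the HTTPS binding (SNI-aware):
openssl s_client -connect mail.domain.com:443 -servername mail.domain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
# And the certificate offered for SMTP with STARTTLS:
openssl s_client -connect mail.domain.com:25 -starttls smtp </dev/null 2>/dev/null \
  | openssl x509 -noout -subject

If the subject (or subject alternative names) of the certificate Outlook sees does not cover mail.domain.com, the problem lies in which certificate is bound to which service rather than in how the certificate request was generated.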

    Read the article

  • XFS disk becomes unavailable after a while

    - by Guard
    Ubuntu 12.04 (but the same was on 11.10 before upgrading) WD MyBook, 2TB, no RAID (or RAID0, not completely sure, anyway no mirroring, both 1TB disks are in use, mounted as a single device). Formatted to XFS, normally used for big movie files. Connected to Firewire 800. At some point the LED started going up and down as when constantly reading/writing. The device gives access error. When unplugged (cable, then holding the power button for a while, then unplugging the power) and re-connected becomes available. xfs_check with no results. xfs_repair did something, but looks like didn't fix any error. Then after a massive read (checking 1.5GB torrent file for integrity) becomes unavailable again. Any ideas what's wrong? Drives? Cables? Motherboard? OS? UPD: not sure how relevant this is, but here are dmesg output [14380.632816] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled [14380.633356] SGI XFS Quota Management subsystem [14421.812220] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14441.890596] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14441.896858] firewire_core: phy config: card 0, new root=ffc1, gap_count=5 [14453.895347] firewire_core: created device fw1: GUID 0090a99500a35518, S400, 9 config ROM retries [14453.904818] scsi6 : SBP-2 IEEE-1394 [14453.905014] scsi7 : SBP-2 IEEE-1394 [14454.139993] firewire_sbp2: fw1.0: logged in to LUN 0000 (0 retries) [14454.158769] scsi 6:0:0:0: Direct-Access WD My Book 1015 PQ: 0 ANSI: 4 [14454.159251] sd 6:0:0:0: Attached scsi generic sg3 type 0 [14454.162391] firewire_sbp2: fw1.1: logged in to LUN 0001 (0 retries) [14454.167453] sd 6:0:0:0: [sdc] 3907017568 512-byte logical blocks: (2.00 TB/1.81 TiB) [14454.178822] sd 6:0:0:0: [sdc] Write Protect is off [14454.178826] sd 6:0:0:0: [sdc] Mode Sense: 10 00 00 00 [14454.186830] scsi 7:0:0:1: Enclosure WD My Book Device 1015 PQ: 0 ANSI: 4 [14454.186995] scsi 7:0:0:1: Attached scsi generic sg4 type 13 [14454.190078] sd 6:0:0:0: [sdc] Cache data unavailable [14454.190087] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.202176] sd 6:0:0:0: [sdc] Cache data unavailable [14454.202185] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.239940] sdc: [mac] sdc1 sdc2 sdc3 sdc4 [14454.271262] sd 6:0:0:0: [sdc] Cache data unavailable [14454.271270] sd 6:0:0:0: [sdc] Assuming drive cache: write through [14454.271354] sd 6:0:0:0: [sdc] Attached SCSI disk [14454.272149] ses 7:0:0:1: Attached Enclosure device [14606.090024] XFS (sdc3): Mounting Filesystem [14612.048343] XFS (sdc3): Starting recovery (logdev: internal) [14620.697636] XFS (sdc3): Ending recovery (logdev: internal) [14748.120957] e1000e: eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx [14748.120963] e1000e 0000:00:19.0: eth0: 10/100 speed: disabling TSO [14752.568382] uhci_hcd 0000:00:1a.0: PCI INT A disabled [14752.568579] uhci_hcd 0000:00:1a.1: PCI INT B disabled [14752.568738] ehci_hcd 0000:00:1a.7: PCI INT C disabled [14752.568779] ehci_hcd 0000:00:1a.7: PME# enabled [14752.584526] uhci_hcd 0000:00:1d.1: PCI INT B disabled [14752.584689] uhci_hcd 0000:00:1d.2: PCI INT C disabled [14752.680079] ehci_hcd 0000:00:1a.7: BAR 0: set to [mem 0xe4641000-0xe46413ff] (PCI address [0xe4641000-0xe46413ff]) [14752.680104] ehci_hcd 0000:00:1a.7: restoring config space at offset 0xf (was 0x300, writing 0x30b) [14752.680136] ehci_hcd 0000:00:1a.7: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900002) [14752.680170] ehci_hcd 
0000:00:1a.7: PME# disabled [14752.680182] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [14752.680190] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [14752.710334] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [14752.710342] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [14752.749186] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [14752.749194] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [14752.790231] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 22 (level, low) -> IRQ 22 [14752.790239] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [14752.829170] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [14752.829178] uhci_hcd 0000:00:1d.2: setting latency timer to 64
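Before blaming the drives outright, it helps to check the filesystem read-only and watch the kernel log while the heavy read is reproduced, to see whether the errors come from XFS, from the SBP-2/FireWire layer, or from the disk itself. A diagnostic sketch, with device names taken from the dmesg output above (sdc/sdc3) -- they may change after the enclosure is reconnected:

# Unmount first, then do a dry-run check that reports problems without
# writing anything to disk.
umount /dev/sdc3
xfs_repair -n /dev/sdc3        # -n: no-modify mode, report only
# Reproduce the heavy read (e.g. the torrent integrity check), then look for
# I/O, SBP-2 or XFS errors in the kernel log:
dmesg | grep -iE 'sdc|sbp2|firewire|xfs' | tail -50

If the kernel log fills with SBP-2 resets or raw I/O errors rather than XFS complaints, trying the enclosure on its USB port (if it has one) or with another cable/controller is a cheap way to rule the FireWire path in or out.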

    Read the article

  • Emtel Knowledge Series - Q2/2014

    From Cyber Island to Smart Mauritius Cyber Island? Smart Mauritius? - What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of "Smart Cities" has become an urgent necessity. "Smart Cities" refer to an urban transformation which, by using latest ICT technologies makes cities more efficient. Many Governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various researches have shown four key enablers for smart city success - Government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision." The Emtel Knowledge Series goes in compliance with Emtel's 25th anniversary celebrations throughout the year and the master of ceremony, Kim Andersen, mentioned that there will be more upcoming events on a quarterly base. As a representative of the Mauritius Software Craftsmanship Community (MSCC) there was absolutely no hesitation to join in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange with technology experts. But quite frankly what is it with those buzz words... As far as I remember and how it was mentioned "Cyber Island" is an old initiative from around 2005/2006 which has been refreshed in 2010. It implies the empowerment of Information & Communication Technologies (ICT) as an essential factor of growth by the government here in Mauritius. Actually, the first promotional period of Cyber Island brought me here but that's another story. The venue and its own problems Like last time the event was organised and held at the Conference Hall at Cyber Tower I in Ebene. As I've been working there for some years, I know about the frustrating situation of finding a proper parking. So, does Smart Island include better solutions for the search of parking spaces? Maybe, let's see whether I will be able to answer that question at the end of the article. Anyway, after circling around the tower almost two times, I finally got a decent space to put the car, without risking to get a ticket or damage actually. International speakers and their experience Once again, Emtel did a great job to get international expertise onto the stage to share their experience and vision on this kind of embarkment. Personally, I really appreciated the fact they were speakers of global reach and could provide own-experience knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. Additionally, we spoke about the effort and transformation of New York City into a greener and more efficient Smart City. Given modern technology he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to garner new ideas and most probably solid solutions of which no one else thought about before. 
Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius Later during the afternoon, that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of the government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Alongside free higher (tertiary) education, Finland has already produced a good number of innovations, among them: First country to grant voting rights to women Free higher education Constitutional right to broadband connectivity Nokia Linux Angry Birds Sauna and others...  General access to the internet via broadband and/or mobile connectivity is surely a key factor towards Smart Cities, or better said Smart Mauritius given the area dimensions and size of population. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will have in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with recently published information that Mauritians have 127% mobile penetration - meaning more than 1 mobile, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices. How would it be possible to achieve a knowledge-based society? YouTube to the rescue! Seriously, gaining more knowledge will require fast access to educational course material, as explained by Dr Kaviraj Sukon, General Director of the Open University of Mauritius. According to him, a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and the Open University. Nowadays, you're actually able and enabled to learn for and earn a BSc or even MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience! Networking in the name of MSCC As briefly mentioned above, I was about to combine two approaches for this workshop: of course, getting the latest information and updates on Emtel services available, especially for my business here on the west coast of the island, but also meeting and greeting new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key aspects that happened during the day: Met with Arnaud Meslier and Kellie, both from Microsoft, to swap the latest information on IT events. Hereby, I got an invite to the Microsoft Windows Phone 8.1 Dev Camp. Got in touch with Arvin Lockee, Emtel, to check our options to meet with the data team, and seized the opportunity to arrange a visiting tour at the Emtel Data Centre. Had a great chat with Avinash Meetoo, Knowledge 7, Kim Andersen and Mika Jantunen about the situation of teaching and learning in general and specifically in the private sector here in Mauritius. Additionally, a number of various other interesting chats... Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC, and to create a better awareness of MSCC within the local IT businesses. There is more to come soon!  Resume of the day The number of attendees at this event doubled or even tripled this time. 
The whole organisation has been improved massively and the combination of presentation and summarizing panel discussions was better than during the previous workshop back in February. Overall, once again a well-organised workshop and I'm already looking forward to join the next workshop in Q3. Update End of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • MapReduce in DryadLINQ and PLINQ

    - by JoshReuben
    MapReduce See http://en.wikipedia.org/wiki/Mapreduce The MapReduce pattern aims to handle large-scale computations across a cluster of servers, often involving massive amounts of data. "The computation takes a set of input key/value pairs, and produces a set of output key/value pairs. The developer expresses the computation as two Func delegates: Map and Reduce. Map - takes a single input pair and produces a set of intermediate key/value pairs. The MapReduce function groups results by key and passes them to the Reduce function. Reduce - accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. Typically just zero or one output value is produced per Reduce invocation. The intermediate values are supplied to the user's Reduce function via an iterator." the canonical MapReduce example: counting word frequency in a text file.     MapReduce using DryadLINQ see http://research.microsoft.com/en-us/projects/dryadlinq/ and http://connect.microsoft.com/Dryad DryadLINQ provides a simple and straightforward way to implement MapReduce operations. This The implementation has two primary components: A Pair structure, which serves as a data container. A MapReduce method, which counts word frequency and returns the top five words. The Pair Structure - Pair has two properties: Word is a string that holds a word or key. Count is an int that holds the word count. The structure also overrides ToString to simplify printing the results. The following example shows the Pair implementation. public struct Pair { private string word; private int count; public Pair(string w, int c) { word = w; count = c; } public int Count { get { return count; } } public string Word { get { return word; } } public override string ToString() { return word + ":" + count.ToString(); } } The MapReduce function  that gets the results. the input data could be partitioned and distributed across the cluster. 1. Creates a DryadTable<LineRecord> object, inputTable, to represent the lines of input text. For partitioned data, use GetPartitionedTable<T> instead of GetTable<T> and pass the method a metadata file. 2. Applies the SelectMany operator to inputTable to transform the collection of lines into collection of words. The String.Split method converts the line into a collection of words. SelectMany concatenates the collections created by Split into a single IQueryable<string> collection named words, which represents all the words in the file. 3. Performs the Map part of the operation by applying GroupBy to the words object. The GroupBy operation groups elements with the same key, which is defined by the selector delegate. This creates a higher order collection, whose elements are groups. In this case, the delegate is an identity function, so the key is the word itself and the operation creates a groups collection that consists of groups of identical words. 4. Performs the Reduce part of the operation by applying Select to groups. This operation reduces the groups of words from Step 3 to an IQueryable<Pair> collection named counts that represents the unique words in the file and how many instances there are of each word. Each key value in groups represents a unique word, so Select creates one Pair object for each unique word. IGrouping.Count returns the number of items in the group, so each Pair object's Count member is set to the number of instances of the word. 5. Applies OrderByDescending to counts. 
This operation sorts the input collection in descending order of frequency and creates an ordered collection named ordered. 6. Applies Take to ordered to create an IQueryable<Pair> collection named top, which contains the 100 most common words in the input file, and their frequency. Test then uses the Pair object's ToString implementation to print the top one hundred words, and their frequency.   public static IQueryable<Pair> MapReduce( string directory, string fileName, int k) { DryadDataContext ddc = new DryadDataContext("file://" + directory); DryadTable<LineRecord> inputTable = ddc.GetTable<LineRecord>(fileName); IQueryable<string> words = inputTable.SelectMany(x => x.line.Split(' ')); IQueryable<IGrouping<string, string>> groups = words.GroupBy(x => x); IQueryable<Pair> counts = groups.Select(x => new Pair(x.Key, x.Count())); IQueryable<Pair> ordered = counts.OrderByDescending(x => x.Count); IQueryable<Pair> top = ordered.Take(k);   return top; }   To Test: IQueryable<Pair> results = MapReduce(@"c:\DryadData\input", "TestFile.txt", 100); foreach (Pair words in results) Debug.Print(words.ToString());   Note: DryadLINQ applications can use a more compact way to represent the query: return inputTable         .SelectMany(x => x.line.Split(' '))         .GroupBy(x => x)         .Select(x => new Pair(x.Key, x.Count()))         .OrderByDescending(x => x.Count)         .Take(k);     MapReduce using PLINQ The pattern is relevant even for a single multi-core machine, however. We can write our own PLINQ MapReduce in a few lines. the Map function takes a single input value and returns a set of mapped values àLINQ's SelectMany operator. These are then grouped according to an intermediate key à LINQ GroupBy operator. The Reduce function takes each intermediate key and a set of values for that key, and produces any number of outputs per key à LINQ SelectMany again. We can put all of this together to implement MapReduce in PLINQ that returns a ParallelQuery<T> public static ParallelQuery<TResult> MapReduce<TSource, TMapped, TKey, TResult>( this ParallelQuery<TSource> source, Func<TSource, IEnumerable<TMapped>> map, Func<TMapped, TKey> keySelector, Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce) { return source .SelectMany(map) .GroupBy(keySelector) .SelectMany(reduce); } the map function takes in an input document and outputs all of the words in that document. The grouping phase groups all of the identical words together, such that the reduce phase can then count the words in each group and output a word/count pair for each grouping: var files = Directory.EnumerateFiles(dirPath, "*.txt").AsParallel(); var counts = files.MapReduce( path => File.ReadLines(path).SelectMany(line => line.Split(delimiters)), word => word, group => new[] { new KeyValuePair<string, int>(group.Key, group.Count()) });

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That’s right, it’s true. You can use the free version of SharePoint 2010 to meet your document and content management needs and even run your public facing website or an internal knowledge bank.  SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license but it still has enough to cater to your needs to build a document management system and replace age old file shares or folders. I’ve built a dozen content management sites for internal and public use exploiting SharePoint. There are hundreds of web content management systems out there (see CMS Matrix).  On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron etc. which are the most frequently used and on the other hand there are free options like WordPress, Drupal, Joomla, and Plone etc. which are pretty common popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. Infact not a lot of people consider SharePoint’s free version under the free CMS side but its high time organizations benefit from this. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform. Even if you knew that there is a free version of SharePoint, what most people don’t realize is that SharePoint Foundation is a great option for running web sites of all kinds – not just team sites. It is a great option for many reasons, but in reality it is supported by Microsoft, and above all it is FREE (yay!), and it is extremely easy to get started.  From a functionality perspective – it’s hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, health analyzer, support for presence, and MUCH more.I often get asked: “Can I use SharePoint 2010 as a document management system?” The answer really depends on ·          What are your specific requirements? ·          What systems you currently have in place for managing documents. ·          And of course how much money you have J Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it, for others it has been used as a collaborative platform or in many cases an extended intranet. SharePoint 2010 has changed the game slightly as the improvements that Microsoft have made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis, and nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place then it can be a relatively straight forward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information.  This also benefits an organization with regards to how they manage the knowledge that they have, as if all of their information is in one source then it is naturally easier to search and manage. Is SharePoint right for your organization? 
As just discussed, this can only be determined after defining your requirements and also planning a longer term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value would it get for your organization. The amount of data and documents that organizations are creating is increasing rapidly each year. Therefore the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organizations management of their information life cycle. SharePoint is best used for the initial life of business documents where they need to be referenced and accessed after time. It is often beneficial to archive these to overcome for storage and performance issues. FREE CMS – SharePoint, Really? In order to show some of the completely of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since every one trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components: Document Management:   -       CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management.  Version history, exclusive check-out, security, publication, workflow, and so much more.  Content Virtualization:   -       CMS software may provide a means of allowing each user to work within a virtual copy of the entire Web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll-back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint.  The idea of each user having an entire copy of the website virtualized is a bit odd to me – not sure why anyone would need that for anything but the simplest of websites. Automated Templates:   -       Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of site.  Of course, the older brother version of SharePoint – SharePoint Server 2010 – also introduces the concept of Page Layouts which allows page template level customization and even switching the layout of an individual page using different page templates.  I think many organizations really think they want this but rarely end up using this bit of functionality.  Easy Edits:   -       Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode.  Notice the page editing toolbar, the multiple layout options…  It’s actually easier to use than Microsoft Word. Workflow management: -       Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS. 
For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow, it’s in there. In fact, the same workflow engine is running under SharePoint Foundation that is running under the other versions of SharePoint.  The primary difference is that with SharePoint Foundation – you need to configure the workflows yourself.   Web Standards: -       Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in the fourth major iteration under Microsoft with the 2010 release.  In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available. Scalable Expansion:   -       Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install.  Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it. Delegation & Security:  -       Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read @ http://msdn.microsoft.com/en-us/library/ee537811.aspx Content Syndication:  -       CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it.  With RSS syndication and email alerts available out of the box, content syndication is already in the platform. Multilingual Support: -       Ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages. Read More Read more @ http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspxYou can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970

    Read the article

  • Agilist, Heal Thyself!

    - by Dylan Smith
    I’ve been meaning to blog about a great experience I had earlier in the year at Prairie Dev Con Calgary.  Myself and Steve Rogalsky did a session that we called “Agilist, Heal Thyself!”.  We used a format that was new to me, but that Steve had seen used at another conference.  What we did was start by asking the audience to give us a list of challenges they had had when adopting agile.  We wrote them all down, then had everybody vote on the most interesting ones.  Then we split into two groups, and each group was assigned one of the agile challenges.  We had 20 minutes to discuss the challenge, and suggest solutions or approaches to improve things.  At the end of the 20 minutes, each of the groups gave a brief summary of their discussion and learning's, then we mixed up the groups and repeated with another 2 challenges. The 2 groups I was part of had some really interesting discussions, and suggestions: Unfinished Stories at the end of Sprints The first agile challenge we tackled, was something that every single Scrum team I have worked with has struggled with.  What happens when you get to the end of a Sprint, and there are some stories that are only partially completed.  The team in question was getting very de-moralized as they felt that every Sprint was a failure as they never had a set of fully completed stories. How do you avoid this? and/or what do you do when it happens? There were 2 pieces of advice that were well received: 1. Try to bring stories to completion before starting new ones.  This is advice I give all my Scrum teams.  If you have a 3-week sprint, what happens all too often is you get to the end of week 2, and a lot of stories are almost done; but almost none are completely done.  This is a Bad Thing.  I encourage the teams I work with to only start a new story as a very last resort.  If you finish your task look at the stories in progress and see if there’s anything you can do to help before moving onto a new story.  In the daily standup, put a focus on seeing what stories got completed yesterday, if a few days go by with none getting completed, be sure this fact is visible to the team and do something about it.  Something I’ve been doing recently is introducing WIP (Work In Progress) limits while using Scrum.  My current team has 2-week sprints, and we usually have about a dozen or stories in a sprint.  We instituted a WIP limit of 4 stories.  If 4 stories have been started but not finished then nobody is allowed to start new stories.  This made it obvious very quickly that our QA tasks were our bottleneck (we have 4 devs, but only 1.5 testers).  The WIP limit forced the developers to start to pickup QA tasks before moving onto the next dev tasks, and we ended our sprints with many more stories completely finished than we did before introducing WIP limits. 2. Rather than using time-boxed sprints, why not just do away with them altogether and go to a continuous flow type approach like KanBan.  Limit WIP to keep things under control, but don’t have a fixed time box at the end of which all tasks are supposed to be done.  This eliminates the problem almost entirely.  At some points in the project (releases) you need to be able to burn down all the half finished stories to get a stable release build, but this probably occurs less often than every sprint, and there are alternative approaches to achieve it using branching strategies rather than forcing your team to try to get to Zero WIP every 2-weeks (e.g. 
when you are ready for a release, create a new branch for any new stories, but finish all existing stories in the current branch and release it). Trying to Introduce Agile into a team with previous Bad Agile Experiences One of the agile adoption challenges somebody described, was he was in a leadership role on a team he had recently joined – lets call him Dave.  This team was currently very waterfall in their ALM process, but they were about to start on a new green-field project.  Dave wanted to use this new project as an opportunity to do things the “right way”, using an Agile methodology like Scrum, adopting TDD, automated builds, proper branching strategies, etc.  The problem he was facing is everybody else on the team had previously gone through an “Agile Adoption” that was a horrible failure.  Dave blamed this failure on the consultant brought in previously to lead this agile transition, but regardless of the reason, the team had very negative feelings towards agile, and was very resistant to trying it out again.  Dave possibly had the authority to try to force the team to adopt Agile practices, but we all know that doesn’t work very well.  What was Dave to do? Ultimately, the best advice was to question *why* did Dave want to adopt all these various practices. Rather than trying to convince his team that these were the “right way” to run a dev project, and trying to do a Big Bang approach to introducing change.  He would be better served by identifying problems the team currently faces, have a discussion with the team to get everybody to agree that specific problems existed, then have an open discussion about ways to address those problems.  This way Dave could incrementally introduce agile practices, and he doesn’t even need to identify them as “agile” practices if he doesn’t want to.  For example, when we discussed with Dave, he said probably the teams biggest problem was long periods without feedback from users, then finding out too late that the software is not going to meet their needs.  Rather than Dave jumping right to introducing Scrum and all it entails, it would be easier to get buy-in from team if he framed it as a discussion of existing problems, and brainstorming possible solutions.  And possibly most importantly, don’t try to do massive changes all at once with a team that has not bought-into those changes.  Taking an incremental approach has a greater chance of success. I see something similar in my day job all the time too.  Clients who for one reason or another claim to not be fans of agile (or not ready for agile yet).  But then they go on to ask me to help them get shorter feedback cycles, quicker delivery cycles, iterative development processes, etc.  It’s kind of funny at times, sometimes you just need to phrase the suggestions in terms they are using and avoid the word “agile”. PS – I haven’t blogged all that much over the past couple of years, but in an attempt to motivate myself, a few of us have accepted a blogger challenge.  There’s 6 of us who have all put some money into a pool, and the agreement is that we each need to blog at least once every 2-weeks.  The first 2-week period that we miss we’re eliminated.  Last person standing gets the money.  So expect at least one blog post every couple of weeks for the near future (I hope!).  
And check out the blogs of the other 5 people in this blogger challenge: Steve Rogalsky: http://winnipegagilist.blogspot.ca Aaron Kowall: http://www.geekswithblogs.net/caffeinatedgeek Tyler Doerkson: http://blog.tylerdoerksen.com David Alpert: http://www.spinthemoose.com Dave White: http://www.agileramblings.com (note: site not available yet.  should be shortly or he owes me some money!)

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator  called SmartAssembly.  Various features of it use a database to store various things (exception reports, name-mappings, etc.) The user is given the option of using either a SQL-Server database (which requires them to have Microsoft SQL Server), or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this fact, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology: Parameter Number of users % of total users Number of sessions Number of usages SQL Server 28 19.0 8115 8115 MDB 114 77.6 1449 1449 (As a disclaimer, please note than SmartAssembly has far more than 132 users . This data is just a selection of one build) So, it would appear that SQL-Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me: Only the original developers understand the data What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided. Most of the data is from uninterested users About half of people who download and run a free-trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: Are people are using X because it's the default or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun. You can't find out why they used this feature Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About-Box. "Ah, that's interesting!" rather than "Ah, that's actionable!" People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors' . But unless the statistic is both SUPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing. and then forgotten. Unless a piece-of-data could change things, it's useless collecting it. 
People get obsessed with significance levels The first things that lots of people do with this data is do a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!") Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend. Confirmation bias prevents objectivity If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give-up and move-on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence?  I don't. There's always multiple plausible ways to interpret/action any data Let's say we segment the above data, and get this data: Post-trial users (i.e. those using a paid version after the 14-day free-trial is over): Parameter Number of users % of total users Number of sessions Number of usages SQL Server 13 9.0 1115 1115 MDB 5 4.2 449 449 Trial users: Parameter Number of users % of total users Number of sessions Number of usages SQL Server 15 10.0 7000 7000 MDB 114 77.6 1000 1000 How do you interpret this data? It's one of: Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB-support. Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it.  Therefore, spend loads of money improving it, and think about ditching SQL-Server support. People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are. We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users. To choose an interpretation you need to segment again. And again. And again, and again. Opting-out is correlated with feature-usage Metrics tends to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt-in to metrics (often called 'customer improvement program' or something like that). Casual trial-users who are uninterested in your product or company are less likely to opt-in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows? It's not all doom and gloom. There are some things metrics can answer well. Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows? Minor optimizations.  Is the text-box big enough for average user-input? Performance data. How long does our app take to start? How many databases does the average user have on their server? As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action. Conclusion Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.

    Read the article

  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they’re back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what’s now the Sun Systems Group division of Oracle. Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an “oldfind” program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to workaround. In Solaris 11 though, we still ship the find descended from SVR4 as /usr/bin/find and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of if we should add oldfind, and if so what should we call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command – for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case however, that seemed like it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind not of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as “gold find” than as “g old find”. One of the concerns we often discuss in ARC is if a change is likely to be understood by users or if it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn’t resist putting forth a hypothetical support call for this command: “Hello, Oracle Solaris Support, how may I help you?” “My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn’t find gold.” “Did he get the binutils package too?” “No he just said findutils, do we need binutils?” “Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package.” “How much does Oracle charge for that package?” “It’s free for Solaris users.” “You mean Oracle ships packages of gold to customers for free?” “Yes, if you get the binutils package, it includes GNU gold.” “New gold? Is that some sort of alchemy, turning stuff into gold?” “Not new gold, gold from the GNU project.” “Oracle’s taking gold from the GNU project and shipping it to me?” “Yes, if you get binutils, that package includes gold along with the other tools from the GNU project.” “And GNU doesn’t mind Oracle taking their gold and giving it to customers?” “No, GNU is a non-profit whose goal is to share their software.” “Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?” “Oh, Google donated it to them.” “Ah! So Oracle will give me the gold that GNU got from Google!” “Yes, if you get the package from us.” “How do I get the package with the gold?” “Just run pkg install binutils and it will put it on your disk.” “We’ve got multiple disks here - which one will it put it on?” “The one with the system image - do you know which one that is? 
“Well the note from the admin says the system is on the first disk and the users are on the second disk.” “Okay, so it should go on the first disk then.” “And where will I find the gold?” “It will be in the /usr/bin directory.” “In the user’s bin? So thats on the second disk?” “No, it would be on the system disk, with the other development tools, like make, as, and what.” “So what’s on the first disk?” “Well if the system image is there the commands should all be there.” “All the commands? Not just what?” “Right, all the commands that come with the OS, like the shell, ps, and who.” “So who’s on the first disk too?” “Yes. Did your admin say when he’d be back?” “No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me.” “I can’t imagine why.” “Oh, is why a command too?” “No, _why was a Ruby programmer.” “Ruby? Do you give those away with the gold too?” “Yes, but it comes in the ruby package, not binutils.” “Oh, I’ll have to have my admin get that package too! Thanks!” Needless to say, we decided this might not be the best idea. Since the GNU package hasn’t had to release a serious bug fix in the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so it didn’t seem that adding oldfind was really necessary, so we passed on including it when we update to the new findutils release. [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can’t explain why goldfind is the old GNU find, but gold is the new GNU ld.]

    Read the article

  • Too many Bind query (cache) denied, DNS attack?

    - by Jake
    Once Bind crashed and I did: tail -f /var/log/messages I see a massive number of logs every second. Is this a DNS attack? or is there something wrong? Sometimes I see a domain in logs like this: dOmAin.com (upper and lower). As you see there is only one single domain in the logs with different IPs Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied /etc/resolv.conf ; generated by /sbin/dhclient-script nameserver 4.2.2.4 nameserver 8.8.4.4 Bind config: // generated by named-bootconf.pl options { directory "/var/named"; /* * If there is a firewall between you and nameservers you want * to talk to, you might need to uncomment the query-source * directive below. Previous versions of BIND always asked * questions using port 53, but BIND 8.1 uses an unprivileged * port by default. */ // query-source address * port 53; allow-transfer { none; }; allow-recursion { localnets; }; //listen-on-v6 { any; }; notify no; }; // // a caching only nameserver config // controls { inet 127.0.0.1 allow { localhost; } keys { rndckey; }; }; zone "." 
IN { type hint; file "named.ca"; }; zone "localhost" IN { type master; file "localhost.zone"; allow-update { none; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "named.local"; allow-update { none; }; };
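
    A quick way to judge whether this is a reflection/flood pattern or just a few misconfigured resolvers is to summarize the denied queries by client address and query name. A minimal sketch, assuming the entries land in /var/log/messages in the format shown above:

        # Count denied cache queries per (client, name) pair and show the top offenders.
        grep "query (cache) '" /var/log/messages \
          | sed -n "s/.*client \([0-9a-fA-F.:]*\)#.*query (cache) '\([^']*\)'.*/\1 \2/p" \
          | sort | uniq -c | sort -rn | head -20

    If a handful of source addresses dominate, blocking them at the firewall is usually enough; if the sources are spread widely but all ask for the same name, it looks more like spoofed-source traffic aimed at that domain, and the "denied" responses mean the server is already refusing to recurse for them because of the allow-recursion { localnets; } setting.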

    Read the article

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on luks over software raid5. The filesystem was operating "just fine" for several years when I was beginning to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the luks container, and then when I unmounted and tried to resize2fs I was given the message the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of inode being removed type messages (can't remember exactly) I killed e2fsck and tried to remount the filesystem to backup data I was concerned about. When trying to mount at this point I get: # mount /dev/mapper/candybox /candybox mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so Looking back at my older logs I noticed the filesystem was giving this error each time the machine booted: kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another) and each attempt left this in my log: EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729) EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845) ... EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098) BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939] Attempts to restart e2fsck results in: # e2fsck /dev/mapper/candybox e2fsck 1.41.14 (22-Dec-2010) e2fsck: Group descriptors look bad... trying backup blocks... candy: recovering journal e2fsck: unable to set superblock flags on candy At this point, I decided it best to order some more drives and make an image using ddrescue Now two weeks later I have an image of the luks partition in a .img file. # ls -lh total 14T -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile After numerous attempts using everything I could find online I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S and I was able to mount the dirty filesystem readonly without the journal and recover 960G of data. It gave all kinds of errors of various directories not existing and so forth but I was able to get some stuff. Which gave me some hope! I then ran e2fsck again and it had to recreate the root inode and gave a massive list of correcting group counts, I accepted the root inode creation and said no to everything else, leaving a completely empty filesystem. Re-ran again and said yes to all questions with the same result but now a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again, I can't come up with any other methods other than dropping to something like photorec which will give me an absolute mess with how large the filesystem was. I'm willing to re-copy the image from the original array and start over, if I can get any suggestions or ideas on a way to get more of my files back. 
I wish I could give more detailed logs of the commands I have run, but the output has long since scrolled past except for what was logged to syslog, and my memory of the details has faded given the timeframe this has all occurred over. Any help is greatly appreciated!
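
    If you do re-copy the image from the array, one low-risk thing to try before handing it to e2fsck again is pulling files out with debugfs, which can often walk a damaged ext4 filesystem without modifying it. A minimal sketch, assuming candybox.img is a fresh copy of the opened (decrypted) device, your e2fsprogs has the rdump command, and /recovery has enough free space; the image path is a placeholder:

        # -c opens the filesystem in "catastrophic" mode, read-only, skipping the
        # damaged inode/block bitmaps instead of refusing to open it.
        mkdir -p /recovery
        debugfs -c -R 'rdump / /recovery' /path/to/candybox.img

    An ls -lR of /recovery afterwards gives a rough sense of how much is still reachable before deciding whether another e2fsck pass or something like photorec is worth the time.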

    Read the article

  • 1600+ 'postfix-queue' processes - OK to have this many?

    - by atomicguava
    I have a Plesk 9.5.4 CentOS server running Postfix. I had been having massive problems with the mailq being full of 'double-bounce' email messages containing errors relating to 'Queue File Write Error', but I believe these are now fixed thanks to this thread. My new problem is that when I run top, I can see lots of processes called 'postfix-queue' and have fairly high load: top - 13:59:44 up 6 days, 21:14, 1 user, load average: 2.33, 2.19, 1.96 Tasks: 1743 total, 1 running, 1742 sleeping, 0 stopped, 0 zombie Cpu(s): 5.1%us, 8.8%sy, 0.0%ni, 85.3%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 3145728k total, 1950640k used, 1195088k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1324 apache 16 0 344m 33m 5664 S 21.7 1.1 0:03.17 httpd 32443 apache 15 0 350m 36m 6864 S 14.4 1.2 0:13.83 httpd 1678 root 15 0 13948 2568 952 R 2.0 0.1 0:00.37 top 1890 mysql 15 0 689m 318m 7600 S 1.0 10.4 219:45.23 mysqld 1394 apache 15 0 352m 41m 5972 S 0.7 1.3 0:03.91 httpd 1369 apache 15 0 344m 33m 5444 S 0.3 1.1 0:02.03 httpd 1592 apache 15 0 349m 37m 5912 S 0.3 1.2 0:02.52 httpd 1633 apache 15 0 336m 20m 1828 S 0.3 0.7 0:00.01 httpd 1952 root 19 0 335m 28m 10m S 0.3 0.9 1:35.41 httpd 1 root 15 0 10304 732 612 S 0.0 0.0 0:04.41 init 1034 mhandler 15 0 11520 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1036 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1041 mhandler 17 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1043 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1063 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1068 mhandler 15 0 11516 1128 860 S 0.0 0.0 0:00.00 postfix-queue 1071 mhandler 17 0 11512 1152 884 S 0.0 0.0 0:00.00 postfix-queue 1072 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1081 mhandler 16 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1082 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1089 popuser 15 0 33892 1972 1200 S 0.0 0.1 0:00.02 pop3d 1116 mhandler 16 0 11516 1164 884 S 0.0 0.0 0:00.00 postfix-queue 1117 mhandler 15 0 11516 1124 860 S 0.0 0.0 0:00.00 postfix-queue 1120 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1121 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1130 mhandler 17 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1131 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1149 root 17 -4 12572 680 356 S 0.0 0.0 0:00.00 udevd 1181 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1183 mhandler 15 0 11512 1116 860 S 0.0 0.0 0:00.00 postfix-queue 1224 mhandler 16 0 11516 1160 884 S 0.0 0.0 0:00.00 postfix-queue 1225 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1228 apache 15 0 345m 34m 5472 S 0.0 1.1 0:04.64 httpd 1241 mhandler 16 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1242 mhandler 15 0 11512 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1251 mhandler 17 0 11516 1156 884 S 0.0 0.0 0:00.00 postfix-queue 1252 mhandler 15 0 11516 1120 860 S 0.0 0.0 0:00.00 postfix-queue 1258 apache 15 0 349m 37m 5444 S 0.0 1.2 0:01.28 httpd When I run ps -Al | grep -c postfix-queue it returns 1618! My question is this: is this normal or is there something else going wrong with Postfix? Right now, if I run mailq it is empty, and qshape deferred / qshape active are empty too. Thanks in advance for your help.
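
    Before deciding whether 1600+ of these is normal, it may help to see how old the postfix-queue processes are and what they are blocked on; they appear to be Plesk's per-message mail handler wrappers, so long-lived ones usually point at a stuck handler rather than genuine mail volume. A rough sketch (the PID selection is just for illustration):

        # List the postfix-queue processes with their parent, owner and age.
        ps -eo pid,ppid,user,etime,args | grep '[p]ostfix-queue' | head -20

        # For a few of them, check what state they are in and what they hold open.
        for pid in $(pgrep -f postfix-queue | head -5); do
            echo "== $pid =="
            grep -i '^State' /proc/$pid/status
            ls -l /proc/$pid/fd | head
        done

    If they are all hours old and sleeping on a pipe, restarting Postfix and the Plesk mail services and then watching whether the count climbs back up is more informative than the absolute number.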

    Read the article

  • Apache Getting Bogged Down By Certain Script (Wp-Cron.php) - How To Kill Process Automatically

    - by user50037
    I have a server running a number of WordPress blogs, several of which have hundreds or thousands of posts. Every couple of days, the server slows to a crawl due to a WordPress file called wp-cron.php being run. My entire Apache process log turns into this: http://imgur.com/A7K9k.png. Multiply that by quite a bit and the server grinds to a halt. Each process takes up about 1.1% of RAM, and when we have 50 of them on the go it gets insane. Not all of them come from the same blog; they are pretty widespread. In the Apache process page of WHM, they are usually ALL set to the status of "C", which means closing, but they can sit there until they crash the server, and they still hold the memory. Just google "wp-cron.php load" and you will find plenty of people with similar issues. In any case, we think it is down to users adding a tonne of dead "pinglists" to their WordPress installations, which WordPress then loops through endlessly. Problem number 1: does anyone have any other suggestions about what would cause wp-cron.php to loop endlessly? I still think it is down to pings, because all of the people we have contacted about their account load going sky high have had massive ping lists. Problem number 2: even if it is down to excessive pinglists in WordPress, we cannot babysit every single account on the server waiting for it to start spawning wp-cron processes. It often happens overnight, and I start getting SMS alerts at 2am about the load. I have CSF installed, which apparently would have ended the processes if they ran over XXX time, but I have been told that it won't catch these because they end up in the "closing" state (they show up as "C" on the Apache page of WHM), and apparently CSF will only kill processes that are "running", which "C" does not count as. I have seen various other scripts such as http://dltj.org/article/die-apache-die/. I took a look at /proc/<pid>/stat, but I was boggled as to which delimited field was the running time, and whether there was any way to connect it back to an actual Apache request so that I could see what file was being served (and only close connections serving wp-cron.php with a state of "C"). Overall I know Problem 2 glosses over the real cause, and I still put the whole thing down to excessive pinglists in WordPress, but I just cannot sit there and babysit every single installation 24/7, so I need a way to save the server when I am not available. Any help would be much appreciated.
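
    One way to take pressure off the server regardless of the ping-list question is to stop WordPress from firing wp-cron.php on ordinary page views and run it from a real cron job instead, so a slow run cannot pile up behind busy traffic. A sketch rather than a drop-in fix; the domain, schedule and use of wget are assumptions:

        # 1) In each blog's wp-config.php, above the "stop editing" line, add:
        #        define('DISABLE_WP_CRON', true);
        # 2) Then trigger the cron tasks on a schedule you control:
        (crontab -l 2>/dev/null; \
         echo "*/15 * * * * wget -q -O /dev/null http://example.com/wp-cron.php?doing_wp_cron") | crontab -

    With that in place, a runaway pingback loop is confined to the one scheduled hit every 15 minutes instead of being re-spawned by every visitor, which also makes the offending blog much easier to spot.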

    Read the article

  • "power limit notification" clobbering on 12G Dell servers with RHEL6

    - by Andrew B
    Server: Poweredge r620 OS: RHEL 6.4 Kernel: 2.6.32-358.18.1.el6.x86_64 I'm experiencing application alarms in my production environment. Critical CPU hungry processes are being starved of resources and causing a processing backlog. The problem is happening on all the 12th Generation Dell servers (r620s) in a recently deployed cluster. As near as I can tell, instances of this happening are matching up to peak CPU utilization, accompanied by massive amounts of "power limit notification" spam in dmesg. An excerpt of one of these events: Nov 7 10:15:15 someserver [.crit] CPU12: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU6: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU14: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU0: Package power limit notification (total events = 11) Nov 7 10:15:15 someserver [.crit] CPU6: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU14: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU18: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU20: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU2: Package power limit notification (total events = 12) Nov 7 10:15:15 someserver [.crit] CPU10: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Core power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU4: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU16: Package power limit notification (total events = 13) Nov 7 10:15:15 someserver [.crit] CPU20: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU8: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU10: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU22: Package power limit notification (total events = 14) Nov 7 10:15:15 someserver [.crit] CPU15: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU3: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU1: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU5: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU15: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU3: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU1: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU5: Package power limit 
notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU19: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU17: Package power limit notification (total events = 377) Nov 7 10:15:15 someserver [.crit] CPU9: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU21: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU23: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU11: Core power limit notification (total events = 369) Nov 7 10:15:15 someserver [.crit] CPU13: Package power limit notification (total events = 376) Nov 7 10:15:15 someserver [.crit] CPU7: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU19: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU9: Package power limit notification (total events = 374) Nov 7 10:15:15 someserver [.crit] CPU21: Package power limit notification (total events = 375) Nov 7 10:15:15 someserver [.crit] CPU23: Package power limit notification (total events = 374) A little Google Fu reveals that this is typically associated with the CPU running hot, or voltage regulation kicking in. I don't think that's what is happening though. Temperature sensors for all servers in the cluster are running fine, Power Cap Policy is disabled in the iDRAC, and my System Profile is set to "Performance" on all of these servers: # omreport chassis biossetup | grep -A10 'System Profile' System Profile Settings ------------------------------------------ System Profile : Performance CPU Power Management : Maximum Performance Memory Frequency : Maximum Performance Turbo Boost : Enabled C1E : Disabled C States : Disabled Monitor/Mwait : Enabled Memory Patrol Scrub : Standard Memory Refresh Rate : 1x Memory Operating Voltage : Auto Collaborative CPU Performance Control : Disabled A Dell mailing list post describes the symptoms almost perfectly. Dell suggested that the author try using the Performance profile, but that didn't help. He ended up applying some settings in Dell's guide for configuring a server for low latency environments and one of those settings (or a combination thereof) seems to have fixed the problem. Kernel.org bug #36182 notes that power-limit interrupt debugging was enabled by default, which is causing performance degradation in scenarios where CPU voltage regulation is kicking in. A RHN KB article (RHN login required) mentions a problem impacting PE r620 and r720 servers not running the Performance profile, and recommends an update to a kernel released two weeks ago. ...Except we are running the Performance profile... Everything I can find online is running me in circles here. What's the heck is going on?
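
    One thing that may help separate noisy logging from real throttling is to watch whether core clocks actually drop while the notifications are streaming in. A rough monitoring sketch using only dmesg and /proc (the interval and the averaging are arbitrary choices):

        # Every 5 seconds, log the running count of power-limit messages next to the
        # average reported core frequency; a sustained MHz drop during a burst of
        # notifications points at real capping rather than just log spam.
        while true; do
            ts=$(date '+%F %T')
            hits=$(dmesg | grep -c 'power limit notification')
            mhz=$(awk '/^cpu MHz/ {sum += $4; n++} END {printf "%.0f", sum / n}' /proc/cpuinfo)
            echo "$ts notifications=$hits avg_core_mhz=$mhz"
            sleep 5
        done

    If the clocks hold steady through the bursts, this looks mostly like a logging problem and the kernel update mentioned in the RHN article is the obvious next step; if they sag, something below the BIOS profile is still capping power.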

    Read the article

  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long time reader, first time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at stackoverflow, it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd just reach out for the sake of trying at least once. I've experienced this issue while starting up my macports-installed version of irssi: 13:25 -!- Irssi: Error in script dispatch: 13:25 Can't locate lib.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1. 13:25 BEGIN failed--compilation aborted at (eval 18) line 1. 13:25 Huh, strange. I looked into it a bit: [email protected] /opt/local/lib/perl5 ?- find . -name "lib.pm" -ls 14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi- 2level/lib.pm [email protected] /opt/local/lib/perl5 ?- l 5.12.4/darwin-thread-multi-2level total 1864 drwxr-xr-x 55 root admin 1870 28 Jun 19:28 . drwxr-xr-x 158 root admin 5372 28 Jun 19:28 .. -rw-r--r-- 1 root admin 177814 25 Jun 23:39 .packlist drwxr-xr-x 6 root admin 204 28 Jun 19:28 B -r--r--r-- 1 root admin 25714 25 Jun 23:39 B.pm drwxr-xr-x 64 root admin 2176 28 Jun 19:28 CORE drwxr-xr-x 3 root admin 102 28 Jun 19:28 Compress -r--r--r-- 1 root admin 3000 25 Jun 23:39 Config.pm -r--r--r-- 1 root admin 228094 25 Jun 23:39 Config.pod -r--r--r-- 1 root admin 409 25 Jun 23:39 Config_git.pl -r--r--r-- 1 root admin 38759 25 Jun 23:39 Config_heavy.pl -r--r--r-- 1 root admin 21174 25 Jun 23:39 Cwd.pm -r--r--r-- 1 root admin 63535 25 Jun 23:39 DB_File.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 Data drwxr-xr-x 5 root admin 170 28 Jun 19:28 Devel drwxr-xr-x 4 root admin 136 28 Jun 19:28 Digest -r--r--r-- 1 root admin 25185 25 Jun 23:39 DynaLoader.pm drwxr-xr-x 22 root admin 748 28 Jun 19:28 Encode -r--r--r-- 1 root admin 29731 25 Jun 23:39 Encode.pm -r--r--r-- 1 root admin 6736 25 Jun 23:39 Errno.pm -r--r--r-- 1 root admin 5445 25 Jun 23:39 Fcntl.pm drwxr-xr-x 5 root admin 170 28 Jun 19:28 File drwxr-xr-x 3 root admin 102 28 Jun 19:28 Filter -r--r--r-- 1 root admin 1819 25 Jun 23:39 GDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Hash drwxr-xr-x 3 root admin 102 28 Jun 19:28 I18N drwxr-xr-x 11 root admin 374 28 Jun 19:28 IO -r--r--r-- 1 root admin 1404 25 Jun 23:39 IO.pm drwxr-xr-x 6 root admin 204 28 Jun 19:28 IPC drwxr-xr-x 4 root admin 136 28 Jun 19:28 List drwxr-xr-x 4 root admin 136 28 Jun 19:28 MIME drwxr-xr-x 3 root admin 102 28 Jun 19:28 Math -r--r--r-- 1 root admin 2519 25 Jun 23:39 NDBM_File.pm -r--r--r-- 1 root admin 4208 25 Jun 23:39 O.pm -r--r--r-- 1 root admin 15563 25 Jun 23:39 Opcode.pm -r--r--r-- 1 root admin 21011 25 Jun 23:39 POSIX.pm -r--r--r-- 1 root admin 58962 25 Jun 23:39 POSIX.pod drwxr-xr-x 5 root admin 170 28 Jun 19:28 PerlIO -r--r--r-- 1 root admin 2515 25 Jun 23:39 SDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Scalar -r--r--r-- 1 root admin 10837 25 Jun 23:39 Socket.pm -r--r--r-- 1 root admin 41003 25 Jun 23:39 Storable.pm drwxr-xr-x 4 root admin 
136 28 Jun 19:28 Sys drwxr-xr-x 3 root admin 102 28 Jun 19:28 Text drwxr-xr-x 5 root admin 170 28 Jun 19:28 Time drwxr-xr-x 3 root admin 102 28 Jun 19:28 Unicode -r--r--r-- 1 root admin 14462 25 Jun 23:39 attributes.pm drwxr-xr-x 38 root admin 1292 28 Jun 19:28 auto -r--r--r-- 1 root admin 19892 25 Jun 23:39 encoding.pm -r--r--r-- 1 root admin 6853 25 Jun 23:39 lib.pm -r--r--r-- 1 root admin 11044 25 Jun 23:39 mro.pm -r--r--r-- 1 root admin 997 25 Jun 23:39 ops.pm -r--r--r-- 1 root admin 13945 25 Jun 23:39 re.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 threads -r--r--r-- 1 root admin 33283 25 Jun 23:39 threads.pm So, it sort of seems to me that the permissions which perl5 got installed with for these modules has gotten mixed up somehow? I'm not really a perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I'm not really sure how to go about determining how macports has gone and installed perl this way when it's otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason as to why the macports default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!
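
    Rather than permissions, the mismatch between the directories in the error's @INC (darwin-multi-2level) and the directory that actually contains lib.pm (darwin-thread-multi-2level) suggests irssi's perl bindings were built against a different perl build than the one MacPorts currently has installed. A few checks and the usual remedy, as a sketch (the module path is an assumption and may differ on your install):

        # Which perl MacPorts has, and its architecture name:
        /opt/local/bin/perl -V:version -V:archname

        # What irssi and its perl module were linked against, if anything:
        otool -L /opt/local/bin/irssi /opt/local/lib/irssi/modules/*.so 2>/dev/null | grep -i perl

        # If the archnames disagree, rebuilding irssi against the current perl5
        # port is the usual fix:
        sudo port -n upgrade --force irssi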

    Read the article

  • How to resolve "HTTP/1.1 403 Forbidden" errors from iCal/CalDAV server after upgrade to Snow Leopard Server?

    - by morgant
    We recently upgraded our Open Directory Master & Replica to Mac OS X 10.6.4 Snow Leopard Server. We had a mismatched server FQDN & LDAP Search Base/Kerberos Realm, so we exported all users & groups, created the new Open Directory Master w/matching FQDN & Search Base/Realm, reimported users & groups, and re-bound all servers & workstations to the new OD Master. At the same time as all of this, we upgraded our iCal/CalDAV server to Mac OS X 10.6.4 Snow Leopard Server. Ever since doing so, we've seen the following issues with our iCal/CalDAV server and iCal clients on both Mac OS X 10.5 Leopard & Mac OS X 10.6: If a user attempts to move or delete an event (single or repeating) that was created prior to the upgrade to 10.6 Server, they get the following error: Access to "blah" in "blah" in account "blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation. New users added to the directory get the following error when attempting to add their account in iCal's preferences: The user "blah" has no configured principals. Confirm with your network administrator that your account has at least one CalDAV principal configured. Interestingly, we've since discovered that users seem to be able to delete individual events from an old repeating event without error, but that's a massive amount of work to get rid of a repeating event. I will note that we have not yet added an SRV record in DNS as instructed on page 19 of iCal_Server_Admin_v10.6.pdf. Further Investigation: In this particular case, a user is attempting to decline repeating events created prior to the upgrade to Snow Leopard Server. Granting the user full write access with sudo calendarserver_manage_principals --add-write-proxy users:user1 users:user2 (which did work) doesn't allow deletion of the events. We still get the usual error: Access to "blah blah" in "blah blah" in account "blah blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation.
The error that shows up in /var/log/caldavd/error.log on the iCal Server when attempting to delete one of the events is: 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.extensions#info] PUT /calendars/__uids__/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/calendar/XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.ics HTTP/1.1 2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.scheduling.implicit#error] Cannot change ORGANIZER: UID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX And the error in /var/log/system.log on the client is: Mar 17 15:14:30 192-168-21-169-dhcp iCal[33509]: CalDAV CalDAVWriteEntityQueueableOperation failed: status 'HTTP/1.1 403 Forbidden' request:\n\nBEGIN:VCALENDAR^M\nVERSION:2.0^M\nPRODID:-//Apple Inc.//iCal 3.0//EN^M\nCALSCALE:GREGORIAN^M\nBEGIN:VTIMEZONE^M\nTZID:US/Eastern^M\nBEGIN:DAYLIGHT^M\nTZOFFSETFROM:-0500^M\nTZOFFSETTO:-0400^M\nDTSTART:20070311T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU^M\nTZNAME:EDT^M\nEND:DAYLIGHT^M\nBEGIN:STANDARD^M\nTZOFFSETFROM:-0400^M\nTZOFFSETTO:-0500^M\nDTSTART:20071104T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU^M\nTZNAME:EST^M\nEND:STANDARD^M\nEND:VTIMEZONE^M\nBEGIN:VEVENT^M\nSEQUENCE:5^M\nDTSTART;TZID=US/Eastern:20090117T094500^M\nDTSTAMP:20081227T143043Z^M\nSUMMARY:blah blah^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT:urn:uuid^M\n :XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user@d^M\n omain.tld^M\nEXDATE;TZID=US/Eastern:20110319T094500^M\nDTEND;TZID=US/Eastern:20090117T183000^M\nRRULE:FREQ=WEEKLY;INTERVAL=1^M\nTRANSP:OPAQUE^M\nUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nORGANIZER;CN="First Last":mailto:[email protected]^M\nX-WR-ITIPSTATUSML:UNCLEAN^M\nCREATED:20110317T191348Z^M\nEND:VEVENT^M\nEND:VCALENDAR^M\n\n\n... response:\nHTTP/1.1 403 Forbidden^M\nDate: Thu, 17 Mar 2011 19:14:30 GMT^M\nDav: 1, access-control, calendar-access, calendar-schedule, calendar-auto-schedule, calendar-availability, inbox-availability, calendar-proxy, calendarserver-private-events, calendarserver-private-comments, calendarserver-principal-property-search^M\nContent-Type: text/xml^M\nContent-Length: 134^M\nServer: Twisted/8.2.0 TwistedWeb/8.2.0 TwistedCalDAV/2.5 (iCal Server v12.56.21)^M\n^M\n<?xml version='1.0' encoding='UTF-8'?><error xmlns='DAV:'>^M\n <valid-attendee-change xmlns='urn:ietf:params:xml:ns:caldav'/>^M\n</error> One thing I have noticed, and I'm not sure if this has any real effect is that in many of these pre-Snow Leopard Server migration events, the ORGANIZER is specified like the following: ORGANIZER;CN=First Last:mailto:[email protected] But newer ones are more like one of the two following: ORGANIZER;CN=First Last;[email protected];SCHEDULE-STATUS=1 ORGANIZER;CN=First Last;[email protected]:urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX iCal_Server_Admin_v10.6.pdf notes that the ".db.sqlite" files are completely disposable, they're merely a performance cache and are re-built on the fly, so are safe to delete. I did delete the one for the organizer's calendars and it took longer to process the attempted event delete while it rebuilt the database, but still errored out in the end. FWIW the error is thrown by this code: https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/scheduling/implicit.py Any further suggestions? I see lots of questions about this in my Google searches, but not solutions and this is a widespread problem on our iCal Server. 
Now, we mostly try to get users to ignore them (hence the amount of time this question has been open), but every now and then I dig in deeper trying to find the culprit and/or solution.
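
    Since the SRV record from page 19 of iCal_Server_Admin_v10.6.pdf was never added, that is a cheap thing to rule out first. A sketch of the record and a check; the hostnames, domain and port 8443 are placeholders for whatever the server actually uses:

        # The record, added to the DNS zone for example.com:
        #     _caldavs._tcp.example.com.  86400  IN  SRV  0 1 8443  calendar.example.com.
        # Then confirm clients can resolve it:
        dig +short SRV _caldavs._tcp.example.com

    That said, the "Cannot change ORGANIZER" line in caldavd's error.log fits the older events carrying an ORGANIZER address that apparently no longer maps to the same principal after the directory was rebuilt, so the SRV record alone is unlikely to make the 403s on pre-migration events go away.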

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37  | Next Page >