Search Results

Search found 14598 results on 584 pages for 'address'.

Page 562/584 | < Previous Page | 558 559 560 561 562 563 564 565 566 567 568 569  | Next Page >

  • CodePlex Daily Summary for Monday, December 27, 2010

    CodePlex Daily Summary for Monday, December 27, 2010Popular ReleasesRocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suite the implemented features - Create New button functionality is changed - Add Question Handling featuresVCC: Latest build, v2.1.31226.0: Automatic drop of latest buildFxCop Integrator for Visual Studio 2010: FxCop Integrtor 1.1.0: New FeatureSearch violation information FxCop Integrator provides the violation search feature. You can find out specific violation information by simple search expression.Analyze with FxCop project file FxCop Integrator supports code analysis with FxCop project file. You can customize code analysis behavior (e.g. analyze specifid types only, use specific rules only, and so on). ImprovementImproved the code analysis result list to show more information (added Proejct and File column). Change...Flickr Wallpaper Rotator (for Windows desktop): Wallpaper Flickr 1.1: Some minor bugfixes (mostly covering when network connection is flakey, so I discovered them all while at my parents' house for Christmas).Mcopy API: McopyAPI v.1.0.0.2: changed display help "mcopyapi /?" and a few minor changedOPSM: OPSM v2.0: Updated version, 30%-40% faster than v1.0. Requires .NET Framework 2.0People's Note: People's Note 0.19: Added touch scrolling to the note view screen. To install: copy the appropriate CAB file onto your WM device and run it.HSS Core Framework: HSS Core v4.0.700.10: Release v4.0.700.10 December 25th, 2010 Upgrade Instructions from v4.0.700 to v4.0.700.10: (patch release) Uninstall v4.0.700 Install the new v4.0.700.10 Upgrade Instructions: (full upgrade from a version prior to v4.0.700) Uninstall your old version Update the Log Configuration, by changing the logging from Database to Machine Backup and then Truncate the hss logging tables Uninstall the HSSLOG Database using the HSSLOG Database install wizard Uninstall the HSS Core Framework Ins...NoSimplerAccounting: NoSimplerAccounting 6.0: -Fixed a bug in expense category report.NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (Thanks to Angelo)NewLife XCode: XCode v6.5.2010.1223 ????(????v3.5??): XCode v6.5.2010.1223 ????,??: NewLife.Core ??? NewLife.Net ??? XControl ??? XTemplate ????,??C#?????? XAgent ???? NewLife.CommonEnitty ??????(???,XCode??????) XCode?? ?????????,??????????????????,?????95% XCode v3.5.2009.0714 ??,?v3.5?v6.0???????????????,?????????。v3.5???????????,??????????????。 XCoder ??XTemplate?????????,????????XCode??? XCoder_Src ???????(????XTemplate????),??????????????????MiniTwitter: 1.64: MiniTwitter 1.64 ???? ?? 1.63 ??? URL ??????????????VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains more than 89 bug fixes since the 4.5.2 release. Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login t...SSH.NET Library: 2010.12.23: This release includes some bug fixes and few new fetures. Fixes Allow to retrieve big directory structures ssh-dss algorithm is fixed Populate sftp file attributes New Features Support for passhrase when private key is used Support added for diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms Allow to provide multiple key files for authentication Add support for "keyboard-interactive" authentication method...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. Release notesThis will be the last release targeting ASP.NET MVC 2 and .NET 3.5. MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4 Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan ISiteMapNodeUrlResolver is now completely responsible for generating th...Media Companion: Media Companion 3.400: Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. A manual is included to get you startedMulticore Task Framework: MTF 1.0.1: Release 1.0.1 of Multicore Task Framework.SQL Monitor - tracking sql server activities: SQL Monitor 3.0 alpha 7: 1. added script save/load in user query window 2. fixed problem with connection dialog when choosing windows auth but still ask for user name 3. auto open user table when double click one table node 4. improved alert message, added log only methodEnhSim: EnhSim 2.2.6 ALPHA: 2.2.6 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Fixing up some r...New ProjectsActiveDirectory Object Model Library: The ActiveDirectory Object Model Library, which can operate the AD easily.Allspark: Allspark is an BI focused sharing space.Calendar Control for WP7: Calendar Control for Wp7 (Windows Phone 7). I needed a calendar control for my WP7 app and since WP7 did not have any and I couldn't find anything on the internet I decided to write my own. Please feel free to use it in your project and share if you have improved on it.CryptoPad: CryptoPad is a simple notepad like application that works with encrypted files. Users can open,edit and save encrypted files using CrypoPad without andy other application, everythning is easy and transparent.Entity Visualizer: Entity Framework Debugger VisualizerHTML-IDEx: HTML-IDEx is meant to be an open-source lightweight WYSIWYG HTML editor in which the user can see what they are doing in realtime. Comparable to my other project, Brandons HTML IDE, I'm hoping this to be a huge success.IL Inject: Aspect-oriented programming based on the injection of IL instruction in the methods of an assembly that is marked by attributesIntegração no Trabalho: Este projeto será desenvolvido para ser possível a integração entre funcionarios de uma empresa. O programa funcionará em rede e através dele será possível cadastrar projetos, atividades e seus colegas poderão comentar, criticar e dar dicas nos outros projetos .Knexsys Project: Knexsys, also know as KKD (Knexsys Knowledge Discovery), is a research program that aims to study the capabilities of SQL Server and the .NET framework to implement a rule production engine to mine real-time data. It's developed in C#.mysvn: This project host including some of my own projects. I am interested in WPF and RIA. Furthermore, I want to learn more about C, C++, Java, PHP, Python.OpenTwitter: Twitter clone made with C# and LINQ. this is only a webservice, there is no client for it, the main idea behind opentwitter is to provide a framework that you can use with your own clients ( mobile devices, web pages, etc ) and data provider ( xml, sql server, oracle, etc ).OPSM: OPSM Miner & information projectOutlook UI Tools: Changes to the Outlook UI to make it more usable: * Allow reply to address a recipient instead of the sender * Keep meeting reminders from popping up when overdue many daysSimpleUploadTo: Simple UploadToTFS 4 FPSE: Allow the ability for FrontPage Server Extensions to use TFS as a source control repository.TFS CheckInNotifier: Team Foundation Server 2010 Check-in event notifier desktop application.Universe.WCF.Behaviors: Universe.WCF.Behaviors provides behaviors: - Easy migration from Remoting - Transparent delivery - Traffic Statistics - WCF Streaming Adaptor for BinaryWriter and TextWriter - etc vebshop2: vebshop2Windows Weibo all in one for Sina Sohu and QQ: Windows Weibo all in one for Sina Sohu and QQ.

    Read the article

  • 15 Kasim 2012 Oracle Day

    - by TUFEKCIOGLU,FATIH
       15.Kasim'da Harbiye Istanbul Kongre Merkezi'nde düzenlenecek Oracle Day'e ait etkinlik bilgileri : Oracle Day etkinlik bilgileri için tiklayiniz    En Son Teknolojiden Faydalanin: Inovasyona ve Rekabete Zaman Birakin 15 Kasim 2012 Bulut Bilisim, Mobilite, Sosyal Medya ve Büyük Veri, bildigimiz dünyayi yeniden tanimliyor. Bu teknolojileri kurumuna ilk getirenlerden biri olun; daha hizli yeni ürün ve hizmet gelistirme, müsteri deneyimini iyilestirme ve yeni inovatif is modellerini hayata geçirme firsati yakalayarak rekabetteki konumunuzu güçlendirin. Oracle ve is ortaklari bu noktada size, teknolojik yenilikleri kurumunuza uyarlamanizda yardim ederken, sizin de piyasadaki degisimlerden rakiplerinizden önce avantaj elde etmenizi saglar. Oracle'in, birlikte çalismak için tasarlanmis olan yazilim ve donanimlarda en yeni teknolojileri kullanarak, bilgi teknolojilerini nasil sadelestirdigini ögrenmek için Oracle Day'de bize katilin. Oracle Day'de: Oracle'in Bulut Bilisim, Büyük Veri, Sosyal Medya, is uygulamalari çözümleri hakkinda bilgi edinme, Basarili is dönüsümleri hakkinda örnek basari hikayelerini dinleme, Sizinle ayni zorluklari tecrübe eden sektör çalisanlariyla biraraya gelme, Oracle uzmanlari ve is ortaklari ile tanisma ve yeni ürün tanitimlarini izleme, Oracle OpenWorld'den en yeni ürün bilgilerini edinme firsatini kaçirmayin... Saygilarimizla, Oracle Türkiye Hemen Kaydolun! Platin Sponsor Istanbul Kongre Merkezi Taskisla Caddesi Harbiye 34367 Istanbul / Türkiye 15 Kasim 2012, Persembe 08:30 - 18:30 LCV: [email protected] oracle.com/oracleday Bizi takip edin: #oracleday   Oracle Is Ortagi Müsteri Basari Hikayesi TROUG Sunum Ingilizce'dir  Günün Ajandasi 08:45-09:30 Kayit 09:30-10:00 Hos Geldiniz Filiz Dogan, Genel Müdür, Oracle Türkiye 10:00-10:30 Navigating Complexity by Simplifying I.T. Andrew Mendelsohn, Kidemli Baskan Yardimcisi, Oracle Veritabani Sunucu Teknolojileri, Oracle           10:30-11:00 Dönüsümsel Bulut Yolculugu Ilker Kuruöz, CIO, Turkcell 11:00-11:20 Yeni Dönemde Veri Merkezlerinin Olmazsa Olmazlari Yalim Eristiren, Genel Müdür Yardimcisi, Intel 11:20-11:30 Slimfit Feyza Narli, Is Çözümleri Direktörü, Innova 11:30-12:00 Java ile Inovasyon Cuma Yigit, Teknik Mimar, Etiya Yusuf Tok, Java Grup Yöneticisi, OBSS Ersun Engel, Satis Müdürü, Oracle 12:00-12:10  Kahve Molasi       1. SALON 2. SALON 3. SALON 4. SALON 5. SALON 6. SALON 7. SALON 8. SALON 9. SALON 10. SALON   Müsteri Deneyimi: Çalisaninizi Yetkinlestirin. Markanizi Güçlendirin. Is Süreçlerinde Degisim Daha Fazla Veri, Daha Hizli Sonuç: Isinizi Analitik Çözümlerle Güçlendirin Peki Ama Nasil? Bulut Uygulayicilari için Çözüm Haritasi Is Uygulamalarinizda Inovasyonun Gücünü Kullanin Yeni Nesil Veri Merkezi ile BT'nin Gücünü Ortaya Çikartin Oracle & Is Ortaklari Çözümleri - Basari Hikayeleri I Oracle & Is Ortaklari Çözümleri - Basari Hikayeleri II Oracle Finansal Hizmetler - Core Banking and Analytical Solutions Oracle User Group (TROUG) 12:10-12:40 Müsteri Deneyimi: Çalisaninizi Yetkinlestirin. Markanizi Güçlendirin. Tekfen Ceyhan Çelik Fabrikasi Maliyet Yönetimi ve Üretim Takibi Daha Fazla Veriyle, Daha Hizli Hareket: Yaraticiligi Aksiyonla Güçlendirme Architect Your Cloud: A Blueprint for Cloud Builders Technology Strategies that Drive Business Excellence: Get Social. Be Mobile. Run Cloud. Yeni Nesil Veri Merkezi ile BT'nin Gücünü Ortaya Çikartin Innovate with Oracle - Virtual Banking and Self Service Channels What's Next for Oracle Database?   Oracle Innova Oracle Oracle Oracle Oracle Oracle Oracle 12:40-13:40 Ögle Yemegi 13:40-14:10 Uygulamalariniz Artik Bulutta 21. Yüzyilda Finans: Potansiyeli Kullanin - Sonuçlara Ulasin! "Düsünce Hizinda" Intel Islemcili Oracle Büyük Veri ve Is Analitigi Çözümleri Deniz Seviyesinden Bulutlara I CRM'inizi Sosyallestirin: Telaura Sosyal CRM Yeni Nesil Veri Merkezinde Trend: Sadelik Abone Bilgi Yönetim Sistemi (ABYS) Yüksek Oracle Veri Tabani Performansi - Oracle Veri Tabani Oracle Veri Depolama Sistemi ile Entegre Oldugunda Sigortacilikta Finansal Transformasyon ve Entegre Veri Ambari Çözümleri SQL/PLSQL Yeni Özellikler   Oracle Oracle Intel Oracle Etiya Oracle Inspirit Oracle Oraturk TROUG 14:10-14:20  Kahve Molasi 14:20-14:50 Satis ve Pazarlamada Sosyal Mecralar Müsteri Basari Hikayeleri Paneli: Degisim Yolculugu ve Sonuçlari Tukas'in Analitik Yolculugu Deniz Seviyesinden Bulutlara II Loupe: IP Tabanli Servisler için Proaktif Izleme Veri Merkezinizdeki Riskleri Ortadan Kaldirin Oracle iAS to WebLogic Migrasyonu Oracle ZFS Storage Appliance - Turkcell Deneyimleri Analytical Transformation - Risk and Finance Together to Address the Regulatory Changes Today and Tomorrow Veri Madenciligi Veritabaninda Yapilir: Uygulamalariyla Oracle R Enterprise ve Oracle Data Mining Opsiyonu   Oracle Akbank, Teknosa, Dogus Holding, Ceynak Gtech Oracle Netas Oracle OBSS Turkcell&Gantek Oracle TROUG 14:50-15:00  Kahve Molasi 15:00-15:30 Yetenek Yönetiminde Entegre Çözümler: Taleo ile Ise Alim Artik Daha Kolay Müsteri Basari Hikayeleri Paneli: Degisim Yolculugu ve Sonuçlari Büyük Veri & Exalytics - Exadata'nin Gelecek Rotasi - Bütünlesik (Engineered) Sistemler'de Ücretsiz Platin Hizmetleri Aksigorta Oracle ATS (Application Testing Suite) ile Uygulamalarini Nasil Test Ediyor? Üstün Performans ve Esneklik ile Servis Seviyenizi Arttirin Akilli Belediyecilik Uygulamalarinda Oracle BPM ile Süreç Yönetimi Teyp ile Uçtan Uca Yedekleme Çözümleri Connecting with Customers to Enhance Revenue Generation: Unleashing the Power of an Enterprise Revenue Management and Billing Solution Günümüzün Uygulama Mimarisi Sorunlari ve Çözüm Önerileri   Oracle Akbank, Teknosa, Dogus Holding, Ceynak Oracle Oracle Aksigorta Oracle Sampas Remivac Oracle TROUG 15:30-15:40  Kahve Molasi 15:40-16:10 JD Edwards Yeni Sürüm ile Satis Agi Yönetimi Oracle Policy Automation ile Türkçe Merkezi Is Kurallari Yönetimi Oracle BI ile Kurumsal Karne Çözümleri Turkcell Süperbulut ile Yazilim Artik Hizmetinizde Bütünlesik (Engineered) Sistemler Artik SAP Müsterilerinde de Fark Yaratiyor Yeni Nesil Veri Merkezi Olusturma: Denenmis ve Ispatlanmis Yöntemler Türk Telekom SOA Projesi Basari Hikayesi Yapi Kredi Sigorta Uygulamalarinda Son Kullanici Deneyimini Nasil Izliyor? 2013 NFC Trendleri Karagöz ile Hacivat Veri Tabaninda   TupperWare & Akademi Danismanlik Oracle Oracle Turkcell Oracle Oracle Türk Telekom Yapi Kredi Sigorta Smartsoft TROUG 16:10-16:20  Kahve Molasi 16:20-16:50 Tutarli, Tekil Veriye Yolculuk: Oracle Ana Veri Yönetimi Turkcell/Superonline'da Varlik Yönetimi Her Tür Verinin Endeca ile Rahat Analizi Bulut Bilisim'de Güvenlik Nasil Saglanir? Rekabette Kazanmak: GoldenGate ile Dogru Kararlari Rakiplerinizden Önce Verin. Veri Merkeziniz için Bulut Altyapi Stratejileri Oracle Orkestrasi: Çok Sesli Yönetime Kulak Verin Lojistik Zekasi / Horoz Lojistik Basari Hikayesi Bütünlesik Sistemler (Engineered Systems) ile Yüksek Performansli Java Uygulamalari International Growth - Helping the Banks Standardize Overseas Operations Oracle Big Data   Oracle Turkcell Oracle Oracle Oracle Oracle Kora & Horoz Lojistik Oracle Oracle TROUG 16:50-18:30  Kokteyl     Eger bir kamu kurumunun/kurulusunun çalisani veya görevlisi iseniz, bu etkinlige iliskin önemli etik kurallara iliskin bilgi için lütfen buraya tiklayiniz Copyright 2012, Oracle and/or its affiliates. All rights reserved. Bize Ulasin | Yasal Uyarilar | Gizlilik Beyani

    Read the article

  • Native packaging for JavaFX

    - by igor
    JavaFX 2.2 adds new packaging option for JavaFX applications, allowing you to package your application as a "native bundle". This gives your users a way to install and run your application without any external dependencies on a system JRE or FX SDK. I'd like to give you an overview of what is it, motivation behind it, and finally explain how to get started with it. Screenshots may give you some idea of user experience but first hand experience is always the best. Before we go into all of the boring details, here are few different flavors of Ensemble for you to try: exe, msi, dmg, rpm installers and zip of linux bundle for non-rpm aware systems. Alternatively, check out native packages for JFXtras 2. Whats wrong with existing deployment options? JavaFX 2 applications are easy to distribute as a standalone application or as an application deployed on the web (embedded in the web page or as link to launch application from the webpage). JavaFX packaging tools, such as ant tasks and javafxpackager utility, simplify the creation of deployment packages even further. Why add new deployment options? JavaFX applications have implicit dependency on the availability of Java and JavaFX runtimes, and while existing deployment methods provide a means to validate the system requirements are met -- and even guide user to perform required installation/upgrades -- they do not fully address all of the important scenarios. In particular, here are few examples: the user may not have admin permissions to install new system software if the application was certified to run in the specific environment (fixed version of Java and JavaFX) then it may be hard to ensure user has this environment due to an autoupdate of the system version of Java/JavaFX (to ensure they are secure). Potentially, other apps may have a requirement for a different JRE or FX version that your app is incompatible with. your distribution channel may disallow dependencies on external frameworks (e.g. Mac AppStore) What is a "native package" for JavaFX application? In short it is  A Wrapper for your JavaFX application that makes is into a platform-specific application bundle Each Bundle is self-contained and includes your application code and resources (same set as need to launch standalone application from jar) Java and JavaFX runtimes (private copies to be used by this application only) native application launcher  metadata (icons, etc.) No separate installation is needed for Java and JavaFX runtimes Can be distributed as .zip or packaged as platform-specific installer No application changes, the same jar app binaries can be deployed as a native bundle, double-clickable jar, applet, or web start app What is good about it: Easy deployment of your application on fresh systems, without admin permissions when using .zip or a user-level installer No-hassle compatibility.  Your application is using a private copy of Java and JavaFX. The developer (you!) controls when these are updated. Easily package your application for Mac AppStore (or Windows, or...) Process name of running application is named after your application (and not just java.exe)  Easily deploy your application using enterprise deployment tools (e.g. deploy as MSI) Support is built in into JDK 7u6 (that includes JavaFX 2.2) Is it a silver bullet for the deployment that other deployment options will be deprecated? No.  There are no plans to deprecate other deployment options supported by JavaFX, each approach addresses different needs. Deciding whether native packaging is a best way to deploy your application depends on your requirements. A few caveats to consider: "Download and run" user experienceUnlike web deployment, the user experience is not about "launch app from web". It is more of "download, install and run" process, and the user may need to go through additional steps to get application launched - e.g. accepting a browser security dialog or finding and launching the application installer from "downloads" folder. Larger download sizeIn general size of bundled application will be noticeably higher than size of unbundled app as a private copy of the JRE and JavaFX are included.  We're working to reduce the size through compression and customizable "trimming", but it will always be substantially larger than than an app that depends on a "system JRE". Bundle per target platformBundle formats are platform specific. Currently a native bundle can only be produced for the same system you are building on.  That is, if you want to deliver native app bundles on Windows, Linux and Mac you will have to build your project on all three platforms. Application updates are the responsibility of developerWeb deployed Java applications automatically download application updates from the web as soon as they are available. The Java Autoupdate mechanism takes care of updating the Java and JavaFX runtimes to latest secure version several times every year. There is no built in support for this in for bundled applications. It is possible to use 3rd party libraries (like Sparkle on Mac) to add autoupdate support at application level.  In a future version of JavaFX we may include built-in support for autoupdate (add yourself as watcher for RT-22211 if you are interested in this) Getting started with native bundles First, you need to get the latest JDK 7u6 beta build (build 14 or later is recommended). On Windows/Mac/Linux it comes with JavaFX 2.2 SDK as part of JDK installation and contains JavaFX packaging tools, including: bin/javafxpackagerCommand line utility to produce JavaFX packages. lib/ant-javafx.jar Set of ant tasks to produce JavaFX packages (most recommended way to deploy apps) For general information on how to use them refer to the Deploying JavaFX Application guide. Once you know how use these tools to package your JavaFX application for other deployment methods there are only a few minor tweaks necessary to produce native bundles: make sure java is used from JDK7u6 bundle you have installed adjust your PATH settings if needed  if you are using ant tasks add "nativeBundles=all" attribute to fx:deploy task if you are using javafxpackager pass "-native" option to deploy command or if you are using makeall command then it will try build native packages by default result bundles will be in the "bundles" folder next to other deployment artifacts Note that building some types of native packages (e.g. .exe or .msi) may require additional free 3rd party software to be installed and available on PATH. As of JDK 7u6 build 14 you could build following types of packages: Windows bundle image EXE Inno Setup 5 or later is required Result exe will perform user level installation (no admin permissions are required) At least one shortcut will be created (menu or desktop) Application will be launched at the end of install MSI WiX 3.0 or later is required Result MSI will perform user level installation (no admin permissions are required) At least one shortcut will be created (menu or desktop)  MacOS bundle image dmg (drag and drop) installer Linux bundle image rpm rpmbuild is required shortcut will be added to the programs menu If you are using Netbeans for producing the deployment packages then you will need to add custom build step to the build.xml to execute the fx:deploy task with native bundles enabled. Here is what we do for BrickBreaker sample: <target name="-post-jfx-deploy"> <fx:deploy width="${javafx.run.width}" height="${javafx.run.height}" nativeBundles="all" outdir="${basedir}/${dist.dir}" outfile="${application.title}"> <fx:application name="${application.title}" mainClass="${javafx.main.class}"> <fx:resources> <fx:fileset dir="${basedir}/${dist.dir}" includes="BrickBreaker.jar"/> </fx:resources> <info title="${application.title}" vendor="${application.vendor}"/> </fx:application> </fx:deploy> </target> This is pretty much regular use of fx:deploy task, the only special thing here is nativeBundles="all". Perhaps the easiest way to try building native bundles is to download the latest JavaFX samples bundle and build Ensemble, BrickBreaker or SwingInterop. Please give it a try and share your experience. We need your feedback! BTW, do not hesitate to file bugs and feature requests to JavaFX bug database! Wait! How can i ... This entry is not a comprehensive guide into native bundles, and we plan to post on this topic more. However, I am sure that once you play with native bundles you will have a lot of questions. We may not have all the answers, but please do not hesitate to ask! Knowing all of the questions is the first step to finding all of the answers.

    Read the article

  • How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel

    - by Javier Treviño
    Fetching data from a database to then get it into an Excel spreadsheet to do analysis, reporting, transforming, sharing, etc. is a very common task among users. There are several ways to extract data from a MySQL database to then import it to Excel; for example you can use the MySQL Connector/ODBC to configure an ODBC connection to a MySQL database, then in Excel use the Data Connection Wizard to select the database and table from which you want to extract data from, then specify what worksheet you want to put the data into.  Another way is to somehow dump a comma delimited text file with the data from a MySQL table (using the MySQL Command Line Client, MySQL Workbench, etc.) to then in Excel open the file using the Text Import Wizard to attempt to correctly split the data in columns. These methods are fine, but involve some degree of technical knowledge to make the magic happen and involve repeating several steps each time data needs to be imported from a MySQL table to an Excel spreadsheet. So, can this be done in an easier and faster way? With MySQL for Excel you can. MySQL for Excel features an Import MySQL Data action where you can import data from a MySQL Table, View or Stored Procedure literally with a few clicks within Excel.  Following is a quick guide describing how to import data using MySQL for Excel. This guide assumes you already have a working MySQL Server instance, Microsoft Office Excel 2007 or 2010 and MySQL for Excel installed. 1. Opening MySQL for Excel Being an Excel Add-In, MySQL for Excel is opened from within Excel, so to use it open Excel, go to the Data tab located in the Ribbon and click MySQL for Excel at the far right of the Ribbon. 2. Creating a MySQL Connection (may be optional) If you have MySQL Workbench installed you will automatically see the same connections that you can see in MySQL Workbench, so you can use any of those and there may be no need to create a new connection. If you want to create a new connection (which normally you will do only once), in the Welcome Panel click New Connection, which opens the Setup New Connection dialog. Here you only need to give your new connection a distinctive Connection Name, specify the Hostname (or IP address) where the MySQL Server instance is running on (if different than localhost), the Port to connect to and the Username for the login. If you wish to test if your setup is good to go, click Test Connection and an information dialog will pop-up stating if the connection is successful or errors were found. 3.Opening a connection to a MySQL Server To open a pre-configured connection to a MySQL Server you just need to double-click it, so the Connection Password dialog is displayed where you enter the password for the login. 4. Selecting a MySQL Schema After opening a connection to a MySQL Server, the Schema Selection Panel is shown, where you can select the Schema that contains the Tables, Views and Stored Procedures you want to work with. To do so, you just need to either double-click the desired Schema or select it and click Next >. 5. Importing data… All previous steps were really the basic minimum needed to drill-down to the DB Object Selection Panel  where you can see the Database Objects (grouped by type: Tables, Views and Procedures in that order) that you want to perform actions against; in the case of this guide, the action of importing data from them. a. From a MySQL Table To import from a Table you just need to select it from the list of Database Objects’ Tables group, after selecting it you will note actions below the list become available; then click Import MySQL Data. The Import Data dialog is displayed; you can see some basic information here like the name of the Excel worksheet the data will be imported to (in the window title), the Table Name, the total Row Count and a 10 row preview of the data meant for the user to see the columns that the table contains and to provide a way to select which columns to import. The Import Data dialog is designed with defaults in place so all data is imported (all rows and all columns) by just clicking Import; this is important to minimize the number of clicks needed to get the job done. After the import is performed you will have the data in the Excel worksheet formatted automatically. If you need to override the defaults in the Import Data dialog to change the columns selected for import or to change the number of imported rows you can easily do so before clicking Import. In the screenshot below the defaults are overridden to import only the first 3 columns and rows 10 – 60 (Limit to 50 Rows and Start with Row 10). If the number of rows to be imported exceeds the maximum number of rows Excel can hold in its worksheet, a warning will be displayed in the dialog, meaning the imported number of rows will be limited by that maximum number (65,535 rows if the worksheet is in Compatibility Mode).  In the screenshot below you can see the Table contains 80,559 rows, but only 65,534 rows will be imported since the first row is used for the column names if the Include Column Names as Headers checkbox is checked. b. From a MySQL View Similar to the way of importing from a Table, to import from a View you just need to select it from the list of Database Objects’ Views group, then click Import MySQL Data. The Import Data dialog is displayed; identically to the way everything looks when importing from a table, the dialog displays the View Name, the total Row Count and the data preview grid. Since Views are really a filtered way to display data from Tables, it is actually as if we are extracting data from a Table; so the Import Data dialog is actually identical for those 2 Database Objects. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot. Note that you can override the defaults in the Import Data dialog in the same way described above for importing data from Tables. Also the Compatibility Mode warning will be displayed if data exceeds the maximum number of rows explained before. c. From a MySQL Procedure Too import from a Procedure you just need to select it from the list of Database Objects’ Procedures group (note you can see Procedures here but not Functions since these return a single value, so by design they are filtered out). After the selection is made, click Import MySQL Data. The Import Data dialog is displayed, but this time you can see it looks different to the one used for Tables and Views.  Given the nature of Store Procedures, they require first that values are supplied for its Parameters and also Procedures can return multiple Result Sets; so the Import Data dialog shows the Procedure Name and the Procedure Parameters in a grid where their values are input. After you supply the Parameter Values click Call. After calling the Procedure, the Result Sets returned by it are displayed at the bottom of the dialog; output parameters and the return value of the Procedure are appended as the last Result Set of the group. You can see each Result Set is displayed as a tab so you can see a preview of the returned data.  You can specify if you want to import the Selected Result Set (default), All Result Sets – Arranged Horizontally or All Result Sets – Arranged Vertically using the Import drop-down list; then click Import. After the import is performed, the data in the Excel spreadsheet looks like the following screenshot.  Note in this example all Result Sets were imported and arranged vertically. As you can see using MySQL for Excel importing data from a MySQL database becomes an easy task that requires very little technical knowledge, so it can be done by any type of user. Hope you enjoyed this guide! Remember that your feedback is very important for us, so drop us a message: MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/ Forum - http://forums.mysql.com/list.php?172 Facebook - http://www.facebook.com/mysql Cheers!

    Read the article

  • Windows Azure Evolution &ndash; Preview Developer Portal

    - by Shaun
    With the MEET Windows Azure event on 7th June, there are many new features and updates in windows azure platform. In the coming several posts I will try to cover some of them. And in the first post here I would like to just have a quick walkthrough of the new preview developer portal.   History of the Developer Portal If you have been working with windows azure since 2009 or 2010, you should remember the first version of the developer portal. It was built in HTML with very limited features. I have the impression when I was using is old one. The layout is not that attractive and you have very limited features. On November, 2010 alone with the SDK 1.3 release, the developer portal was getting a big jump. In order to give more usability and features this it turned to be built on Silverlight. Hence it runs like a desktop application with many windows, lists, commands and context menus. From 2010 till now many features were involved into this portal, such as the remote desktop, co-admin, virtual connect, VM role, etc.. And the portal itself became more and more complicated. But it brought some problems by using the Silverlight. The first one is the browser capability. As you know in most mobile and tablet device the browser doesn’t allow the rich content plugin, such as Flash and Silverlight. This means people cannot open and configure their azure services from their iPad, iPhone and Windows Phone, etc., even though what they need may just be restart a hosted service, or view the status of their databases. Another problem is the performance. Silverlight provides rich experience to the users, but also needs more bandwidth. So in this upgrade the preview developer portal will be back to use HTML, with JavaScript, as a mobile friendly, cross browser, interactively web site.   Preview Portal vs. Silverlight Portal Before I started to talk about the new preview portal I’d better highlight that, this preview portal is a PREVIEW version, which means even though you can do almost all features that already in the old one, as long as some cool new features I will mention in the coming several posts, there are something still under developed and migrated. So sometimes you need to switch back to the old one. For example, in preview portal there is no co-admin manage function, no remote desktop function and the SQL database manage function will take you back to the old SQL Azure Manage Portal. But as Microsoft said these missing features will be moved in the preview portal in the couple of next few months. Since the public URL of the developer portal, https://windows.azure.com/, had been changed to point to this preview one, you need to click to preview button on top of the page and click the “Take me to the previous portal” link.   Overview There are four parts in the preview portal. On the top is the header which shows the account you are currently logging in. If you click on the header it will show the top menu of windows azure, where you can navigate to the windows azure home page, the price information page, community and account, etc.. The navigation bar is on the left hand side, with the categories listed below. ALL ITEMS All items in your windows azure account, includes the web sites, services, databases, etc.. WEB SITES The web sites in your windows azure account. It will only show the web sites you have. The linked resources will be shown if you drill down into a web site. VIRTUAL MACHINES The virtual machines that you had been deployed to azure. CLOUD SERVICES All windows azure hosted services in your account. SQL DATABASES All SQL databases (SQL Azure) in your account. STORAGE All windows azure storage services in your account. NETWORKS The virtual network (Windows Azure Connect) you had been created. The available items will be listed in the main part of the page based on which category your currently selected. If there’s no item it will show the link to you to quick create. At the bottom of the page there will be the command and information bar. Based on what is selected and what is performed by the user, it will show the related information and commands. For example, in the image below when I was creating a new web site, the information bar told me that my web site is being provisioned; and there are two commands in the command bar. And once it ready the command bar will show some commands that I can do to my new web site. The “Web Sites” is a new feature introduced alone with this upgrade. It gives us an easier and quicker way to establish a website from the scratch or from some existing library. I will introduce it more details in the coming next post. Also in the command bar you can create a service by clicking the NEW button. It will slide the creation panel up to you.   Where’s My Hosted Services The Windows Azure Hosted Services had been renamed to the Cloud Services. Create a new service would be very easy. Just click the NEW button at the bottom of the page, and select the CLOUD SERVICE and QIUICK CREATE. This will create a blank hosted service without deployment and certificate. It just needs you to specify the service URL and the affinity/region. Then the service will be shown in the list. If you clicked the item all information will be shown in the main part. Since there’s no package deployed to this service so currently we cannot see any information about it. But we can upload the package by using the command at the bottom. And as you can see, we could manage the configuration, instances, certificates and we can scale up and down (change the VM size), in and out (increase and decrease the instance count) to our service. Assuming I had created an ASP.NET MVC 3 web role project in Visual Studio and completed the package. Then I can click the UPLOAD button in this page to deploy my package. In the popping up window I just specify my deployment name, package file and configure file. Also I can check the box below so that it will NOT warn me if only one instance of this deployment. Once we clicked the OK button our package will be uploaded and provisioned by the platform. After a while we can see the service was ready from the information bar. We can have the basic information about this service and deployment if we to the dashboard page. For example the usage overview diagram, status, URL, public IP address, etc.. In the configure page we can view and change the CSCFG content such as the monitor setting, connection strings, OS family. In scale page we can increase and decrease the count of the instances. And in the instances page we can view all instances status. And, if your services is using some SQL databases and storages they will be shown as the linked resources under the linked resources page. And you can manage the certificates of this service as well under the certificates page.   How About My Storage Services The storage service can be managed by clicking into the STORAGES link in the navigation bar. And we can create a new storage service from the NEW button. After specify the storage name and region it will be previsioned by the platform. If you want to copy or manage the storage key you can just click the Manage Keys button at the bottom, which is very easy. What I want to highlight here is that, you can monitor your storage service by enabling the monitor configuration. Click the storage item in the list and navigate to the configure page. As you can see in the page you can enable the monitoring for blob, table and queue. And you can also enable the logging when any requests come to the storage. But as the tooltip shown in the page, enabling the monitoring and logging will increase the usage of the storage, which means increase the bill of them. So make sure you enable them properly.   And My SQL Databases (SQL Azure) The last thing I want to quick introduce is the SQL databases, which was formally named SQL Azure. You can create a new SQL Database Server and a new database by clicking the ADD button under the SQL Database navigation item. In the popping up windows just specify the database name, the edition, size, collation and the server. You can select an existing SQL Database Server if you have, or cerate a new one. If you selected to create a new server, there will be another step you need to do, which is specify the server login, password and the region. Once it ready you can mange your databases as well as the servers in the portal. In a particular server you can update the firewall settings in its Configure page. So, What Else There are some other area on the preview portal I didn’t cover, such as the virtual machines, virtual network and web sites. Regarding the virtual machines and web sites I will talk about them in the future separated post. Regarding the virtual network, it the Windows Azure Connect we are familiar with. But as I mention in the beginning of this post, the preview portal is still under developed. Some features are not available here. For example, you cannot manage the co-admin of your subscriptions, you cannot open the remote desktop on your hosted services, and you cannot navigate to the Windows Azure Service Bus, Access Control and Caching, which formally named Windows Azure AppFabric directly. In these cases you need to navigate back to the old portal. So in the coming several months we might need to use both these two sites.   Summary In this post I quick introduced the new windows azure developer portal. Since it had been rearranged and renamed I demonstrated some features that existing in the old portal, such as how to create and deploy a hosted service, how to provision a storage service and SQL database. All features in the old portal had been, is being and will be migrated into this new portal, but some of them were in a different category and page we need to figure out.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • panic! apache2 cannot start due to unable to read error.log !

    - by vvvvvvv
    apache2 fails to start because it cannot open error.log I've checked the syslog, and found something troubling... Jan 20 02:58:18 unassigned sm-mta[3559]: o0FAD04C017861: to=<[email protected]>, delay=4+22:45:18, xdelay=00:06:18, mailer=esmtp, pri=63390823, relay=asdfa$ Jan 20 03:00:01 unassigned /USR/SBIN/CRON[3939]: (root) CMD (if [ -x /usr/bin/vnstat ] && [ `ls /var/lib/vnstat/ | wc -l` -ge 1 ]; then /usr/bin/vnstat -u; f$ Jan 20 03:00:01 unassigned /USR/SBIN/CRON[3944]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jan 20 03:00:01 unassigned /USR/SBIN/CRON[3949]: (www-data) CMD ([ -x /usr/lib/cgi-bin/awstats.pl -a -f /etc/awstats/awstats.conf -a -r /var/log/apache/acces$ Jan 20 03:02:48 unassigned kernel: [371919.642705] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:02:48 unassigned kernel: [371919.642754] ata3.00: BMDMA stat 0x24 Jan 20 03:02:48 unassigned kernel: [371919.642779] ata3.00: cmd 25/00:08:37:59:e2/00:00:42:00:00/e0 tag 0 dma 4096 in Jan 20 03:02:48 unassigned kernel: [371919.642780] res 51/01:00:37:59:e2/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:02:48 unassigned kernel: [371919.642824] ata3.00: status: { DRDY ERR } Jan 20 03:02:48 unassigned kernel: [371919.657647] ata3.00: configured for UDMA/133 Jan 20 03:02:48 unassigned kernel: [371919.657661] ata3: EH complete Jan 20 03:02:50 unassigned kernel: [371921.857580] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:02:50 unassigned kernel: [371921.857620] ata3.00: BMDMA stat 0x24 Jan 20 03:02:50 unassigned kernel: [371921.857645] ata3.00: cmd 25/00:08:37:59:e2/00:00:42:00:00/e0 tag 0 dma 4096 in Jan 20 03:02:50 unassigned kernel: [371921.857646] res 51/01:00:37:59:e2/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:02:50 unassigned kernel: [371921.857688] ata3.00: status: { DRDY ERR } Jan 20 03:02:50 unassigned kernel: [371921.881468] ata3.00: configured for UDMA/133 Jan 20 03:02:54 unassigned kernel: [371921.881479] ata3: EH complete Jan 20 03:02:54 unassigned kernel: [371924.081382] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:02:54 unassigned kernel: [371924.081417] ata3.00: BMDMA stat 0x24 Jan 20 03:02:54 unassigned kernel: [371924.081443] ata3.00: cmd 25/00:08:37:59:e2/00:00:42:00:00/e0 tag 0 dma 4096 in Jan 20 03:02:54 unassigned kernel: [371924.081444] res 51/01:00:37:59:e2/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:02:54 unassigned kernel: [371924.081487] ata3.00: status: { DRDY ERR } Jan 20 03:02:54 unassigned kernel: [371924.105252] ata3.00: configured for UDMA/133 Jan 20 03:02:54 unassigned kernel: [371924.105261] ata3: EH complete Jan 20 03:02:54 unassigned kernel: [371933.656925] sd 2:0:0:0: [sda] 1465149168 512-byte hardware sectors (750156 MB) Jan 20 03:02:54 unassigned kernel: [371933.656941] sd 2:0:0:0: [sda] Write Protect is off Jan 20 03:02:54 unassigned kernel: [371933.656944] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 20 03:02:54 unassigned kernel: [371933.656956] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 20 03:02:54 unassigned kernel: [371933.656972] sd 2:0:0:0: [sda] 1465149168 512-byte hardware sectors (750156 MB) Jan 20 03:02:54 unassigned kernel: [371933.656979] sd 2:0:0:0: [sda] Write Protect is off Jan 20 03:02:54 unassigned kernel: [371933.656982] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jan 20 03:02:54 unassigned kernel: [371933.656993] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 20 03:03:34 unassigned kernel: [371966.060069] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371971.776846] ata3.00: BMDMA stat 0x24 Jan 20 03:05:48 unassigned kernel: [371971.776871] ata3.00: cmd 25/00:18:87:10:ee/00:00:42:00:00/e0 tag 0 dma 12288 in Jan 20 03:05:48 unassigned kernel: [371971.776872] res 51/01:00:87:10:ee/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:05:48 unassigned kernel: [371971.776914] ata3.00: status: { DRDY ERR } Jan 20 03:05:48 unassigned kernel: [371971.800668] ata3.00: configured for UDMA/133 Jan 20 03:05:48 unassigned kernel: [371971.800687] ata3: EH complete Jan 20 03:05:48 unassigned kernel: [371974.157850] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371974.157885] ata3.00: BMDMA stat 0x24 Jan 20 03:05:48 unassigned kernel: [371974.157911] ata3.00: cmd 25/00:18:87:10:ee/00:00:42:00:00/e0 tag 0 dma 12288 in Jan 20 03:05:48 unassigned kernel: [371974.157912] res 51/01:00:88:10:ee/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:05:48 unassigned kernel: [371974.157956] ata3.00: status: { DRDY ERR } Jan 20 03:05:48 unassigned kernel: [371974.179773] ata3.00: configured for UDMA/133 Jan 20 03:05:48 unassigned kernel: [371974.179786] ata3: EH complete Jan 20 03:05:48 unassigned kernel: [371976.398570] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371976.398610] ata3.00: BMDMA stat 0x24 Jan 20 03:05:48 unassigned kernel: [371976.398635] ata3.00: cmd 25/00:18:87:10:ee/00:00:42:00:00/e0 tag 0 dma 12288 in Jan 20 03:05:48 unassigned kernel: [371976.398636] res 51/01:00:88:10:ee/01:00:42:00:00/e0 Emask 0x1 (device error) Jan 20 03:05:48 unassigned kernel: [371976.398678] ata3.00: status: { DRDY ERR } Jan 20 03:05:48 unassigned kernel: [371976.423477] ata3.00: configured for UDMA/133 Jan 20 03:05:48 unassigned kernel: [371976.423495] sd 2:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE,SUGGEST_OK Jan 20 03:05:48 unassigned kernel: [371976.423498] sd 2:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor] Jan 20 03:05:48 unassigned kernel: [371976.423501] Descriptor sense data with sense descriptors (in hex): Jan 20 03:05:48 unassigned kernel: [371976.423503] 72 03 13 00 00 00 00 0c 00 0a 80 00 00 00 00 00 Jan 20 03:05:48 unassigned kernel: [371976.423508] 42 ee 10 88 Jan 20 03:05:48 unassigned kernel: [371976.423510] sd 2:0:0:0: [sda] Add. Sense: Address mark not found for data field Jan 20 03:05:48 unassigned kernel: [371976.423515] end_request: I/O error, dev sda, sector 1122898056 Jan 20 03:05:48 unassigned kernel: [371976.423536] ata3: EH complete Jan 20 03:05:48 unassigned kernel: [371978.630504] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Jan 20 03:05:48 unassigned kernel: [371978.630547] ata3.00: BMDMA stat 0x24

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
    For upcoming posts it seemed to be a good idea to dedicate some time for cluster basic concepts and theory. This post misses a lot of details that would explode the articlesize, should you have questions, do not hesitate to ask them in the comments.  The goal here is to get some concepts straight. I can't promise to give you an overall complete definitions of cluster, cluster agent, quorum, voting, fencing, split brain condition, so the following is more of an explanation. Here we go. -------- Cluster, HA, failover, switchover, scalability -------- An attempted definition of a Cluster: A cluster is a set (2+) server nodes dedicated to keep application services alive, communicating through the cluster software/framework with eachother, test and probe health status of servernodes/services and with quorum based decisions and with switchover/failover techniques keep the application services running on them available. That is, should a node that runs a service unexpectedly lose functionality/connection, the other ones would take over the and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent clusterconfiguration is the main goal of a cluster.  At this point we have to add that this defines a HA-cluster, a High-Availability cluster, where the clusternodes are planned to run the services in an active-standby, or failover fashion. An example could be a single instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate clusternodes serving servicerequests simultaneously. An example for this version could be a webserver that forwards connection requests to many backend servers in a round-robin way. Or a database running in active-active RAC setup.  -------- Cluster arhitecture, interconnect, topologies -------- Now, what is a cluster made of? Servers, right. These servers (the clusternodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the clusternodes. These connection are called interconnects.How many clusternodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two clusternodes:  There are several more topologies, clicking the image above will take you to the relevant documentation. Also, to answer the question Solaris Cluster allows you to run up to 16 servers in a cluster. Where shall these clusternodes be placed? A very important question. The right answer is: It depends on what you plan to achieve with the cluster. Do you plan to avoid only a server outage? Then you can place them right next to eachother in the datacenter. Do you need to avoid DataCenter outage? In that case of course you should place them at least in different fire zones. Or in two geographically distant DataCenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched- or campus cluster, the clusternodes being several kilometers away from eachother. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal.  What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two, separate (locally-HA) clusters.  -------- Cluster resource types, agents, resources, resource groups -------- So how does the cluster manage my applications? The cluster needs to start, stop and probe your applications. If you application runs, the cluster needs to check regularly if the application state is healthy, does it respond over the network, does it have all the processes running, etc. This is called probing. If the cluster deems the application is in a faulty state, then it can try to restart it locally or decide to switch (stop on node A, start on node B) the service. Starting, stopping and probing are the three actions that a cluster agent does. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, or the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle Weblogic, Zones, LDoms, NFS, DNS, etc.We also need to clarify the difference between a cluster resource and the cluster resource group.A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resourcegroups, and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another, you have to switch the group between nodes, and the agents of the cluster resources in the group will do the following:  On node A Shut down the DB Unconfigure the LogicalHost IP the DB Listener listens on unmount the filesystem   Then, on node B: mount the FS configure the IP  startup the DB -------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia -------- How do the clusternodes agree upon their action? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing.Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem, clusternodes think very much all alike. Still, every action needs to be governed upon in a productive system, and has to be agreed upon. Agreeing is easy as long as the clusternodes all behave and talk to eachother over the interconnect. But if the interconnect is gone/down, this all gets tricky and confusing. Clusternodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!". The problem is, as I have already mentioned, clusternodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster they both have only 50% of the votes, that is, they themselves alone are not allowed to run a cluster.  This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons.". They need additional votes to run the cluster. For this requirement a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone, (this is what we call a split brain condition) both nodes start to race and try to reserve the quorum device to themselves. They do this, because the quorum device bears an additional vote, that could ensure majority (50% +1). The one that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI reserves it) wins the right to build/run a cluster, the other one - realizing he was late - panics/reboots to ensure the cluster config stays consistent.  Losing the interconnect isn't only endangering the availability of services, but it also endangers the cluster configuration consistence. Just imagine node A being down and during that the cluster configuration changes. Now node B goes down, and node A comes up. It isn't uptodate about the cluster configuration's changes so it will refuse to start a cluster, since that would lead to cluster amnesia, that is the cluster had some changes, but now runs with an older cluster configuration repository state, that is it's like it forgot about the changes.  Also, to ensure application data consistence, the clusternode that wins the race makes sure that a server that isn't part of or can't currently join the cluster can access the devices. This procedure is called fencing. This usually happens to storage LUNs via SCSI reservation.  Now, another important question: Where do I place the quorum disk?  Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC, and that gets hit by a meteor, you lose one clusternode, which isn't a problem, but you also lose your quorum, and the south clusternode can't keep the cluster running lacking the votes. This problem can't be solved with two sites and a campus cluster. You will need a third site to either place the quorum server to, or a third clusternode. Otherwise, lacking majority, if you lose the site that had your quorum, you lose the cluster. Okay, we covered the very basics. We haven't talked about virtualization support, CCR, ClusterFilesystems, DID devices, affinities, storage-replication, management tools, upgrade procedures - should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a followup post too. Now I really want to move on to the second part in the series: ClusterInstallation.  Oh, as for additional source of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html

    Read the article

  • Does HTML 5 &ldquo;Rich vs. Reach&rdquo; a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development for line of business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note, although the instructions for each game tell you to press the A key to start, press the Z key instead.). Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, i can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible?  The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order.   Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script.  As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.  They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, limiting valid input on a field to dates, email addresses or a list of values.  There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with Javascript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this?  From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think.  Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet.  YouTube, on the other hand, already has an experimental HTML 5-based version of their site.  TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices.  And the New York Times’ Web site now embeds some video clips without resorting to Flash.  They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this?  I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design whose primary focus has been to help marketing directors get traction online.  The firm’s client roster includes the likes Time, Inc., Scholastic and PBS.  Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector.  When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all!  In particular, not for media and advertising customers!  These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack.  For whatever reason, each firm seems to see the other’s toolset as a more viable choice.  But both agree that the developer tool story around HTML 5 is deficient.  Pinto explains “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.“  Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash.  If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript.  It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees.  And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at risk of diminishing their Flash platfrom. Yesterday Adobe announced that their hugely popular Deamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development.  In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media,  a digital and offline branding firm, serving the media and marketing sectors, among others.  Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate.  In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time.  And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to  the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world.  And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world.  If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers.  Can this development in the digital arena save the titans of the print world?  I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.

    Read the article

  • T-SQL Tuesday #53-Matt's Making Me Do This!

    - by Most Valuable Yak (Rob Volk)
    Hello everyone! It's that time again, time for T-SQL Tuesday, the wonderful blog series started by Adam Machanic (b|t). This month we are hosted by Matt Velic (b|t) who asks the question, "Why So Serious?", in celebration of April Fool's Day. He asks the contributors for their dirty tricks. And for some reason that escapes me, he and Jeff Verheul (b|t) seem to think I might be able to write about those. Shocked, I am! Nah, not really. They're absolutely right, this one is gonna be fun! I took some inspiration from Matt's suggestions, namely Resource Governor and Login Triggers.  I've done some interesting login trigger stuff for a presentation, but nothing yet with Resource Governor. Best way to learn it! One of my oldest pet peeves is abuse of the sa login. Don't get me wrong, I use it too, but typically only as SQL Agent job owner. It's been a while since I've been stuck with it, but back when I started using SQL Server, EVERY application needed sa to function. It was hard-coded and couldn't be changed. (welllllll, that is if you didn't use a hex editor on the EXE file, but who would do such a thing?) My standard warning applies: don't run anything on this page in production. In fact, back up whatever server you're testing this on, including the master database. Snapshotting a VM is a good idea. Also make sure you have other sysadmin level logins on that server. So here's a standard template for a logon trigger to address those pesky sa users: CREATE TRIGGER SA_LOGIN_PRIORITY ON ALL SERVER WITH ENCRYPTION, EXECUTE AS N'sa' AFTER LOGON AS IF ORIGINAL_LOGIN()<>N'sa' OR APP_NAME() LIKE N'SQL Agent%' RETURN; -- interesting stuff goes here GO   What can you do for "interesting stuff"? Books Online limits itself to merely rolling back the logon, which will throw an error (and alert the person that the logon trigger fired).  That's a good use for logon triggers, but really not tricky enough for this blog.  Some of my suggestions are below: WAITFOR DELAY '23:59:59';   Or: EXEC sp_MSforeach_db 'EXEC sp_detach_db ''?'';'   Or: EXEC msdb.dbo.sp_add_job @job_name=N'`', @enabled=1, @start_step_id=1, @notify_level_eventlog=0, @delete_level=3; EXEC msdb.dbo.sp_add_jobserver @job_name=N'`', @server_name=@@SERVERNAME; EXEC msdb.dbo.sp_add_jobstep @job_name=N'`', @step_id=1, @step_name=N'`', @command=N'SHUTDOWN;'; EXEC msdb.dbo.sp_start_job @job_name=N'`';   Really, I don't want to spoil your own exploration, try it yourself!  The thing I really like about these is it lets me promote the idea that "sa is SLOW, sa is BUGGY, don't use sa!".  Before we get into Resource Governor, make sure to drop or disable that logon trigger. They don't work well in combination. (Had to redo all the following code when SSMS locked up) Resource Governor is a feature that lets you control how many resources a single session can consume. The main goal is to limit the damage from a runaway query. But we're not here to read about its main goal or normal usage! I'm trying to make people stop using sa BECAUSE IT'S SLOW! Here's how RG can do that: USE master; GO CREATE FUNCTION dbo.SA_LOGIN_PRIORITY() RETURNS sysname WITH SCHEMABINDING, ENCRYPTION AS BEGIN RETURN CASE WHEN ORIGINAL_LOGIN()=N'sa' AND APP_NAME() NOT LIKE N'SQL Agent%' THEN N'SA_LOGIN_PRIORITY' ELSE N'default' END END GO CREATE RESOURCE POOL SA_LOGIN_PRIORITY WITH ( MIN_CPU_PERCENT = 0 ,MAX_CPU_PERCENT = 1 ,CAP_CPU_PERCENT = 1 ,AFFINITY SCHEDULER = (0) ,MIN_MEMORY_PERCENT = 0 ,MAX_MEMORY_PERCENT = 1 -- ,MIN_IOPS_PER_VOLUME = 1 ,MAX_IOPS_PER_VOLUME = 1 -- uncomment for SQL Server 2014 ); CREATE WORKLOAD GROUP SA_LOGIN_PRIORITY WITH ( IMPORTANCE = LOW ,REQUEST_MAX_MEMORY_GRANT_PERCENT = 1 ,REQUEST_MAX_CPU_TIME_SEC = 1 ,REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 1 ,MAX_DOP = 1 ,GROUP_MAX_REQUESTS = 1 ) USING SA_LOGIN_PRIORITY; ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION=dbo.SA_LOGIN_PRIORITY); ALTER RESOURCE GOVERNOR RECONFIGURE;   From top to bottom: Create a classifier function to determine which pool the session should go to. More info on classifier functions. Create the pool and provide a generous helping of resources for the sa login. Create the workload group and further prioritize those resources for the sa login. Apply the classifier function and reconfigure RG to use it. I have to say this one is a bit sneakier than the logon trigger, least of all you don't get any error messages.  I heartily recommend testing it in Management Studio, and click around the UI a lot, there's some fun behavior there. And DEFINITELY try it on SQL 2014 with the IO settings included!  You'll notice I made allowances for SQL Agent jobs owned by sa, they'll go into the default workload group.  You can add your own overrides to the classifier function if needed. Some interesting ideas I didn't have time for but expect you to get to before me: Set up different pools/workgroups with different settings and randomize which one the classifier chooses Do the same but base it on time of day (Books Online example covers this)... Or, which workstation it connects from. This can be modified for certain special people in your office who either don't listen, or are attracted (and attractive) to you. And if things go wrong you can always use the following from another sysadmin or Dedicated Admin connection: ALTER RESOURCE GOVERNOR DISABLE;   That will let you go in and either fix (or drop) the pools, workgroups and classifier function. So now that you know these types of things are possible, and if you are tired of your team using sa when they shouldn't, I expect you'll enjoy playing with these quite a bit! Unfortunately, the aforementioned Dedicated Admin Connection kinda poops on the party here.  Books Online for both topics will tell you that the DAC will not fire either feature. So if you have a crafty user who does their research, they can still sneak in with sa and do their bidding without being hampered. Of course, you can still detect their login via various methods, like a server trace, SQL Server Audit, extended events, and enabling "Audit Successful Logins" on the server.  These all have their downsides: traces take resources, extended events and SQL Audit can't fire off actions, and enabling successful logins will bloat your error log very quickly.  SQL Audit is also limited unless you have Enterprise Edition, and Resource Governor is Enterprise-only.  And WORST OF ALL, these features are all available and visible through the SSMS UI, so even a doofus developer or manager could find them. Fortunately there are Event Notifications! Event notifications are becoming one of my favorite features of SQL Server (keep an eye out for more blogs from me about them). They are practically unknown and heinously underutilized.  They are also a great gateway drug to using Service Broker, another great but underutilized feature. Hopefully this will get you to start using them, or at least your enemies in the office will once they read this, and then you'll have to learn them in order to fix things. So here's the setup: USE msdb; GO CREATE PROCEDURE dbo.SA_LOGIN_PRIORITY_act WITH ENCRYPTION AS DECLARE @x XML, @message nvarchar(max); RECEIVE @x=CAST(message_body AS XML) FROM SA_LOGIN_PRIORITY_q; IF @x.value('(//LoginName)[1]','sysname')=N'sa' AND @x.value('(//ApplicationName)[1]','sysname') NOT LIKE N'SQL Agent%' BEGIN -- interesting activation procedure stuff goes here END GO CREATE QUEUE SA_LOGIN_PRIORITY_q WITH STATUS=ON, RETENTION=OFF, ACTIVATION (PROCEDURE_NAME=dbo.SA_LOGIN_PRIORITY_act, MAX_QUEUE_READERS=1, EXECUTE AS OWNER); CREATE SERVICE SA_LOGIN_PRIORITY_s ON QUEUE SA_LOGIN_PRIORITY_q([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]); CREATE EVENT NOTIFICATION SA_LOGIN_PRIORITY_en ON SERVER WITH FAN_IN FOR AUDIT_LOGIN TO SERVICE N'SA_LOGIN_PRIORITY_s', N'current database' GO   From top to bottom: Create activation procedure for event notification queue. Create queue to accept messages from event notification, and activate the procedure to process those messages when received. Create service to send messages to that queue. Create event notification on AUDIT_LOGIN events that fire the service. I placed this in msdb as it is an available system database and already has Service Broker enabled by default. You should change this to another database if you can guarantee it won't get dropped. So what to put in place for "interesting activation procedure code"?  Hmmm, so far I haven't addressed Matt's suggestion of writing a lengthy script to send an annoying message: SET @[email protected]('(//HostName)[1]','sysname') + N' tried to log in to server ' + @x.value('(//ServerName)[1]','sysname') + N' as SA at ' + @x.value('(//StartTime)[1]','sysname') + N' using the ' + @x.value('(//ApplicationName)[1]','sysname') + N' program. That''s why you''re getting this message and the attached pornography which' + N' is bloating your inbox and violating company policy, among other things. If you know' + N' this person you can go to their desk and hit them, or use the following SQL to end their session: KILL ' + @x.value('(//SPID)[1]','sysname') + N'; Hopefully they''re in the middle of a huge query that they need to finish right away.' EXEC msdb.dbo.sp_send_dbmail @recipients=N'[email protected]', @subject=N'SA Login Alert', @query_result_width=32767, @body=@message, @query=N'EXEC sp_readerrorlog;', @attach_query_result_as_file=1, @query_attachment_filename=N'UtterlyGrossPorn_SeriouslyDontOpenIt.jpg' I'm not sure I'd call that a lengthy script, but the attachment should get pretty big, and I'm sure the email admins will love storing multiple copies of it.  The nice thing is that this also fires on Dedicated Admin connections! You can even identify DAC connections from the event data returned, I leave that as an exercise for you. You can use that info to change the action taken by the activation procedure, and since it's a stored procedure, it can pretty much do anything! Except KILL the SPID, or SHUTDOWN the server directly.  I'm still working on those.

    Read the article

  • EM12c Release 4: Cloud Control to Major Tom...

    - by abulloch
    With the latest release of Enterprise Manager 12c, Release 4 (12.1.0.4) the EM development team has added new functionality to assist the EM Administrator to monitor the health of the EM infrastructure.   Taking feedback delivered from customers directly and through customer advisory boards some nice enhancements have been made to the “Manage Cloud Control” sections of the UI, commonly known in the EM community as “the MTM pages” (MTM stands for Monitor the Monitor).  This part of the EM Cloud Control UI is viewed by many as the mission control for EM Administrators. In this post we’ll highlight some of the new information that’s on display in these redesigned pages and explain how the information they present can help EM administrators identify potential bottlenecks or issues with the EM infrastructure. The first page we’ll take a look at is the newly designed Repository information page.  You can get to this from the main Setup menu, through Manage Cloud Control, then Repository.  Once this page loads you’ll see the new layout that includes 3 tabs containing more drill-down information. The Repository Tab The first tab, Repository, gives you a series of 6 panels or regions on screen that display key information that the EM Administrator needs to review from time to time to ensure that their infrastructure is in good health. Rather than go through every panel let’s call out a few and let you explore the others later yourself on your own EM site.  Firstly, we have the Repository Details panel. At a glance the EM Administrator can see the current version of the EM repository database and more critically, three important elements of information relating to availability and reliability :- Is the database in Archive Log mode ? Is the database using Flashback ? When was the last database backup taken ? In this test environment above the answers are not too worrying, however, Production environments should have at least Archivelog mode enabled, Flashback is a nice feature to enable prior to upgrades (for fast rollback) and all Production sites should have a backup.  In this case the backup information in the Control file indicates there’s been no recorded backups taken. The next region of interest to note on this page shows key information around the Repository configuration, specifically, the initialisation parameters (from the spfile). If you’re storing your EM Repository in a Cluster Database you can view the parameters on each individual instance using the Instance Name drop-down selector in the top right of the region. Additionally, you’ll note there is now a check performed on the active configuration to ensure that you’re using, at the very least, Oracle minimum recommended values.  Should the values in your EM Repository not meet these requirements it will be flagged in this table with a red X for non-compliance.  You can of-course change these values within EM by selecting the Database target and modifying the parameters in the spfile (and optionally, the run-time values if the parameter allows dynamic changes). The last region to call out on this page before moving on is the new look Repository Scheduler Job Status region. This region is an update of a similar region seen on previous releases of the MTM pages in Cloud Control but there’s some important new functionality that’s been added that customers have requested. First-up - Restarting Repository Jobs.  As you can see from the graphic, you can now optionally select a job (by selecting the row in the UI table element) and click on the Restart Job button to take care of any jobs which have stopped or stalled for any reason.  Previously this needed to be done at the command line using EMDIAG or through a PL/SQL package invocation.  You can now take care of this directly from within the UI. Next, you’ll see that a feature has been added to allow the EM administrator to customise the run-time for some of the background jobs that run in the Repository.  We heard from some customers that ensuring these jobs don’t clash with Production backups, etc is a key requirement.  This new functionality allows you to select the pencil icon to edit the schedule time for these more resource intensive background jobs and modify the schedule to avoid clashes like this. Moving onto the next tab, let’s select the Metrics tab. The Metrics Tab There’s some big changes here, this page contains new information regions that help the Administrator understand the direct impact the in-bound metric flows are having on the EM Repository.  Many customers have provided feedback that they are in the dark about the impact of adding new targets or large numbers of new hosts or new target types into EM and the impact this has on the Repository.  This page helps the EM Administrator get to grips with this.  Let’s take a quick look at two regions on this page. First-up there’s a bubble chart showing a comprehensive view of the top resource consumers of metric data, over the last 30 days, charted as the number of rows loaded against the number of collections for the metric.  The size of the bubble indicates a relative volume.  You can see from this example above that a quick glance shows that Host metrics are the largest inbound flow into the repository when measured by number of rows.  Closely following behind this though are a large number of collections for Oracle Weblogic Server and Application Deployment.  Taken together the Host Collections is around 0.7Mb of data.  The total information collection for Weblogic Server and Application Deployments is 0.38Mb and 0.37Mb respectively. If you want to get this information breakdown on the volume of data collected simply hover over the bubble in the chart and you’ll get a floating tooltip showing the information. Clicking on any bubble in the chart takes you one level deeper into a drill-down of the Metric collection. Doing this reveals the individual metric elements for these target types and again shows a representation of the relative cost - in terms of Number of Rows, Number of Collections and Storage cost of data for each Metric type. Looking at another panel on this page we can see a different view on this data. This view shows a view of the Top N metrics (the drop down allows you to select 10, 15 or 20) and sort them by volume of data.  In the case above we can see the largest metric collection (by volume) in this case (over the last 30 days) is the information about OS Registered Software on a Host target. Taken together, these two regions provide a powerful tool for the EM Administrator to understand the potential impact of any new targets that have been discovered and promoted into management by EM12c.  It’s a great tool for identifying the cause of a sudden increase in Repository storage consumption or Redo log and Archive log generation. Using the information on this page EM Administrators can take action to mitigate any load impact by deploying monitoring templates to the targets causing most load if appropriate.   The last tab we’ll look at on this page is the Schema tab. The Schema Tab Selecting this tab brings up a window onto the SYSMAN schema with a focus on Space usage in the EM Repository.  Understanding what tablespaces are growing, at what rate, is essential information for the EM Administrator to stay on top of managing space allocations for the EM Repository so that it works as efficiently as possible and performs well for the users.  Not least because ensuring storage is managed well ensures continued availability of EM for monitoring purposes. The first region to highlight here shows the trend of space usage for the tablespaces in the EM Repository over time.  You can see the upward trend here showing that storage in the EM Repository is being consumed on an upward trend over the last few days here. This is normal as this EM being used here is brand new with Agents being added daily to bring targets into monitoring.  If your Enterprise Manager configuration has reached a steady state over a period of time where the number of new inbound targets is relatively small, the metric collection settings are fairly uniform and standardised (using Templates and Template Collections) you’re likely to see a trend of space allocation that plateau’s. The table below the trend chart shows the Top 20 Tables/Indexes sorted descending by order of space consumed.  You can switch the trend view chart and corresponding detail table by choosing a different tablespace in the EM Repository using the drop-down picker on the top right of this region. The last region to highlight on this page is the region showing information about the Purge policies in effect in the EM Repository. This information is useful to illustrate to EM Administrators the default purge policies in effect for the different categories of information available in the EM Repository.  Of course, it’s also been a long requested feature to have the ability to modify these default retention periods.  You can also do this using this screen.  As there are interdependencies between some data elements you can’t modify retention policies on a feature by feature basis.  Instead, retention policies take categories of information and bundles them together in Groups.  Retention policies are modified at the Group Level.  Understanding the impact of this really deserves a blog post all on it’s own as modifying these can have a significant impact on both the EM Repository’s storage footprint and it’s performance.  For now, we’re just highlighting the features visibility on these new pages. As a user of EM12c we hope the new features you see here address some of the feedback that’s been given on these pages over the past few releases.  We’ll look out for any comments or feedback you have on these pages ! 

    Read the article

  • Complex type support in process flow &ndash; XMLTYPE

    - by shawn
        Before OWB 11.2 release, there are only 5 simple data types supported in process flow: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type – XMLTYPE is added in 11.2, in order to support complex data being passed between the process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.     Suppose there is a bookstore that uses XML format orders as shown below (we use the simplest form for the illustration purpose), then we can create a process flow to handle the order, take the order as the input, then extract necessary information, and generate a confirmation email to the customer automatically. <order id=’0001’>     <customer>         <name>Tom</name>         <email>[email protected]</email>     </customer>     <book id=’Java_001’>         <quantity>3</quantity>     </book> </order>     Considering a simple user case here: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we can use an Assign activity to retrieve the email info from the order; after that, we can create an email activity to send the email (Other activities might be added in practical case, but will not be described here). 1) Set XML content value     For testing purpose, we will create a variable to hold the sample order, and then this will be used among the process flow activities. When the variable is of XMLTYPE and the “Literal” value is set the true, the advance editor will be enabled.     Click the “Advance Editor” shown as above, a simple xml editor will popup. The editor has basic features like syntax highlight and check as shown below:     We can also do the basic validation or validation against schema with the editor by selecting the normalized schema. With this, it will be easier to provide the value for XMLTYPE variables. 2) Extract information from XML content     After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable’s literal value is set the false, the advance editor is enabled.     Click the button, the advance editor will popup, as shown below:     The editor is based on the expression builder (which is often used in mapping etc), an XPath lib panel is appended which provides some help information on how to write the XPath. The expression used here is: “XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/email/text()').getStringVal()”, which uses ‘/order/customer/email/text()’ as the XPath to extract the email info from the XML document.     A variable called “EMAIL_ADDR” is created with String data type to hold the value extracted.     Then we bind the “VARIABLE” parameter of Assign activity to “EMAIL_ADDR” variable, which means the value of the “EMAIL_ADDR” activity will be set to the result of the “VALUE” parameter of Assign activity. 3) Use the extracted information in Email activity     We bind the “TO_ADDRESS” parameter of the email activity to the “EMAIL_ADDR” variable created in above step.     We can also extract other information from the xml order directly through the expression, for example, we can set the “MESSAGE_BODY” with value “'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()||chr(13)||chr(10)||'   You have ordered '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()”. This expression will extract the customer name, the quantity and the book id from the order to compose the message body.     To make the email activity work, we need provide some other necessary information, Such as “SMTP_SERVER” (which is the SMTP server used to send the emails, like “mail.bookstore.com”. The default PORT number is set to 25. You need to change the value accordingly), “FROM_ADDRESS” and “SUBJECT”. Then the process flow is ready to go.     After deploying the process flow package, we can simply run the process flow to check if the result is as expected (An email will be sent to the specified email address with proper subject and message body).     Note: In oracle 11g, there is an enhanced security feature - ACL (Access Control List), which restrict the network access within db, so we need to edit the list to allow UTL_SMTP work if you are using oracle 11g. Refer to chapter “Access Control Lists for UTL_TCP/HTTP/SMTP” and “Managing Fine-Grained Access to External Network Services” for more details.       In previous releases, XMLTYPE already exists in other OWB objects, like mapping/transformation etc. When the mapping/transformation is dragged into a process flow, the parameters with XMLTYPE are mapped to STRING. Now with the XMLTYPE support in process flow, the XMLTYPE will map to XMLTYPE in a more natural way, and we can leverage the new data type for the design.

    Read the article

  • Load-balancing between a Procurve switch and a server

    - by vlad
    Hello I've been searching around the web for this problem i've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work? My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology, there are four switches connected to it via a single gigabit link. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows: sw1 sw2 sw3 sw4 | | | | --------------------- | sw0 | --------------------- || | | srv1 srv2 srv3 The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the lacp protocol, and on the switch I set up a trunk for the two ports using lacp as well. The switch showed that the server was a lacp partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput. There was only one link used between srv1 and sw0. All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios: iperf -s on srv1 and iperf -c on the other servers iperf -s on the other servers and iperf -c on srv1 connected to all the other servers. Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link. In one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput. srv1 is a dell poweredge sc1425 with onboard intel 82541GI nics (em driver on freebsd). After troubleshooting a previous problem with vlan tagging on top of a lagg interface, it turned out that the em could not support this. So I figured that maybe something else is wrong with the em drivers and / or lagg stack, so I started up backtrack 4r2 on this same server. So srv1 now uses linux kernel 2.6.35.8. I set up a bonding interface bond0. The kernel module was loaded with option mode=4 in order to get lacp. The switch was happy with the link, I could ping to and from the server. I could even put vlans on top of the bonding interface. However, only half the problem was solved: if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two nics; if I run the iperf server on srv1 and tried to connect with the other servers, there was no load balancing. I thought that maybe I was out of luck and the hashes for the two mac addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and reenabling one of the links, and all that happened was the traffic switched from one link to the other and back to the first again. I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb) but I never saw any traffic distribution. One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4 channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used. The cisco port-channel uses src-mac to balance the load. The HP should use both the source and destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?

    Read the article

  • Unable to SSH into EC2 instance on Fedora 17

    - by abhishek
    I did following steps But I am not able to SSH to it(Same steps work fine on Fedora 14 image). I am getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic) I created new instance using fedora 17 amazon community image(ami-2ea50247). I copied my ssh keys under /home/usertest/.ssh/ after creating a usertest I have SELINUX=disabled here is Debug info: $ ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com ssh -vvv ec2-54-243-101-41.compute-1.amazonaws.com OpenSSH_5.2p1, OpenSSL 1.0.0b-fips 16 Nov 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ec2-54-243-101-41.compute-1.amazonaws.com [54.243.101.41] port 22. debug1: Connection established. debug1: identity file /home/usertest/.ssh/identity type -1 debug1: identity file /home/usertest/.ssh/id_rsa type -1 debug3: Not a RSA1 key file /home/usertest/.ssh/id_dsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'Proc-Type:' debug3: key_read: missing keytype debug2: key_type_from_name: unknown key type 'DEK-Info:' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/usertest/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9 debug1: match: OpenSSH_5.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 131/256 debug2: bits set: 506/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug3: check_host_in_hostfile: filename /home/usertest/.ssh/known_hosts debug3: check_host_in_hostfile: match line 17 debug1: Host 'ec2-54-243-101-41.compute-1.amazonaws.com' is known and matches the RSA host key. debug1: Found key in /home/usertest/.ssh/known_hosts:17 debug2: bits set: 500/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/usertest/.ssh/identity ((nil)) debug2: key: /home/usertest/.ssh/id_rsa ((nil)) debug2: key: /home/usertest/.ssh/id_dsa (0x7f904b5ae260) debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug3: Trying to reverse map address 54.243.101.41. debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information Credentials cache file '/tmp/krb5cc_500' not found debug1: Unspecified GSS failure. Minor code may provide more information debug2: we did not send a packet, disable method debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /home/usertest/.ssh/identity debug3: no such identity: /home/usertest/.ssh/identity debug1: Trying private key: /home/usertest/.ssh/id_rsa debug3: no such identity: /home/usertest/.ssh/id_rsa debug1: Offering public key: /home/usertest/.ssh/id_dsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic debug2: we did not send a packet, disable method debug1: No more authentication methods to try. Permission denied (publickey,gssapi-keyex,gssapi-with-mic).

    Read the article

  • CodePlex Daily Summary for Wednesday, June 16, 2010

    CodePlex Daily Summary for Wednesday, June 16, 2010New ProjectsAtomFeedBuilder: Simple and lightweight Atom feed builder. Developed in VB.Net.Cable and Wire harness tester: If you build lots of cable/wire harness' you know that testing them is a pain. I have wanted an automated cable tester for a while now but commerci...Carmenta Engine Power Pack: The target of Carmenta Engine Power Pack is to provide extensions, utilities and wrapper classes that allows developers to work more efficiently w...Customer Book: Customer Book, its like address book with facility for generating quotation for a business or a supplier to the clients.Dialector: Using this program, you can convert pure Turkish texts into different dialects; such as: Emmi, Kufurbaz, Kusdili, Laz, Peltek, Tiki, and many more....Downline Commision Generator: Analyze the compensations plan of the organizations in multi-level marketing or network marketing. Check with this tool the commision plan of the c...EmbeddedSpark 2010 Project M: Project M is a system for seamlessly interfacing a tabletop interface to portable devices placed upon it. Using image recognition and projectors, P...Event Log Creator by eVestment Alliance: Provides a simple utility to create a new source and log in the Windows event log. The utility checks if the current user is an administrator, and...ExchangeHog: Desktop/daemon application that aggregates emails from multiple pop3-accounts into single Microsoft Exchange 2010 account. For users receiving ema...Extra Time Calculator: Extra Time Calculator allows exam end times to be easily calculated for students receiving an extra time accommodation.Generic WCF Hosting Service: The Generic Host Service provides a simple, reusable, and reliable mechanism for hosting WCF services. Google Storage for .NET: Google Storage for .NET (GSN) is an open source library that provides .NET developers with easy access to the Google Storage API. The library allo...Helium: The Helium XNA game engine is a light portable game engine designed to work on many platforms and soon to be expanded on more. Currently the helium...IconizedButton Control Set: ASP.NET WebForms IconizedButton Custom Control Set. Replaces the dull Button/LinkButton/HyperLink controls with styling and left and right aligned...Jedi Council PM List: Allows for users to process Private Message Lists on the Jedi Council forums for TheForce.Net.JetPumpDesign: 本软件为蒸气喷射泵设计计算软件 作者:申阳 单位:西安交通大学过程装备与控制工程61班log4Nez: An high personalized implementation of a logging libraryMutantFramework: Provides a common set of building blocks for building enterprise applicationsNUnit Add-in for Growl Notifications: NUnit add-in which allows to send notifications to Growl when test run is started or finished, when a first test failure occurs and so on.Object Reports: Object Reports is a "proof of concept" application which provides users the ability to visualy build queries based on data stored in the relational...openTrionyx: openTrionyx is a set of tools to make easier web application development. Includes Data, Web and plain text documents tools. Developed in C#, compl...Partial Rendering control for MVC 2: This project shows a web custom control that allow to have partial rendering using async post-back (through JQuery) in a MVC 2 web application.PowerGUI Visual Studio Extension: The PowerGUI Visual Studio Extension exposes PowerGUI as an editor in Visual Studio. PowerShell developers can now write scripts directly in Visual...PowerShell Script Provider: Write your own PowerShell provider using only script, no C# required. Module definition is provided by a Windows PowerShell 2.0 Module, which may b...Scholar: Scholar is a solution/framework for .Net developers to help with the creation of distributed data processing (think SETI@home style apps). It is in...scrabb: Scrabb help people play scrabble over net.SharePointNuke: A DotNetNuke module that connects to a SharePoint server using web services API and displays the content of a specified list. SolidWorksBackConverter: a Project to Convert a solidwork file to an older version Soma - Sql Oriented MApping framework: Sql Oriented MApping framework.SPCreate: SPCreate auto store procedure creator. It's developed in c#. SpCreate as output ADO.NET Class (C# or VB.Net) and SQL Server or MS Access Store pro...std::streambuf wrapper for COM IStream: This provides a subclass of std::streambuf that wraps a COM IStream, so you can use an IStream with any C++ code that uses iostreams or the STL alg...VACID solutions: Solutions of verification problems posed in paper "Verification of Ample Correctness of Invariants of Data-structures". Developed with various tool...Viewer: Our Goal is to create a C# project that will centeralize Image and Movie Viewing in a forms application, It will also have a Specialized Webbrowser...vsXPathTester: vsXPathTester is a utility for Developer. This help them load XML file and the run their XPath Query. The Resultant is shown in window. It save the...New Releases.Net Max Framework: Version 1.0.0: Version 1.0.0 - EstableAndrew's XNA Helpers: V1.2: Features upgraded features based off of the V1.1 code for both X86 and XBOX Additions/Changes Reworked the Texture2D and Rectangle extender namesp...BaseCalendar: BaseControls 1.2: BaseControls 1.2 contains the BaseCalendar ASP.NET control. Changes: 1.2 Exposed EffectiveVisibleDate and FirstVisibleDay methods 1.1 Rendering ...Customer Book: Customer Book Code: Bronze Release PostgreSQL database dump for Customer Book. Open PgAdmin III and restore the database dump into your server. Notice User Name for t...Data Connection Suite: Data Connections Suite v1.0.0.0: This is the first release of this incomplete component, but good enought to use in a production environment (it's what we do).DigitArchive: Build 8: Now the software works on .NET 3.5 and above. So if you have Windows 7 it installs without any pre-requisites. Changes: -Works on .NET 3.5 -Now t...Doom 64 Ex (SVN Builds): Doom 64 Ex r-738: Finally a new build after so many months. There are way to many updates to even begin to write about here just download and frag away. There is a s...DotNetNuke® Media: 03.03.00a: This release is Beta!! There is no guaranteed upgrade path to the 03.03.00 release version! Please use this to help us and test what we have. Repor...Downline Commision Generator: Downline Commision Generator: Downline Commision GeneratorElmah2 : An extensable error logger for ASP.net: 1.0 Beta 1: This is a beta release be sure to report any errors etc. Be sure to check out the documentation tab on information on how to install and configure...EPiServer Template Foundation: First compiled release: First compiled release for experimenting only! :) An introductory post will be published shortly on the blog.Helium: Initial Release: This is the initial release of the Helium Engine. Please check out the documentation link for information on how to use the engine. To see a ful...IconizedButton Control Set: IconizedButton Control Set: Taking a line from Google's play book - marking everything as Beta. Seriously, I'd like to hear some feedback before moving the Development Status...JetPumpDesign: JetPumpDesign 1.0: 当前的软件可以设计5级以内的蒸汽喷射泵。Microsoft Silverlight Analytics Framework: Version 1.4.4 Installer: Tools TargetingVisual Studio 2010 Expression Blend 4 (part of Expression Studio 4) Analytics Services Included Vendor Behavior Silverlight 3...NHibernate Sidekick Library: 0.7.0: Added a few methods for use with the NHibernate 2nd level cache (EvictAllObjectsFromCache and EvictPersistentClass). I also added the boolean optio...NHibernate Sidekick Library: 0.7.5: Fix for http://nhprof.com/Learn/Alerts/DoNotUseImplicitTransactionsNito.KitchenSink: Version 9: Dependencies Nito.Linq 0.6 Beta (released 2010-06-14) Rx 1.0.2563.0 (released 2010-06-09) Supported Platforms .NET 4.0 Client Profile, with Rx. ...NQueue: Version 1.0.0.0: Version 1.0.0.0NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 0: The very first stable releasePartial Rendering control for MVC 2: Partial Rendering control for MVC 2: Here there is the source code and a MVC 2 web site as testPowerShell Script Provider: PSProvider 0.1: Requires PowerShell 2.0 RTM The functions in the attached ps1 script are the bare minimum for a working container-style provider (no subfolders.) ...Quick Performance Monitor: Version 1.4.3: Fixed issue where if an instance name contains backslash characters (\) the program would not load the performance counter properly. Also added sta...SharePointNuke: SharePointNuke 2.00.08: SharePointNuke 2.00.08 - Binary DotNetNuke 5.x module.Skype Voice Changer: 1.0 Updated Sample Code: This updated release is the accompanying code for the Skype Voice Changer article on Coding4Fun. Changes in this release: Added support for PreEmp...std::streambuf wrapper for COM IStream: Beta release (tested in a commercial project): This code has been tested in a custom Windows Search filter and property handler I wrote for a proprietary binary format. There may be some bugs, b...Sunlit World Scheme: Sunlit World Scheme - 20100615 - source and binary: This is the result of building the current source code in Debug mode. The source code is included. The binaries are in the SchemeCode folder along...Timo-Design / 40FINGERS DotNetNuke® Skinning Extensions: Style Helper Skin Object Beta: The 40FINGERS Style Helper Skin object allows you to add CSS and Javascript links and meta tags to the head of your page. It can also remove CSS l...Umbraco CMS: Umbraco 4.1 RC: This is the final test version of Umbraco 4.1 before the final release. PLEASE BE AWARE THAT UMBRACO 4.1 RC IS A .NET 4.0 RELEASE AND WON'T WORK O...VCC: Latest build, v2.1.30615.0: Automatic drop of latest buildWCF 4 Templates for Visual Studio 2010: UserNameForCertificate Template: Produces a WCF service application supporting username and password authentication, relying on message security to protect messages en route. Suppl...WCF 4 Templates for Visual Studio 2010: UserNameOverHttps Template: Produces a WCF service application supporting username and password authentication over HTTPS/SSL, relying on transport security to protect message...xUnit.net Contrib: xunitcontrib 0.4.1 alpha (ReSharper 5.1.1709 only): xunitcontrib release 0.4.1 (ReSharper runner) This release targets the current nightly build of ReSharper 5.1's Early Access Programme (build 1709)...Most Popular ProjectsCommunity Forums NNTP bridgeRIA Services EssentialsNeatUploadBxf (Basic XAML Framework).NET Transactional File ManagerSOLID by exampleSSIS Expression Editor & TesterWEI ShareChirpy - VS Add In For Handling Js, Css, and DotLess FilesASP.NET MVC Time PlannerMost Active ProjectsdotSpatialRhyduino - Arduino and Managed CodeCassandraemonpatterns & practices – Enterprise LibraryCommunity Forums NNTP bridgeLightweight Fluent Workflowpatterns & practices: Enterprise Library ContribNB_Store - Free DotNetNuke Ecommerce Catalog ModuleBlogEngine.NETjQuery Library for SharePoint Web Services

    Read the article

  • CodePlex Daily Summary for Saturday, October 22, 2011

    CodePlex Daily Summary for Saturday, October 22, 2011Popular ReleasesWatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.12.17: Changes Added FilePath Length Check when Uploading Files. Fixed Issue #6550 Fixed Issue #6536 Fixed Issue #6525 Fixed Issue #6500 Fixed Issue #6401 Fixed Issue #6490DotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVC and MVC components as well as a "Metro" and "Battleship" style.Manejo de tags - PHP sobre apache: tagqrphp: Primera version: Para que funcione el programa se debe primero obtener un id para desarrollo del tag eso lo entrega Microsoft registrandose en el sitio. http://tag.microsoft.com - En tagm.php que llama a la libreria Microsoft Tag PHP Library (Codigo que sirve para trabajar con PHP y Tag) - Llenamos los datos del formulario y ejecutamos para obtener el codigo tag de microsoft el cual apunte a la url que le indicamos en el formulario - Libreria MStag.php (tiene mucha explicación del funciona...GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: GridLibre Para Visual FoxPro: esta herramienta ayudara a los usuarios y programadores en los manejos de los datos, como Filtrar, multiseleccion y el autoformato a las columnas como la asignacion del controlsource.Self-Tracking Entity Generator for WPF and Silverlight: Self-Tracking Entity Generator v 0.9.9: Self-Tracking Entity Generator v 0.9.9 for Entity Framework 4.0Umbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 Alpha 3Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like to get a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3?This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...Vkontakte WP: Vkontakte: source codeWay2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed issue with sending messages due to changes to Way2Sms site 2. Updated the character limit to 160 from 140GART - Geo Augmented Reality Toolkit: 1.0.1: About Release 1.0.1 Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking Change Please note: There is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Move the JS1284 error (Misplaced Function Declaration) so it only fires when in strict mode. I got a couple complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.Naked Objects: Naked Objects Release 4.0.110.0: Corresponds to the packaged version 4.0.110.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consul the file HowToBuild.txt in the release. Documentation Please note that after ...myCollections: Version 1.5: New in this version : Added edit type for selected elements Added clean for selected elements Added Amazon Italia Added Amazon China Added TVDB Italia Added TVDB China Added Turkish language You can now manually add artist Added Order by Rating Improved Add by Media Improved Artist Detail Upgrade Sqlite engine View, Zoom, Grouping, Filter are now saved by category Added group by Artist Added CubeCover View BugFixingFacebook C# SDK: 5.3: This is a BETA release which adds new features and bug fixes to v5.2.1. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ added support for early preview for .NET 4.5 added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added new CS-WinForms-AsyncAwait.sln sample demonstrating the use of async/await, upload progress report using IProgress<T> and cancellation support Query/QueryAsync methods uses graph api instead...IronPython: 2.7.1 RC: This is the first release candidate of IronPython 2.7.1. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. If there are no showstopping issues, this will be the only release candidate for 2.7.1, so please speak up if you run into any roadblocks. The highlights of 2.7.1 are: Updated the standard library to match CPython 2.7.2. Add the ast, csv, and unicodedata modules. Fixed several bugs. IronPython To...Rawr: Rawr 4.2.6: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...Home Access Plus+: v7.5: Change Log: New Booking System (out of Beta) New Help Desk (out of Beta) New My Files (Developer Preview) Token now saved into Cookie so the system doesn't TIMEOUT as much File Changes: ~/bin/hap.ad.dll ~/bin/hap.web.dll ~/bin/hap.data.dll ~/bin/hap.web.configuration.dll ~/bookingsystem/admin/default.aspx ~/bookingsystem/default.aspx REMOVED ~/bookingsystem/bookingpopup.ascx REMOVED ~/bookingsystem/daylist.ascx REMOVED ~/bookingsystem/new.aspx ~/helpdesk/default.aspx ...Visual Micro - Arduino for Visual Studio: Arduino for Visual Studio 2008 and 2010: Arduino for Visual Studio 2010 has been extended to support Visual Studio 2008. The same functionality and configuration exists between the two versions. The 2010 addin runs .NET4 and the 2008 addin runs .NET3.5, both are installed using a single msi and both share the same configuration settings. The only known issue in 2008 is that the button and menu icons are missing. Please logon to the visual micro forum and let us know if things are working or not. Read more about this Visual Studio ...New Projects#foo Core: Core functionality extensions of .NET used by all HashFoo projects.#foo Nhib: #foo NHibernate extensions.Aagust G: Hello all ! Its a free JQuery Image Slider....ACP Log Analyzer: ACP Log Analyzer provides a quick and easy mechanism for generating a report on your ACP-based astronomical observing activities. Developed in Microsoft Visual Studio 2010 using C#, the .NET Framework version 4 and WPF.BlobShare Sample: TBDCompletedWorkflowCleanUp: This tool once executed on a list delete all completed workflow instancesCRM 2011 Visual Ribbon Editor: Visual Ribbon Editor is a tool for Microsoft Dynamics CRM 2011 that lets you edit CRM ribbons. This ribbon editor shows a preview of the CRM ribbon as you are editing it and allows you to add ribbon buttons and groups without needing to fully understand the ribbon XML schema.GearMerge: Organizes Movies and TV Series files from one Hard Drive to another. I created it for myself to update external drives with movies and TV shows from my collection.Generic Object Storage Helper for WinRT: ObjectStorageHelper<T> is a Generic class that simplifies storage of data in WinRT applications.Government Sanctioned Espionage RPG: Government Sanctioned is a modern SRD-like espionage game server. Visit http://wiki.government-sanctioned.us:8040 for game design and play information or homepage http://www.government-sanctioned.us Government Sanctioned is an online, text-based espionage RPG (similar to a MUD/MOO) that takes place against the backdrop of a highly-secretive U.S. Government agency whose stated goals don't always match the dirty work the agents tend to find themselves in. - over 15 starting profession...GridLibre para Visual FoxPro: GridLibre Para Visual FoxPro: esta herramienta ayudara a los usuarios y programadores en los manejos de los datos, como Filtrar, multiseleccion y el autoformato a las columnas como la asignacion del controlsource.HTML5 Video Web Part for SharePoint 2010: A web part for SharePoint 2010 that enable users playing video into the page using the Ribbon bar.Jogo do Galo: JOGO DO GALO REGRAS •O tabuleiro é a matriz de três linhas em três colunas. •Dois jogadores escolhem três peças cada um. •Os jogadores jogam alternadamente, uma peça de cada vez, num espaço que esteja vazio. •O objectivo é conseguir três peças iguais em linha, quer horizontal, vKarol sie uczy silverlajta: on sie naprawde tego uczy!Manejo de tags - PHP sobre apache: Hago uso de la libreria Microsoft Tag PHP Library para que pueda funcionar la aplicación sobre Apache finalmente puede crear tag de micrsosoft desde el formulario creado. Modem based SMS Gateway: It is an easy to use modem based SMS server that provide easier solutions for SMS marketing or SMS based services. It is highly programmable and the easy to use API interface makes SMS integration very easy. Embedded SMS processor provides customized solution to many of your needs even without building any custom software.Mund Publishing Feture: Mund Publishing FeatureMyTFSTest: TestNHS HL7 CDA Document Developer: A project to demonstrate how templated HL7 CDA documents can be created by using a common API. The API is designed to be used in .NET applications. A number of examples are provided using C#OpenShell: OpenShell is an open source implementation of the Windows PowerShell engine. It is to make integrating PowerShell into standalone console programs simple.Powershell Script to Copy Lists in a Site Collection in MOSS 2007 and SPS 2010: Hi, This is a powershell script file that copies a list within the same site collection. This works in Sharepoint 2007 and Sharepoint 2010 as well. THis will flash the messages before taking the input values. This will in this way provide the clear ideas about the values to beSharePoint Desktop: SharePoint Desktop is a explorer-like webpart that makes it possible to drag and drop items (documents and folders), copy and paste items and explore all SharePoint document libraries in 1 place.SQL floating point compare function: Comparison of floating point values in SQL Server not always gives the expected result. With this function, comparison is only done on the first 15 significant digits. Since SQL Server only garantees a precision of 15 digits for float datatypes, this is expected to be secure.Stock Analyzer: It is a stock management software. It's main job is to store market realtime data on a database to be able to analyse it latter and create automatic systems that operate directly on the stock exchange market. It will have different modules to do this task: - Realtime data capture. - Realtime data analysis - Historic analysis. - Order execution. - Strategy test. - Strategy execution. It's developed in C# and with WPF.VB_Skype: VB_Skype utilizza la libreria Skype4COM per integrare i servizi Skype con un'applicazione Visual Basic (Windows Forms). L'applicazione comprende un progetto di controllo personalizzato che costituisce il "wrapper" verso la libreria Skype4COM e un progetto con una demo di utilizzo. Progetto che dovrebbe essere utilizzato nella mia sessione, come uno degli speaker della conferenza "WPC 2011" che si terrà ad Assago (MI) nei giorni 22-23-24 Novembre 2011. La mia sessione è in agenda per il 24...Word Template Generator: Custom Template Generator for Microsoft Word, developed in C#?????OA??: ?????OA??

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
    This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed. Downloading and Installing TeamCity TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it a great no-risk tool you can use to experiment with. 1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly. 2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease. 3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, if for example you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect. 4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup. Putting your database in source control using SQL Source Control 5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control. 6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which just creates a local subversion repository for you behind the scenes. 7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit. 8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to', in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes. Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes. 9. In TeamCity Adminstration, select Create Project and give it a name, such as "My first database CI", and click Create. 10. Click on Create Build Configuration, and name it something like "Integration build". 11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor. 12. In my case since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion. 13. In the URL field paste your repository location. In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment 14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save. 15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the same of keeping it simple I will pick the simplest option. 16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe 17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing. 18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2 /scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager otherwise the SQL Compare command line will not be able to connect. 19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save. 20. Now return to SQL Server Management Studio and make a schema change (eg add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being good, within 60 seconds (a TeamCity default that can be changed) a build will be triggered. 21. Click on Projects in TeamCity to get back to the overview screen: The build log will show you the console output, which is useful for troubleshooting any issues: That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below. Technorati Tags: SQL Server

    Read the article

  • if I put accept all 0.0.0.0/0 means this server is totally open for any ip ?

    - by davyzhang
    ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 is this means allow all ip from all port? but I still can not visit the server except I go through the allowed ip address and if I put this line in any line, did I make this server totally open for any connection? the full iptable list is below Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- 116.211.25.89 0.0.0.0/0 ACCEPT all -- 222.215.136.8 0.0.0.0/0 ACCEPT all -- 125.82.87.21 0.0.0.0/0 ACCEPT all -- 127.0.0.1 127.0.0.1 ACCEPT tcp -- 61.172.251.109 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.254.123 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.129.44.191 0.0.0.0/0 ACCEPT tcp -- 61.129.44.128 0.0.0.0/0 ACCEPT tcp -- 61.172.251.109 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.254.123 0.0.0.0/0 tcp spt:8080 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 0 ACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmp type 8 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:53 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:53 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123 ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:123 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:20 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:21 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:80 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:88 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:8000 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:8888 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:873 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:6969 ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:6900 ACCEPT tcp -- 61.172.241.98 0.0.0.0/0 ACCEPT tcp -- 61.172.247.98 0.0.0.0/0 ACCEPT tcp -- 61.172.247.100 0.0.0.0/0 ACCEPT tcp -- 61.152.122.33 0.0.0.0/0 ACCEPT tcp -- 61.152.110.130 0.0.0.0/0 ACCEPT tcp -- 210.51.28.220 0.0.0.0/0 ACCEPT tcp -- 210.51.28.120 0.0.0.0/0 ACCEPT tcp -- 61.172.241.120 0.0.0.0/0 ACCEPT tcp -- 211.147.0.85 0.0.0.0/0 ACCEPT tcp -- 211.147.0.114 0.0.0.0/0 ACCEPT tcp -- 222.73.61.249 0.0.0.0/0 ACCEPT tcp -- 222.73.61.250 0.0.0.0/0 ACCEPT tcp -- 222.73.61.251 0.0.0.0/0 ACCEPT tcp -- 210.51.31.11 0.0.0.0/0 tcp dpt:38422 ACCEPT tcp -- 210.51.31.12 0.0.0.0/0 tcp dpt:38422 ACCEPT tcp -- 61.172.254.123 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.251.109 0.0.0.0/0 tcp spt:8080 ACCEPT tcp -- 61.172.247.85 0.0.0.0/0 ACCEPT tcp -- 222.73.12.248 0.0.0.0/0 ACCEPT tcp -- 61.172.254.184 0.0.0.0/0 ACCEPT tcp -- 61.172.254.78 0.0.0.0/0 ACCEPT tcp -- 61.172.254.243 0.0.0.0/0 ACCEPT tcp -- 61.152.97.115 0.0.0.0/0 ACCEPT tcp -- 221.231.128.206 0.0.0.0/0 ACCEPT tcp -- 221.231.130.199 0.0.0.0/0 ACCEPT udp -- 172.0.0.0/8 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 10.0.0.0/8 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 192.168.0.0/16 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 61.172.252.58 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 61.183.13.201 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.2.11 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 221.208.157.158 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 218.30.74.250 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 202.102.54.234 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 125.64.2.115 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.23.23 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 210.51.33.97 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 210.51.33.98 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.11.112 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.11.111 0.0.0.0/0 udp dpt:161 ACCEPT udp -- 222.73.11.89 0.0.0.0/0 udp spt:38514 DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:38423 REJECT tcp -- 0.0.0.0/0 0.0.0.0/0 reject-with tcp-reset DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT) target prot opt source destination DROP all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT udp -- 0.0.0.0/0 222.73.11.89 udp dpt:38514

    Read the article

  • SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

    - by pinaldave
    Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on wikipedia. He is great and indeed fastest man on the planet. Usain Bolt – World’s Fastest Man “Can you teach me SQL Server Performance Tuning?” This is one of the most popular questions which I receive all the time. The answer is YES. I would love to do performance tuning training for anyone, anywhere.  It is my favorite thing to do, and it is my favorite thing to train others in.  If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year.  To me, it doesn’t feel like a job. Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place t one time.  It is also very difficult to train more than one person at a time, and it is difficult to train two or more people at a time, especially when the two people are at different levels.  I am also limited by geography.  I live in India, and adjust to my own time zone.  Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult.  If I am trying to teach at 2 am, I am sure I am not at my best! There was only one solution to scale – Online Trainings. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7 and 365. You can make me say the same things again and again till you find it right. I am in your mobile, PC as well as on XBOX. This is why I am such a big fan of online courses.  I have recorded many performance tuning classes and you can easily access them online, at your own time.  And don’t think that just because these aren’t live classes you won’t be able to get any feedback from me.  I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me. Here are details of three of my courses with Pluralsight. I suggest you go over the description of the course. As an author of the course, I have few FREE codes for watching the free courses. Please leave a comment with your valid email address, I will send a few of them to random winners. SQL Server Performance: Introduction to Query Tuning  SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server. In this almost four hour course we cover following important concepts. Introduction 10:22 Execution Plan Basics 45:59 Essential Indexing Techniques 20:19 Query Design for Performance 50:16 Performance Tuning Tools 01:15:14 Tips and Tricks 25:53 Checklist: Performance Tuning 07:13 The duration of each module is mentioned besides the name of the module. SQL Server Performance: Indexing Basics This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes. In this almost two hour course we cover following important concepts. Introduction 02:03 Fundamentals of Indexing 22:21 Practical Indexing Implementation Techniques 37:25 Index Maintenance 16:33 Introduction to ColumnstoreIndex 08:06 Indexing Practical Performance Tips and Tricks 24:56 Checklist : Index and Performance 07:29 The duration of each module is mentioned besides the name of the module. SQL Server Questions and Answers This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. In this almost 2 hours and 15 minutes course we cover following important concepts. Introduction 00:54 Retrieving IDENTITY value using @@IDENTITY 08:38 Concepts Related to Identity Values 04:15 Difference between WHERE and HAVING 05:52 Order in WHERE clause 07:29 Concepts Around Temporary Tables and Table Variables 09:03 Are stored procedures pre-compiled? 05:09 UNIQUE INDEX and NULLs problem 06:40 DELETE VS TRUNCATE 06:07 Locks and Duration of Transactions 15:11 Nested Transaction and Rollback 09:16 Understanding Date/Time Datatypes 07:40 Differences between VARCHAR and NVARCHAR datatypes 06:38 Precedence of DENY and GRANT security permissions 05:29 Identify Blocking Process 06:37 NULLS usage with Dynamic SQL 08:03 Appendix Tips and Tricks with Tools 20:44 The duration of each module is mentioned besides the name of the module. SQL in Sixty Seconds You will have to login and to get subscribed to the courses to view them. Here are my free video learning resources SQL in Sixty Seconds. These are 60 second video which I have built on various subjects related to SQL Server. Do let me know what you think about them? Here are three of my latest videos: Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 Copy Column Headers from Resultset – SQL in Sixty Seconds #027 Effect of Collation on Resultset – SQL in Sixty Seconds #026 You can watch and learn at your own pace.  Then you can easily ask me any questions you have.  E-mail is easiest, but for really tough questions I’m willing to talk on Skype, Gtalk, or even Facebook chat.  Please do watch and then talk with me, I am always available on the internet! Here is the video of the world’s fastest man.Usain St. Leo Bolt inspires us that we all do better than best. We can go the next level of our own record. We all can improve if we have a will and dedication.  Watch the video from 5:00 mark. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology, Video

    Read the article

  • Can connect to Samba server but cannot access shares?

    - by jlego
    I have setup a stand-alone box running Fedora 16 to use as a file-sharing and web development server. Needs to be able to share files with a PC running Windows 7 and a Mac running OSX Snow Leopard. I've setup Samba using the Samba configuration GUI tool. Added users to Fedora and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and Samba client through the firewall and set the ethernet to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares, however when trying to access the shares, Windows throws error 0x80070035 " Windows cannot access \SERVERNAME\ShareName." Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares but when choosing a share gives the error "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares, however when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured. smb.conf: [global] workgroup = workgroup server string = Server log file = /var/log/samba/log.%m max log size = 50 security = user load printers = yes cups options = raw printcap name = lpstat printing = cups [homes] comment = Home Directories browseable = no writable = yes [printers] comment = All Printers path = /var/spool/samba browseable = yes printable = yes [FileServ] comment = FileShare path = /media/FileServ read only = no browseable = yes valid users = user1, user2 [webdev] comment = Web development path = /var/www/html/webdev read only = no browseable = yes valid users = user1 How do I get samba sharing working? UPDATE: Before this box I had another box with the same version of fedora installed (16) and samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though.) Could either one of these be affecting Samba? UPDATE 2: As the directory I am trying to share is actually an entire internal disk, I have tried to things: 1.) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine) 2.) made a share that only included one of the folders on the disk instead of the entire disk with my user again as the owner. Both tests failed giving me the same errors regarding the network address. UPDATE 3: Not sure exactly what I did, but now whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Any suggestions? UPDATE 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first harddrive (one fedora is installed on) and I can access that fine from Windows. However when I try to share any data on the second disk, I am receiving the same error. This I believe is the problem. I think I need to change some things in fstab or fdisk or something. UPDATE 5: So in fstab I mapped the drive to automount in a folder which works correctly. I also added the samba_share_t SElinux label to the mountpoint directory which now allows me to access the shares on the Windows machine, however I cannot see any of the files in the directory on the windows machine. (They are there, I can see them in the fedora file browser locally) UPDATE 6: Figured it out. See answer below

    Read the article

  • Static background noise while using new headset Ubuntu 13.04

    - by ThundLayr
    Today I bought a new gaming headset (Gx-Gaming Lychas), and when I tried to record some gameplay-comentary I noticed that there always is a static background noise, I just recorded an example so you guys can listen it (no downloaded needed): http://www47.zippyshare.com/v/65167832/file.html I'm using Kubuntu 13.04 and Kernel version is 3.8.0-19, my laptop is an Acer Travelmate 5760Z, I tried tons of configurations on Alsamixer and none of them made result, I really need to get this working so any kind of help will be very aprecciated. cat /proc/asound/cards: 0 [PCH ]: HDA-Intel - HDA Intel PCH HDA Intel PCH at 0xc6400000 irq 44 cat /proc/asound/card0/codec#0 Codec: Conexant CX20588 Address: 0 AFG Function Id: 0x1 (unsol 1) Vendor Id: 0x14f1506c Subsystem Id: 0x10250574 Revision Id: 0x100003 No Modem Function Group found Default PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Default Amp-In caps: N/A Default Amp-Out caps: N/A State of AFG node 0x01: Power states: D0 D1 D2 D3 D3cold CLKSTOP EPSS Power: setting=D0, actual=D0 GPIO: io=4, o=0, i=0, unsolicited=1, wake=0 IO[0]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[1]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[2]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 IO[3]: enable=0, dir=0, wake=0, sticky=0, data=0, unsol=0 Node 0x10 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Headphone Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Headphone Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x4a 0x4a] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x11 [Audio Output] wcaps 0xc1d: Stereo Amp-Out R/L Control: name="Speaker Playback Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Control: name="Speaker Playback Switch", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-Out vals: [0x80 0x80] Converter: stream=8, channel=0 PCM: rates [0x560]: 44100 48000 96000 192000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x12 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x13 [Beep Generator Widget] wcaps 0x70000c: Mono Amp-Out Control: name="Beep Playback Volume", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Control: name="Beep Playback Switch", index=0, device=0 ControlAmp: chs=1, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x07, nsteps=0x07, stepsize=0x0f, mute=0 Amp-Out vals: [0x00] Node 0x14 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Control: name="Capture Volume", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Control: name="Capture Switch", index=0, device=0 ControlAmp: chs=3, dir=In, idx=0, ofs=0 Device: name="CX20588 Analog", type="Audio", device=0 Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x50 0x50] [0x80 0x80] [0x80 0x80] [0x80 0x80] Converter: stream=4, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x15 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x16 [Audio Input] wcaps 0x100d1b: Stereo Amp-In R/L Amp-In caps: ofs=0x4a, nsteps=0x50, stepsize=0x03, mute=1 Amp-In vals: [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] [0x4a 0x4a] Converter: stream=0, channel=0 SDI-Select: 0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x1]: PCM Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x17* 0x18 0x23 0x24 Node 0x17 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Control: name="Mic Boost Volume", index=0, device=0 ControlAmp: chs=3, dir=Out, idx=0, ofs=0 Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x04 0x04] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a 0x1b* 0x1d 0x1e Node 0x18 [Audio Selector] wcaps 0x30050d: Stereo Amp-Out Amp-Out caps: ofs=0x00, nsteps=0x04, stepsize=0x27, mute=0 Amp-Out vals: [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 4 0x1a* 0x1b 0x1d 0x1e Node 0x19 [Pin Complex] wcaps 0x400581: Stereo Control: name="Headphone Jack", index=0, device=0 Pincap 0x0000001c: OUT HP Detect Pin Default 0x04214040: [Jack] HP Out at Ext Right Conn = 1/8, Color = Green DefAssociation = 0x4, Sequence = 0x0 Pin-ctls: 0xc0: OUT HP Unsolicited: tag=01, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1a [Pin Complex] wcaps 0x400481: Stereo Control: name="Internal Mic Phantom Jack", index=0, device=0 Pincap 0x00001324: IN Detect Vref caps: HIZ 50 80 Pin Default 0x90a70130: [Fixed] Mic at Int N/A Conn = Analog, Color = Unknown DefAssociation = 0x3, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x1b [Pin Complex] wcaps 0x400581: Stereo Control: name="Mic Jack", index=0, device=0 Pincap 0x00011334: IN OUT EAPD Detect Vref caps: HIZ 50 80 EAPD 0x0: Pin Default 0x04a19020: [Jack] Mic at Ext Right Conn = 1/8, Color = Pink DefAssociation = 0x2, Sequence = 0x0 Pin-ctls: 0x24: IN VREF_80 Unsolicited: tag=02, enabled=1 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1c [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00000014: OUT Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1d [Pin Complex] wcaps 0x400581: Stereo Pincap 0x00010034: IN OUT EAPD Detect EAPD 0x0: Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10* 0x11 Node 0x1e [Pin Complex] wcaps 0x400481: Stereo Pincap 0x00000024: IN Detect Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x1f [Pin Complex] wcaps 0x400501: Stereo Control: name="Speaker Phantom Jack", index=0, device=0 Pincap 0x00000010: OUT Pin Default 0x92170110: [Fixed] Speaker at Int Front Conn = Analog, Color = Unknown DefAssociation = 0x1, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x40: OUT Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11* Node 0x20 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x12 Node 0x21 [Audio Output] wcaps 0x611: Stereo Digital Converter: stream=0, channel=0 Digital: Digital category: 0x0 IEC Coding Type: 0x0 PCM: rates [0x160]: 44100 48000 96000 bits [0xe]: 16 20 24 formats [0x5]: PCM AC3 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x22 [Pin Complex] wcaps 0x400781: Stereo Digital Pincap 0x00000010: OUT Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Unsolicited: tag=00, enabled=0 Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 1 0x21 Node 0x23 [Pin Complex] wcaps 0x40040b: Stereo Amp-In Amp-In caps: ofs=0x00, nsteps=0x04, stepsize=0x2f, mute=0 Amp-In vals: [0x00 0x00] Pincap 0x00000020: IN Pin Default 0x40f001f0: [N/A] Other at Ext N/A Conn = Unknown, Color = Unknown DefAssociation = 0xf, Sequence = 0x0 Misc = NO_PRESENCE Pin-ctls: 0x00: Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Node 0x24 [Audio Mixer] wcaps 0x20050b: Stereo Amp-In Amp-In caps: ofs=0x4a, nsteps=0x4a, stepsize=0x03, mute=1 Amp-In vals: [0x00 0x00] [0x00 0x00] Power states: D0 D1 D2 D3 D3cold EPSS Power: setting=D0, actual=D0 Connection: 2 0x10 0x11 Node 0x25 [Vendor Defined Widget] wcaps 0xf00000: Mono

    Read the article

  • DNS server not functioning correctly

    - by Shamit Shrestha
    I have setup a DNS server which isnt working properly. My domain is accswift.com which has glued to two name servers ns1.accswift.com and ns2.accswift.com for the same IP address - 203.78.164.18. On domain end everything should be fine. Please check -http://www.intodns.com/accswift.com I am sure its the problem with the linux server. Can anyone help me find where the problem is for me? Below is the settings that I have in the server. ====================== DIG [root@accswift ~]# dig accswift.com ; << DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 << accswift.com ;; global options: +cmd ;; Got answer: ;; -HEADER<<- opcode: QUERY, status: NOERROR, id: 11275 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;accswift.com. IN A ;; ANSWER SECTION: accswift.com. 38400 IN A 203.78.164.18 ;; AUTHORITY SECTION: accswift.com. 38400 IN NS ns1.accswift.com. accswift.com. 38400 IN NS ns2.accswift.com. ;; ADDITIONAL SECTION: ns1.accswift.com. 38400 IN A 203.78.164.18 ns2.accswift.com. 38400 IN A 203.78.164.18 ;; Query time: 1 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Wed Nov 6 20:12:16 2013 ;; MSG SIZE rcvd: 114 ============== IP Tables settings vi /etc/sysconfig/iptables *filter :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT: -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN: -A INPUT -p udp -m udp --sport 53 -j ACCEPT -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT COMMIT Completed on Fri Sep 20 04:20:33 2013 Generated by webmin *mangle :FORWARD ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT Completed Generated by webmin *nat :OUTPUT ACCEPT [0:0] :PREROUTING ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT ====DNS settings vi /var/named/accswift.com.host $ttl 38400 @ IN SOA ns1.accswift.com. root.ns1.accswift.com. ( 1382936091 10800 3600 604800 38400 ) @ IN NS ns1.accswift.com. @ IN NS ns2.accswift.com. accswift.com. IN A 203.78.164.18 accswift.com. IN NS ns1.accswift.com. www.accswift.com. IN A 203.78.164.18 ftp.accswift.com. IN A 203.78.164.18 m.accswift.com. IN A 203.78.164.18 ns1 IN A 203.78.164.18 ns2 IN A 203.78.164.18 localhost.accswift.com. IN A 127.0.0.1 webmail.accswift.com. IN A 203.78.164.18 admin.accswift.com. IN A 203.78.164.18 mail.accswift.com. IN A 203.78.164.18 accswift.com. IN MX 5 mail.accswift.com. ====Named.conf vi /etc/named.conf options { listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; allow-recursion { localhost; 192.168.2.0/24; }; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; forward first; forwarders {192.168.1.1;}; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "accswift.com" { type master; file "/var/named/accswift.com.hosts"; allow-transfer { 127.0.0.1; localnets; 208.73.211.69; }; }; zone "ns1.accswift.com" { type master; file "/var/named/ns1.accswift.com.hosts"; }; ==================================== Can anybody find any flaw in this? I am still unable to reach accswift.com from any other ISP. But it is browsable from the same network though. Thanks in advance.

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May. Which means that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed since then. It was just about finding the right date for the action. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this and reach out to our MSCC craftsmen for recessions. Once again a big Thank you to Orange Ebene Accelerator on providing the venue for us, and the MSCC members involved on securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, I'm going to put my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about further down... Reactions of other attendees If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them: "Inspiring. Helped me understand more about GIT." -- Sean on event comments "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments "Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.It was great. Learned a lot from this meetup!!" -- Leonardo on event comments "Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers. Our agenda of the day Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might additional people coming along. Then, for what ever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey it went smooth for the rest of the time being. And last but not least... our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of students of university - first year, second year and recent graduates - among them. Surprisingly, none of them was ever in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability of touch-typing I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on the campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself. git - a modern approach to VCS - Nayar What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started to explain about the importance of version control systems as an essential tool for software developers, even working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu based system but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure is always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place it was about time to work on some "important" changes on the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development circles you will need less than 10 git commands on a regular base: git add, commit, push, pull, checkout, and merge And Nayar demo'd all of them. Much to the delight of everyone he also showed gitk which is the git repository browser. It's an UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision. Using gitk to display and browse information of a local git repository And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github. Showing the web-based git browser and history handling, and then also explained and demo'd on how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects. Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details about their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free but there might be other important aspects that might have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development. Visual Studio Online (VSO) - Jochen Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository but there is more in that package. Of course, it is related to software development on the Windows systems and the bonds are tightened towards the use of Visual Studio but out of experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smooth as expected. So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates the Application Life Cycle Methodology (ALM) of Microsoft in their platform. Meaning that you get agile project management with Backlogs, Sprints, Burn-down charts as well as the ability to track tasks, bug reports and work items next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly during the begin of our meeting, VSO gives you the possibility of an automated continuous integrated (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually -, and not only simplifies your life as a software developer but also reduces the sources of potential errors. Seamless integration and automated deployment between Microsoft Azure Web Sites and git repository But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially, while working on web projects it's absolutely astounishing that as soon as you commit your chances it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites. Upcoming Events and networking Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate to my opinion and this will have an impact on future planning of our meetups. Because I rather would like to see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rush off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging... In case that you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation and I'm looking forward to get a grip on your book title very soon. My resume of the day Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without safety helmet. You don't do that! End of discussion. ;-) Nowadays, having access to free (as in cost) tools to install on your machine and numerous online platforms to host your source code for free for up to five users it's a no-brainer to get yourself familiar with VCS. Today's sessions gave a good overview on how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • CodePlex Daily Summary for Monday, May 24, 2010

    CodePlex Daily Summary for Monday, May 24, 2010New Projects(SocketCoder) Full Silverlight Web Video/Voice Conference: Is an open source project to develop full Silverlight Web Video/Voice Conference System in C# .NETabc123: Prueba Archetype Programming Language: See http://dvanderboom.wordpress.comBusiness Process Automation (BPA): BPA is a project initiative to develop an ERP which will integrate with work flow of an organization. It is build on the concept that all business ...Content Rendering: Content Rendering is a .NET 3.5 string template engine. The program uses reflection, an extensibility API and a template document, which has a cust...DTA Output Renamer: The DTA Output Renamer takes recommendations from the SQL Server Database Tuning Advisor (DTA) and updates the names of the indexes/statistics to b...Flexible Editing Toolkit: The Flexible Editing Toolkit aims to enable users who have coding experience in .Net to write own editors/tools using an easy-to-use framework and ...Fluent NHibernate, MVC 2: Projeto ultra simples desenvolvido em Asp.Net MVC 2 com Fluent NHibernate. Foram utilizadas camadas baseadas no DDD. Sample project to test DDD arc...GravityGame: Proyecto del capítulo del IGDA en el Tec de Monterrey Campus Sonora Norte. El objetivo es la creación de un primer juego para conocer cuales son l...Partner Relationship Management (PRM) Accelerator for Microsoft Dynamics CRM: R2 of the PRM accelerator for Microsoft Dynamics CRM.Posh-Hg: Mercurial integration for Windows PowershellSuiteMap: selfSvenska till Rövarspråket: Översätter svenska till rövarspråketTV4Home: This projects extends MediaPortal TV Server with a solution for MediaCenter clients, a Web-Interface and a WHS Add-In.User Profile WebPart: Use this webpart to display SharePoint User Profile information. VolumeMasterCmd: Command line application that will set the wave volume level. Usage: VolumeMaster | VolumeMaster [0-100] Display or set the wave volume level ...wawa cloud store service: 盖茨描述了云计算和云存储之间的区别。“人们老是会搞混。云存储是把你的文件存储到其他地方,进行备份,这和云计算是不同的。这两者都很了不起,都是很好的技术。” 他表示,“云存储的效能没有折扣,因此富于理性的存储经理将会考虑使用云存储技术。” 相反,他表示云计算则会有一些问题,延迟和带宽都可能会...XNA Collision Detection: A collision library which extracts the triangles from a given model, and tests for collision using multiple methods on all existing triangles. Thi...You Private Social Network: YourPrivateNet is for all people who are unsatisfied with how social networking giants, namely facebook, are dealing with privacy and the users dat...New ReleasesArchetype Programming Language: C Sharp 4.0 Grammar in M: This is a C# 4.0 grammar that I am using to learn about parsers and the process of generating ASTs, in preparation for doing the same for the Arche...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1.4 and 4.0.1.4 beta3: Binary release includes: .net 3.5sp1 and 4.0 builds of Gui app, Console app, Library assembly and Visual Studio development server replacements f...ClosedXML - The easy way to OpenXML: ClosedXML 0.10: The current build has the following capabilities: Can create new workbooks Add worksheets Access cells using R1C1, A1, and mixed notations. A...CRM Web Service Toolkit: MSCRM4 Web Service Toolkit for JavaScript v2.0: MSCRM Web Service Toolkit for JavaScript v2.0. The release contains: CrmServiceToolkit.js (The uncompressed code) CrmServiceToolkit.min.js (The ...DBFramework: Kenly.DBFramework4.6.5.2: Kenly.DBFramework4.6.5.2eComic: eComic 2010.0.0.1: With the release of .NET 4.0 the system was upgraded. This upgrade involved starting the project over from scratch, so the installation package wil...Exchange 2010 RBAC Editor (RBAC GUI) - updated on 5/24/2010: RBAC Editor 0.9.3.1: Some bugs fixed, support added for unscopedtoplevel roles (still in progress), added logging capabilities Please use email address in About menu o...Hammock for REST: Hammock v1.0.4: v1.0.4 ChangesAdded handling for special characters in OAuth signatures (\r\n\t\b) Corrected an inconsistency in OAuth GET vs. POST when encoding...HKGolden Express: HKGoldenExpress (Build 201005231730): New features: (None) Bug fix: Fixed problem of unable to start new thread. Fixed problem of unable to show user icons due to incorrect path. ...Kurumsal Ofis Paketi: Kurumsal Ofis Paketi Sürüm 1.0: Kurumsal Ofis Paketi Sürüm 1.0MDownloader: MDownloader-0.15.15.59175: Fixed FileFactory implementation (FileFactory team doesn't give up); Fixed minor bugs.Munq: Tools for ASP.NET MVC: Munq IocContainer Version 2.0: The latest and greatest.NLog - Advanced .NET Logging: Nightly Build 2010.05.23.001: Changes since the last build:2010-05-23 00:01:19 Jarek Kowalski Made contition on <when /> required. Added unit tests. 2010-05-22 20:06:16 Jarek K...NUnit for Team Build: Version 2.0 Alpha 1: This version has experimental TFS 2010 support. I have been using it successfully with TFS 2010 for a few weeks now with no problems. It should be ...Partner Relationship Management (PRM) Accelerator for Microsoft Dynamics CRM: PRM Accelerator (R2) for Dynamics CRM 4.0: The Partner Relationship Management (PRM) Accelerator allows businesses to use Microsoft Dynamics CRM to distribute sales leads and centrally manag...Percussion Toolkit: Command line Note Detector 1.0: A command line tool for detecting note onsets in WAV files. Note: Currently only supports 32-bit float encoded Microsoft WAV files with a sample ...Percussion Toolkit: Reference Input Data Set (A): This release contains a set of input WAV files for testing note onset detection accuracy and effectiveness. The archive contains computer-generate...Percussion Toolkit: Rhythm Friend: Rhythm Friend is an interactive tool for practicing drum rudiments. It provides a simple metronome coupled with rhythm coach features.SharePoint List Field Manager: SharePoint List Field Manager: This is Version 2 of the SharePoint List Field Manager. First created for CorasWorks customers, we have decided to make it publically available to ...Silverlight 4.0 Popup Menu: Context Menu for Silverlight 4.0 v1.2 Beta: - Added delay on hover events for both parent and child menus. - Parent menus now close automatically when child menu is clicked. - Updated referen...Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+03: Sw. Is Hw. Lib. 3.0.0.x+03 UNSUPPORTED, UNTESTED ALPHA RELEASE Code may disappear. This is just a preview of code that was in progress. Code is s...The Fastcopy Helper: The Fastcopy Helper 2.0: The Fastcopy Helper 2.0 This is a new method to run for it . Fastcopy同步辅助器 方案二 这个方案二,是使用了全新的方法进行磁盘文件扫描,和比较。速度飙升到最快!同时优化了很多细节的内容,使得性能大幅度提升。User Profile WebPart: UserProfileWebPart 1.0: Inititial release.VCC: Latest build, v2.1.30523.0: Automatic drop of latest buildVolumeMasterCmd: VolumeMaster 1.0: First release.XNA Collision Detection: XNA Static Collider: Provides collision detection between a Model and BoundingSphere, or a Model and Ray. An example of how to initialize a collision object: Collidee...xxfd1r4w96: 20100523: Приложението има следната функционалност: 1. Добавяне на сайта https://online.bulbank.bg в trusted sites. 2. Инсталиране на Bulbank Root Certificat...Yet another developer blog - Examples: Asynchronous Form in ASP.NET MVC: This sample application shows how to use jQuery Validation plugin for creating an asynchronous form in ASP.NET MVC 1 with client side validation. T...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsRawrpatterns & practices – Enterprise LibrarySqlServerExtensionsGMap.NET - Great Maps for Windows Forms & PresentationCaliburn: An Application Framework for WPF and Silverlightpatterns & practices: Windows Azure Security GuidanceCassiniDev - Cassini 3.5/4.0 Developers EditionNB_Store - Free DotNetNuke Ecommerce Catalog ModuleCodeReviewBlogEngine.NET

    Read the article

  • Adding RSS to tags in Orchard

    - by Bertrand Le Roy
    A year ago, I wrote a scary post about RSS in Orchard. RSS was one of the first features we implemented in our CMS, and it has stood the test of time rather well, but the post was explaining things at a level that was probably too abstract whereas my readers were expecting something a little more practical. Well, this post is going to correct this by showing how I built a module that adds RSS feeds for each tag on the site. Hopefully it will show that it's not very complicated in practice, and also that the infrastructure is pretty well thought out. In order to provide RSS, we need to do two things: generate the XML for the feed, and inject the address of that feed into the existing tag listing page, in order to make the feed discoverable. Let's start with the discoverability part. One might be tempted to replace the controller or the view that are responsible for the listing of the items under a tag. Fortunately, there is no need to do any of that, and we can be a lot less obtrusive. Instead, we can implement a filter: public class TagRssFilter : FilterProvider, IResultFilter .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } On this filter, we can implement the OnResultExecuting method and simply check whether the current request is targeting the list of items under a tag. If that is the case, we can just register our new feed: public void OnResultExecuting(ResultExecutingContext filterContext) { var routeValues = filterContext.RouteData.Values; if (routeValues["area"] as string == "Orchard.Tags" && routeValues["controller"] as string == "Home" && routeValues["action"] as string == "Search") { var tag = routeValues["tagName"] as string; if (! string.IsNullOrWhiteSpace(tag)) { var workContext = _wca.GetContext(); _feedManager.Register( workContext.CurrentSite + " – " + tag, "rss", new RouteValueDictionary { { "tag", tag } } ); } } } The registration of the new feed is just specifying the title of the feed, its format (RSS) and the parameters that it will need (the tag). _wca and _feedManager are just instances of IWorkContextAccessor and IFeedManager that Orchard injected for us. That is all that's needed to get the following tag to be added to the head of our page, without touching an existing controller or view: <link rel="alternate" type="application/rss+xml" title="VuLu - Science" href="/rss?tag=Science"/> Nifty. Of course, if we navigate to the URL of that feed, we'll get a 404. This is because no implementation of IFeedQueryProvider knows about the tag parameter yet. Let's build one that does: public class TagFeedQuery : IFeedQueryProvider, IFeedQuery IFeedQueryProvider has one method, Match, that we can implement to take over any feed request that has a tag parameter: public FeedQueryMatch Match(FeedContext context) { var tagName = context.ValueProvider.GetValue("tag"); if (tagName == null) return null; return new FeedQueryMatch { FeedQuery = this, Priority = -5 }; } This is just saying that if there is a tag parameter, we will handle it. All that remains to be done is the actual building of the feed now that we have accepted to handle it. This is done by implementing the Execute method of the IFeedQuery interface: public void Execute(FeedContext context) { var tagValue = context.ValueProvider.GetValue("tag"); if (tagValue == null) return; var tagName = (string)tagValue.ConvertTo(typeof (string)); var tag = _tagService.GetTagByName(tagName); if (tag == null) return; var site = _services.WorkContext.CurrentSite; var link = new XElement("link"); context.Response.Element.SetElementValue("title", site.SiteName + " - " + tagName); context.Response.Element.Add(link); context.Response.Element.SetElementValue("description", site.SiteName + " - " + tagName); context.Response.Contextualize(requestContext => link.Add(GetTagUrl(tagName, requestContext))); var items = _tagService.GetTaggedContentItems(tag.Id, 0, 20); foreach (var item in items) { context.Builder.AddItem(context, item.ContentItem); } } This code is resolving the tag content item from its name and then gets content items tagged with it, using the tag services provided by the Orchard.Tags module. Then we add those items to the feed. And that is it. To summarize, we handled the request unobtrusively in order to inject the feed's link, then handled requests for feeds with a tag parameter and generated the list of items for that tag. It remains fairly simple and still it is able to handle arbitrary content types. That makes me quite happy about our little piece of over-engineered code from last year. The full code for this can be found in the Vandelay.TagCloud module: http://orchardproject.net/gallery/List/Modules/ Orchard.Module.Vandelay.TagCloud/1.2

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. We will have a call to discuss this blog - please join us!Date: Thursday, November 1, 2012Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D2. If requested, enter your name and email address.3. If a password is required, enter the meeting password: oracle1234. Click "Join". To join the teleconference:Call-in toll-free number:       1-866-682-4770  (US/Canada)      Other countries:                https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ONConference Code:       7629343#Security code:            7777# Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data View Solaris services status details Drill down into a process details View the busiest zones if applicable Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones: This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab: By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the Dependencies, Dependents and also the location of the service log file (not shown in the picture as you need to scroll down to see the log file). The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

< Previous Page | 558 559 560 561 562 563 564 565 566 567 568 569  | Next Page >