Search Results

Search found 14311 results on 573 pages for 'stan note'.


  • CodePlex Daily Summary for Friday, May 23, 2014

    CodePlex Daily Summary for Friday, May 23, 2014Popular Releasesbabelua: 1.5.5: V1.5.5 - 2014.5.23New feature: support lua5.1 keywords auto completion; debug message would write to output window now; editor outline combobox’s members now will sorting by the first letter; Stability improvement: fix a bug that when search in a file which not exists in current setting folder , result of switch relative path function would not correct; Some other bug fix;DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0 Tested and adapted to the latest version of Highcharts 4.0.1 Added new chart type: Heatmap Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class 997: Split container from JS 1006: Series/Categories with numeric names don't render DotNet.Highcharts.Samples Updated s...String.Format Diagnostic (Roslyn): Diagnostic Format String (v1,3): In this release the tool is now capable of reporting multiple issues contained within the text of a formatstring. v1.3 Extends these capability to include if the formatstring argument is a String Constant. Validation Rules Supported Are the Argument Index supplied within range, of those supplied? Is the Argument Index less than the limit of 1000000 (This is defined inside of the .net framework's implementation) Is the Alignment with less than the limit of 1000 000 (This is define insid...Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contain the Missing Features in Apache POI SL SDK in Comparison with Aspose.Slides for dealing with Microsoft Power Point. What's New ?Following Examples: Managing Slide Transitions Manage Smart Art Adding Media Player Adding Audio Frame to Slide Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.OSGi.NET: Asp.net MVC 4.0 integration v2.2: + Support AreaRegistrationPowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file. Each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within Added variable expansion to all paths in the configuration file Added documentation for each of the Toolkit internal variables that can be used Changed Install-MSUpdates to continue if any errors are encountered when installing updates Implement /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06 read more at http://wordmat.blogspot.comConEmu - Windows console with tabs: ConEmu 140522 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may found: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create empty "ConEmu.xml" file near to "ConEmu.exe" WebExtras: v1.4.0-Beta-1: Enh: Adding support for jQuery UI framework Enh: Adding support for jqPlot charting library Dropping dependency on MoreLinq library Note: Html.LabelForV2(...) extension method has now been deprecated. 
You should use Html.RequiredFieldLabelFor(...) extension method instead. This extension method will be removed in future versions.????: 《????》: 《????》(c???)??“????”???????,???????????????C?????????。???????,???????????????????????. ??????????????????????????????????;????????????????????????????。Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update! FK issue fixed and also a template data cache issue.Wsus Package Publisher: Release v1.3.1405.17: Add Russian translation (thanks to VSharmanov) Fix a bug that make WPP to crash if the user click on "Connect/Reload" while the Report Tab is loading. Enhance the way WPP store the password for remote computers command.MoreTerra (Terraria World Viewer): More Terra 1.12.9: =========== = Compatibility = =========== Updated to account for new format 1.2.4.1 =========== = Issues = =========== all items have not been added. Some colors for new tiles may be off. I wanted to get this out so people have a usable program.LINQ to Twitter: LINQ to Twitter v3.0.3: Supports .NET 4.5x, Windows Phone 8.x, Windows 8.x, Windows Azure, Xamarin.Android, and Xamarin.iOS. New features include Status/Lookup, Mute APIs, and bug fixes. 100% Twitter API v1.1 coverage, Async, Portable Class Library (PCL).CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.26.0: Added access to the Release Notes during 'Check for Updates...'' Debug panels Added support for generic types members Members are grouped into 'Raw View' and 'Non-Public members' categories Implemented dedicated (array-like) view for Lists and Dictionaries http://download-codeplex.sec.s-msft.com/Download?ProjectName=csscriptnpp&DownloadId=846498ClosedXML - The easy way to OpenXML: ClosedXML 0.70.0: A lot of fixes. See history.TBox - tool to make developer's life easier.: TBox 1.29: Bug fixing. Add LocalizationTool pluginYAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib 2.13: Fixed a bug and added unit tests related to serializing path like aliases with one letter (e.g., './B'). Thanks go to CodeProject user B.O.B. for reporting this bug. Added `Bin/*.dll.mdb` to `.gitignore`. Fixed the issue with Indexer properties. Indexers must not be serialized/deserialized. YAXLib will ignore delegate (callback/function pointer) properties, so that the exception upon serializing them is prevented. Significant improve in cycling object reference detection Self Referr...SFDL.NET: SFDL.NET (2.2.9.2): Changelog: Neues Icon Xup.in CnL Plugin BugfixSEToolbox: SEToolbox 01.030.008 Release 1: Fixed cube editor failing to apply color to cubes. Added to cube editor, replace cube dialog, and Build Percent dialog. Corrected for hidden asteroid ore, allowing rare ore to show when importing an asteroid, or converting a 3d model to an asteroid (still appears to be limitations on rare ore in small asteroids). Allowed ore selection to Asteroid file import. (Can copy/import and convert existing asteroid to another ore). Added progress bars to common long running operations. Fixed ...New ProjectsBlueCurve Search: BlueCurve is a small experimental search engine, implemented on top of Lucene. The goal is to test ways to create a powerfull search engine system framework.C64 Studio: C64 Studio is a .NET based IDE. 
The program supports project based C64 assembly and/or Basic V2 and is geared towards game development.ExpressToAbroad: ExpressToAbroadFarragoJS: A set of simple JavaScript functions: FarragoJS is a set of simple JavaScript functions offering features ranging from useful to totally pointless! But hey, there's a use for everything, right?fischetest: nothingIsDrone: Simple windows client for Parrot ar Drone. Jewelry: this is project about auto generate codeMRBrowserLibrary: webbrowser control libraryQuan Ly Phong Tro DevExpress and LinQ: Quan Ly Phong Tro DevExpress + LinQUseless Games: JavaScript games written for practicing this wonderful language.

    Read the article

  • Developing a Cost Model for Cloud Applications

    - by BuckWoody
    Note - please pay attention to the date of this post. As much as I attempt to make the information below accurate, the nature of distributed computing means that components, units and pricing will change over time. The definitive costs for Microsoft Windows Azure and SQL Azure are located here, and are more accurate than anything you will see in this post: http://www.microsoft.com/windowsazure/offers/  When writing software that is run on a Platform-as-a-Service (PaaS) offering like Windows Azure / SQL Azure, one of the questions you must answer is how much the system will cost. I will not discuss the comparisons between on-premise costs (which are nigh impossible to calculate accurately) versus cloud costs, but instead focus on creating a general model for estimating costs for a given application. You should be aware that there are (at this writing) two billing mechanisms for Windows and SQL Azure: “Pay-as-you-go” or consumption, and “Subscription” or commitment. Conceptually, you can consider the former a pay-as-you-go cell phone plan, where you pay by the unit used (at a slightly higher rate) and the latter as a standard cell phone plan where you commit to a contract and thus pay lower rates. In this post I’ll stick with the pay-as-you-go mechanism for simplicity, which should be the maximum cost you would pay. From there you may be able to get a lower cost if you use the other mechanism. In any case, the model you create should hold. Developing a good cost model is essential. As a developer or architect, you’ll most certainly be asked how much something will cost, and you need to have a reliable way to estimate that. Businesses and Organizations have been used to paying for servers, software licenses, and other infrastructure as an up-front cost, and power, people to the systems and so on as an ongoing (and sometimes not factored) cost. When presented with a new paradigm like distributed computing, they may not understand the true cost/value proposition, and that’s where the architect and developer can guide the conversation to make a choice based on features of the application versus the true costs. The two big buckets of use-types for these applications are customer-based and steady-state. In the customer-based use type, each successful use of the program results in a sale or income for your organization. Perhaps you’ve written an application that provides the spot-price of foo, and your customer pays for the use of that application. In that case, once you’ve estimated your cost for a successful traversal of the application, you can build that into the price you charge the user. It’s a standard restaurant model, where the price of the meal is determined by the cost of making it, plus any profit you can make. In the second use-type, the application will be used by a more-or-less constant number of processes or users and no direct revenue is attached to the system. A typical example is a customer-tracking system used by the employees within your company. In this case, the cost model is often created “in reverse” - meaning that you pilot the application, monitor the use (and costs) and that cost is held steady. This is where the comparison with an on-premise system becomes necessary, even though it is more difficult to estimate those on-premise true costs. For instance, do you know exactly how much cost the air conditioning is because you have a team of system administrators? 
This may sound trivial, but that, along with the insurance for the building, the wiring, and every other part of the system, is in fact a cost to the business. There are three primary methods that I've been successful with in estimating the cost. None are perfect, all are demand-driven. The general process is to lay out a matrix of components, units, and cost per unit, and then multiply that by the usage of the system, based on which components you use in the program. That sounds a bit simplistic, but the calculation becomes more detailed as you work through those metrics. In all of the methods that follow, you need to know your application. The components for a PaaS include computing instances, storage, transactions, bandwidth and, in the case of SQL Azure, database size. In most cases, architects start with the first model and progress through the other methods to gain accuracy.

Simple Estimation
The simplest way to calculate costs is to architect the application (even UML or on-paper, no coding involved) and then estimate which of the components you'll use, and how much of each will be used. Microsoft provides two tools to do this - one is a simple slider-application located here: http://www.microsoft.com/windowsazure/pricing-calculator/  The other is a tool you download to create a "Return on Investment" (ROI) spreadsheet, which has the advantage of leading you through various questions to estimate what you plan to use, located here: https://roianalyst.alinean.com/msft/AutoLogin.do?d=176318219048082115  You can also just create a spreadsheet yourself with a structure like this: Program Element | Azure Component | Unit of Measure | Cost Per Unit | Estimated Use of Component | Total Cost Per Component | Cumulative Cost. Of course, the consideration with this model is that it is difficult to predict a system that is not running or hasn't even been developed. Which brings us to the next model type.

Measure and Project
A more accurate model is to actually write the code for the application, using the Software Development Kit (SDK) which can run entirely disconnected from Azure. The code should be instrumented to estimate the use of the application components, logging to a local file on the development system. A series of unit and integration tests should be run, which will create load on the test system. You can use standard development concepts to track this usage, and even use Windows Performance Monitor counters. The best place to start with this method is to use the Windows Azure Diagnostics subsystem in your code, which you can read more about here: http://blogs.msdn.com/b/sumitm/archive/2009/11/18/introducing-windows-azure-diagnostics.aspx This set of APIs greatly simplifies tracking the application, and in fact you can use this information for more than just a cost model. After you have the tracking logs, you can plug the numbers into any of the tools above, which should give a representative cost or in some cases a unit cost. The consideration with this model is that the SDK fabric is not a one-to-one comparison with performance on the actual Windows Azure fabric. Those differences are usually small, but they do need to be considered. Also, you may not be able to accurately predict the load on the system, which might lead to an architectural change, which changes the model. This leads us to the next, most accurate method for a cost model.
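Before moving on, here is a minimal sketch of the "instrument and log locally" idea from the Measure and Project step above. This is not the Windows Azure Diagnostics API - just a plain in-process counter with an arbitrary log path and hypothetical component names - but it illustrates the kind of tallies a test run would feed back into the cost matrix:

    using System;
    using System.Collections.Concurrent;
    using System.IO;

    // Minimal local usage meter (illustration only, not the Azure Diagnostics subsystem).
    public static class UsageMeter
    {
        private static readonly ConcurrentDictionary<string, long> Counts =
            new ConcurrentDictionary<string, long>();

        // Call this wherever the application consumes a billable component.
        public static void Record(string component, long units = 1)
        {
            Counts.AddOrUpdate(component, units, (key, current) => current + units);
        }

        // Dump the tallies after a unit/integration test run, ready to plug into the cost matrix.
        public static void Flush(string path)
        {
            using (var writer = new StreamWriter(path, append: true))
            {
                writer.WriteLine("--- run at {0:u} ---", DateTime.UtcNow);
                foreach (var pair in Counts)
                {
                    writer.WriteLine("{0}\t{1}", pair.Key, pair.Value);
                }
            }
        }
    }

    // Example calls sprinkled through the code under test (names are hypothetical):
    //   UsageMeter.Record("StorageTransactions");
    //   UsageMeter.Record("BandwidthBytesOut", payload.Length);
    //   UsageMeter.Flush(@"C:\temp\usage.log");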
Sample and Estimate
Once the application is deployed, you will get a bill each month from Microsoft for your Azure usage. The bill is quite detailed, and you can export the data from it for analysis; using standard statistical and other predictive math, such as regression, you can project what the costs will be in the future. I normally advise that the architect also extrapolate a unit cost from those metrics as well. This is the information that should be reported back to the executives that pay the bills: the past cost, future projected costs, and unit cost "per click" or "per transaction", as your case warrants. The challenge here is in the model itself - statistical methods are not foolproof, and the size of the sample is key (in this case I recommend using the entire population, not a smaller sample).

References and Tools
Articles:
http://blogs.msdn.com/b/patrick_butler_monterde/archive/2010/02/10/windows-azure-billing-overview.aspx
http://technet.microsoft.com/en-us/magazine/gg213848.aspx
http://blog.codingoutloud.com/2011/06/05/azure-faq-how-much-will-it-cost-me-to-run-my-application-on-windows-azure/
http://blogs.msdn.com/b/johnalioto/archive/2010/08/25/10054193.aspx
http://geekswithblogs.net/iupdateable/archive/2010/02/08/qampa-how-can-i-calculate-the-tco-and-roi-when.aspx
Other Tools:
http://cloud-assessment.com/
http://communities.quest.com/community/cloud_tools
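As a worked illustration of the spreadsheet structure described under Simple Estimation, here is a small C# sketch that multiplies estimated monthly usage by cost per unit for each component and keeps a cumulative total, plus a per-transaction unit cost for the customer-based use type. Every rate and usage figure below is a made-up placeholder - always substitute the currently published Azure pricing:

    using System;
    using System.Collections.Generic;

    // One row of the estimation matrix: component, unit of measure, cost per unit, estimated monthly usage.
    class CostLine
    {
        public string Component { get; set; }
        public string Unit { get; set; }
        public decimal CostPerUnit { get; set; }    // placeholder rate, not real pricing
        public decimal EstimatedUnits { get; set; } // estimated monthly usage
        public decimal Total => CostPerUnit * EstimatedUnits;
    }

    class CostModel
    {
        static void Main()
        {
            // Hypothetical figures only.
            var lines = new List<CostLine>
            {
                new CostLine { Component = "Compute",      Unit = "instance-hour", CostPerUnit = 0.12m, EstimatedUnits = 2 * 720 },
                new CostLine { Component = "Storage",      Unit = "GB-month",      CostPerUnit = 0.15m, EstimatedUnits = 50 },
                new CostLine { Component = "Transactions", Unit = "10K tx",        CostPerUnit = 0.01m, EstimatedUnits = 500 },
                new CostLine { Component = "Bandwidth",    Unit = "GB out",        CostPerUnit = 0.15m, EstimatedUnits = 100 },
                new CostLine { Component = "SQL Azure",    Unit = "DB-month",      CostPerUnit = 9.99m, EstimatedUnits = 1 }
            };

            decimal cumulative = 0m;
            foreach (var line in lines)
            {
                cumulative += line.Total;
                Console.WriteLine("{0,-14} {1,8:N0} {2,-14} x {3,8:C} = {4,10:C}  (running total {5:C})",
                    line.Component, line.EstimatedUnits, line.Unit, line.CostPerUnit, line.Total, cumulative);
            }

            // Unit cost for the customer-based use type: total cost divided by estimated successful traversals.
            decimal monthlyTransactions = 5000000m;
            Console.WriteLine("Estimated cost per transaction: {0:C6}", cumulative / monthlyTransactions);
        }
    }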

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #037

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.
2007
Convert Text to Numbers (Integer) – CAST and CONVERT If a table column is VARCHAR and has all numeric values in it, it can be retrieved as an Integer using the CAST or CONVERT function. List All Stored Procedure Modified in Last N Days If SQL Server suddenly starts behaving unexpectedly and stored procedures were changed recently, the following script can be used to check recently modified stored procedures. If a stored procedure was created but never modified afterwards, the modified date and created date for that stored procedure are the same. Count Duplicate Records – Rows Validate Field For DATE datatype using function ISDATE() We always checked the DATETIME field for incorrect data types. One user input the date as 30/2/2007. The date was successfully inserted in the temp table, but while inserting from the temp table into the final table it crashed with an error. We now had the task of validating the incorrect date value before inserting into the final table. A Jr. Developer asked me how he could do that. We check for incorrect data types (varchar, int, NULL), but this is an incorrect date value. A regular expression works fine with them because of the mm/dd/yyyy format.
2008
Find Space Used For Any Particular Table It is very simple to find out the space used by any table in the database. Two Convenient Features Inline Assignment – Inline Operations Here is the script which does both – Inline Assignment and Inline Operation:
    DECLARE @idx INT = 0
    SET @idx+=1
    SELECT @idx
Introduction to SPARSE Columns SPARSE columns are better at managing NULL and ZERO values in SQL Server. They do not take any space in the database at all. If a column is created with the SPARSE clause and it contains ZERO or NULL, it will take less space than a regular column (without the SPARSE clause). SP_CONFIGURE – Displays or Changes Global Configuration Settings If advanced settings are not enabled at the configuration level, SQL Server will not let the user change the advanced features on the server. An authorized user can turn advanced settings on or off.
2009
Standby Servers and Types of Standby Servers A Standby Server is a server that can be brought online when the Primary Server goes offline and the application needs continuous (high) availability. There is always a need to set up a mechanism where data and objects from the primary server are moved to the secondary (standby) server. BLOB – Pointer to Image, Image in Database, FILESTREAM Storage When it comes to storing images in a database there are two common methods. I had previously blogged about the same subject on my visit to Toronto. With SQL Server 2008, we have a new method of FILESTREAM storage. However, the answer on when to use FILESTREAM and when to use other methods is still vague in the community.
2010
Upper Case Shortcut SQL Server Management Studio I select the word and hit CTRL+SHIFT+U, and SSMS immediately changes the case of the selected word. Similarly, if one wants to convert to lower case, the shortcut CTRL+SHIFT+L is also available. The Self Join – Inner Join and Outer Join Self Join has always been a noteworthy case. It is interesting to ask questions about self join in a room full of developers.
I often ask – if there are three kinds of joins, i.e.- Inner Join, Outer Join and Cross Join; what type of join is Self Join? The usual answer is that it is an Inner Join. However, the reality is very different. Parallelism – Row per Processor – Row per Thread – Thread 0  If you look carefully in the Properties window or XML Plan, there is “Thread 0?. What does this “Thread 0” indicate? Well find out from the blog post. How do I Learn and How do I Teach The blog post has raised three very interesting questions. How do you learn? How do you teach? What are you learning or teaching? Let me try to answer the same. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 7 of 31 What are Different Types of Locks? What are Pessimistic Lock and Optimistic Lock? When is the use of UPDATE_STATISTICS command? What is the Difference between a HAVING clause and a WHERE clause? What is Connection Pooling and why it is Used? What are the Properties and Different Types of Sub-Queries? What are the Authentication Modes in SQL Server? How can it be Changed? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 8 of 31 Which Command using Query Analyzer will give you the Version of SQL Server and Operating System? What is an SQL Server Agent? Can a Stored Procedure call itself or a Recursive Stored Procedure? How many levels of SP nesting is possible? What is Log Shipping? Name 3 ways to get an Accurate Count of the Number of Records in a Table? What does it mean to have QUOTED_IDENTIFIER ON? What are the Implications of having it OFF? What is the Difference between a Local and a Global Temporary Table? What is the STUFF Function and How Does it Differ from the REPLACE Function? What is PRIMARY KEY? What is UNIQUE KEY Constraint? What is FOREIGN KEY? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 9 of 31 What is CHECK Constraint? What is NOT NULL Constraint? What is the difference between UNION and UNION ALL? What is B-Tree? How to get @@ERROR and @@ROWCOUNT at the Same Time? What is a Scheduled Job or What is a Scheduled Task? What are the Advantages of Using Stored Procedures? What is a Table Called, if it has neither Cluster nor Non-cluster Index? What is it Used for? Can SQL Servers Linked to other Servers like Oracle? What is BCP? When is it Used? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 10 of 31 What Command do we Use to Rename a db, a Table and a Column? What are sp_configure Commands and SET Commands? How to Implement One-to-One, One-to-Many and Many-to-Many Relationships while Designing Tables? What is Difference between Commit and Rollback when Used in Transactions? What is an Execution Plan? When would you Use it? How would you View the Execution Plan? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 11 of 31 What is Difference between Table Aliases and Column Aliases? Do they Affect Performance? What is the difference between CHAR and VARCHAR Datatypes? What is the Difference between VARCHAR and VARCHAR(MAX) Datatypes? What is the Difference between VARCHAR and NVARCHAR datatypes? Which are the Important Points to Note when Multilanguage Data is Stored in a Table? How to Optimize Stored Procedure Optimization? What is SQL Injection? How to Protect Against SQL Injection Attack? How to Find Out the List Schema Name and Table Name for the Database? What is CHECKPOINT Process in the SQL Server? 
SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 12 of 31 How does Using a Separate Hard Drive for Several Database Objects Improve Performance Right Away? How to Find the List of Fixed Hard Drives and Free Space on the Server? Why can there be only one Clustered Index and not more than one? What is the Difference between Line Feed (\n) and Carriage Return (\r)? Is It Possible to have a Clustered Index on a Separate Drive From the Original Table Location? What is a Hint? How to Delete Duplicate Rows? Why does the Trigger Fire Multiple Times in a Single Login?
2012
CTRL+SHIFT+] Shortcut to Select Code Between Two Parenthesis The shortcut key is CTRL+SHIFT+]. This key can be very useful when dealing with multiple subqueries, CTEs or queries with multiple parentheses. When exercised, this shortcut selects the T-SQL code between two parentheses. Monday Morning Puzzle – Query Returns Results Sometimes but Not Always I am a beginner with SQL Server. I have one query; sometimes it returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send you so you can help me solve it? I have no clue, please guide me. Remove Debug Button in SSMS – SQL in Sixty Seconds #020 – Video Effect of Case Sensitive Collation on Resultset Collation is a very interesting concept, but I quite often see it heavily neglected. I have seen developers and DBAs looking for a workaround to fix a collation error rather than understanding the side effects of the workaround. Switch Between Two Parenthesis using Shortcut CTRL+] Earlier this week I wrote a blog post about the CTRL+SHIFT+] Shortcut to Select Code Between Two Parenthesis, and I received quite a lot of positive feedback from readers. If you are a regular reader of the blog, you must be aware that I appreciate the learning shared by readers. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
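The date-validation story from the 2007 entries above (a value like 30/2/2007 slipping into a temp table) can also be caught in application code before the data ever reaches SQL Server. The following is a small illustrative C# sketch - it is not taken from any of the articles above - using DateTime.TryParseExact to reject impossible dates and wrong formats:

    using System;
    using System.Globalization;

    class DateValidationDemo
    {
        // Returns true only if the value is a real calendar date in the expected format,
        // so bad input such as "30/2/2007" is rejected before it reaches the final table.
        static bool IsValidDate(string value, string format)
        {
            DateTime parsed;
            return DateTime.TryParseExact(
                value,
                format,
                CultureInfo.InvariantCulture,
                DateTimeStyles.None,
                out parsed);
        }

        static void Main()
        {
            Console.WriteLine(IsValidDate("28/2/2007", "d/M/yyyy"));  // True
            Console.WriteLine(IsValidDate("30/2/2007", "d/M/yyyy"));  // False - February has no 30th
            Console.WriteLine(IsValidDate("02/28/2007", "d/M/yyyy")); // False - wrong format
        }
    }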

    Read the article

  • CodePlex Daily Summary for Tuesday, November 30, 2010

    CodePlex Daily Summary for Tuesday, November 30, 2010Popular ReleasesSense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, lightning-fast indexing and query via Lucene. Limita...Minecraft GPS: Minecraft GPS 1.1.1: New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes Fixed style so listbox won't grow the window size. Fixed open file dialog issue on non-vista kernel machines.DotSpatial: DotSpatial 11-28-2001: This release introduces some exciting improvements. Support for big raster, both in display and changing the scheme. Faster raster scheme creation for all rasters. Caching of the "sample" values so once obtained the raster symbolizer dialog loads faster. Reprojection supported for raster and image classes. Affine transform fully supported for images and rasters, so skewed images are now possible. Projection uses better checks when loading unprojected layers. GDAL raster support f...Virtu: Virtu 0.9.0: Source Requirements.NET Framework 4 Visual Studio 2010 or Visual Studio 2010 Express Silverlight 4 Tools for Visual Studio 2010 Windows Phone 7 Developer Tools (which includes XNA Game Studio 4) Binaries RequirementsSilverlight 4 .NET Framework 4 XNA Framework 4SuperWebSocket: SuperWebSocket(60438): It is the first release of SuperWebSocket. Because it is base on SuperSocket, most features of SuperSocket are supported in SuperWebSocket. The source code include a LiveChat demo.MDownloader: MDownloader-0.15.25.7002: Fixed updater Fixed FileServe Fixed LetItBitNotepad.NET: Notepad.NET 0.7 Preview 1: Whats New?* Optimized Code Generation: Which means it will run significantly faster. * Preview of Syntax Highlighting: Only VB.NET highlighting is supported, C# and Ruby will come in Preview 2. * Improved Editing Updates (when the line number, etc updates) to be more graceful. * Recent Documents works! * Images can be inserted but they're extremely large. Known Bugs* The Update Process hangs: This is a bug apparently spawning since 0.5. It will be fixed in Preview 2. Until then, perform a fr...Cropper: 1.9.4: Mostly fixes for issues with a few feature requests. Fixed Issues 2730 & 3638 & 14467 11044 11447 11448 11449 14665 Implemented Features 6123 11581PFC: PFC for PB 11.5: This is just a migration from the 11.0 code. No changes have been made yet (and they are needed) for it to work properly with 11.5.PDF Rider: PDF Rider 0.5: This release does not add any new feature for pdf manipulation, but enables automatic updates checking, so it is reccomended to install it in order to stay updated with next releases. Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructionsChoose one of the following methods: 1. 
Download and run the "pdfRider0...BCLExtensions: BCL Extensions v1.0: The files associated with v1.0 of the BCL Extensions library.XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.Sexy Select: sexy select v0.4: Changes in v0.4 Added method : elements. This returns all the option elements that are currently added to the select list Added method : selectOption. This method accepts two values, the element to be modified and the selected state. (true/false)Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please Note: There is a new option in the setup program to install for "Just Me" or "Everyone." Most people...New ProjectsActiveRecordTest: ActiveRecordTest is a sample project that is really a quick guide for start using Castle ActiveRecord within an ASP.NET web application.BacteriaManage: just test codeplexDS CMS: Diamond Shop - open source project. 1. ASP.NET MVC 3.0 2. Entity Framework 3. Jquery 4. 
LinqGeneral Media Access WebService: This project is focused on building a general purpose media access webservice based on WCF.JavaEE server for XUNU: C'est le serveur internet du site à ChoupieLearning management system: Learning management system to help teachers on their work.LogWriterReader using Named pipe: LogWriterReader using Named pipeNMix: NMix???EntLib,NHibernate,log4net??????????,????????????????,?????????、?????、????、????、?????????。Nosso Rico Dinheirinho: Financial control system like Microsoft Money, but via web.Post Template: Post Template (for now) is for craigslist posters looking to make their posts more visually appealing. Abstracting the styling and layout details of HTML and CSS, Post Template eliminates the need to know these languages when posting. Post Template is mostly written in C#.SharePoint Silverlight Clock: SharePoint Silverlight ClockSilverlight MVVM wizard using Caliburn Micro: This MVVM style Silverlight 4 wizard shows some Caliburn Micro features, as well as the use of MEF and MVVM style unit testing. The UI and code are based on the code accompanying the "Code Project" article "Creating an Internationalized Wizard in WPF" from dec. 2008.Spider Framework: A ruler-based spider framework developing with C#syx Open Source Project: syx Open Source ProjectTigerCat: TigerCat will support application development as infrastructure and RAD tools.TitleNetSolution: This my team Solution.!Uploadert: UploadertWidget Suite for DotNetNuke: This project is intended to hold a suite of useful widgets to make your skinning easier, and raise the level of interactivity with DotNetNuke website visitors.ZenBridge for Picasa: ZenBridge for Picasa makes it easy for Zenfolio users to upload edited images directly to a chosen Zenfolio gallery. It's developed in C#.NET 4.

    Read the article

  • vsftpd not allowing uploads. 550 response.

    - by Josh
    I've set vsftpd up on a centos box. I keep trying to upload files but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES anon_other_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=NO # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: #ftpd_banner=Welcome to blah FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. 
Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd whith two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES log_ftp_protocol=YES banner_file=/etc/vsftpd/issue local_root=/var/www guest_enable=YES guest_username=ftpusr ftp_username=nobody
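A 550 on CWD or SIZE usually means the FTP user cannot enter or read the target directory rather than a syntax problem in vsftpd.conf. Since the config above uses local_root=/var/www with guest_username=ftpusr on CentOS, the usual suspects are filesystem permissions and SELinux. The commands below are a hedged troubleshooting sketch - the upload directory is hypothetical and SELinux boolean names vary between releases - not a guaranteed fix:

    # Make sure the guest user can traverse the tree and write only where intended.
    mkdir -p /var/www/uploads
    chown ftpusr:ftpusr /var/www/uploads
    chmod 755 /var/www           # enterable/readable, not writable
    chmod 775 /var/www/uploads   # the actual upload target

    # On CentOS, SELinux often causes 550 errors even when permissions look correct.
    getsebool -a | grep ftp
    setsebool -P allow_ftpd_anon_write=1                 # boolean name differs on newer releases
    chcon -R -t public_content_rw_t /var/www/uploads

    # Reproduce the failure while watching the protocol log (log_ftp_protocol=YES is already set).
    tail -f /var/log/vsftpd.log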

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Often times as developers we have to design a class where we get notification when certain things happen. In older object-oriented code this would often be implemented by overriding methods -- with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods and what are their strengths and weaknesses? Now, for the purposes of this article when I say notification, I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc. So let's build some context. I'm sitting here thinking about a provider neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class:

        // using inheritance - omitting argument null checks and halt logic
        public abstract class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will override this to process their messages
            protected abstract void OnMessageReceived(Message msg);

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        OnMessageReceived(msg);
                    }
                }
            }
            ...
        }

It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above. To me, inheritance is a flawed approach for notifications due to several reasons:
- Inheritance is one of the HIGHEST forms of coupling.
- You can't seal the listener class because it depends on sub-classing to work.
- Because C# does not allow multiple-inheritance, I've spent my one inheritance implementing this class.
- Every time you need to listen to a bus, you have to derive a class, which leads to lots of trivial sub-classes.
- The act of consuming a message should be a separate responsibility from the act of listening for a message (SRP).
- Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies and not for overriding use-specific behaviors and notifications.
Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So let's look at the other tools available to us for getting notified instead. Here are a few other choices to consider:
- Have the listener expose a MessageReceived event.
- Have the listener accept a new IMessageHandler interface instance.
- Have the listener accept an Action<Message> delegate.
Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events:

        // using event - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will subscribe to this to process their messages
            public event Action<Message> MessageReceived;

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null && MessageReceived != null)
                    {
                        MessageReceived(msg);
                    }
                }
            }
        }

Note, now we can seal the class to avoid changes and the user just needs to provide a message handling method:

        theListener.MessageReceived += CustomReceiveMethod;

However, personally I don't think events hold up as well in this case because events are largely optional. To me, what is the point of a listener if you create one with no event listeners? So in my mind, use events when handling the notification is optional. So how about delegation via an interface? I personally like this method quite a bit. Basically what it does is similar to the inheritance method mentioned first, but better, because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assuming we had an interface like:

        public interface IMessageHandler
        {
            void OnMessageReceived(Message receivedMessage);
        }

Our listener would look like this:

        // using delegation via interface - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private IMessageHandler _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler.OnMessageReceived(msg);
                    }
                }
            }
        }

And they would call it by creating a class that implements IMessageHandler and passing that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler (as opposed to events) on construction. Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides?
Furthermore, if you have lots of these you end up either with large classes inheriting multiple interfaces to implement one method, or lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it wouldn't be able to, because the interface can't be overloaded. This brings me to using delegates directly. In general, every time I think about creating an interface for something, and that interface contains only one method, I start thinking a delegate is a better approach. Now, that said, delegates don't accomplish everything an interface can. Obviously having the interface allows you to refer to the classes that implement the interface, which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate:

        // using delegation via delegate - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

Here the MessageListener now takes an Action<Message>. For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is that they give you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or name the method a specific thing. Incidentally, we could combine both the interface and delegate approaches to allow maximum flexibility. Doing this, the user could either pass in a delegate, or specify a delegate interface:

        // using delegation - give users choice of interface or delegate
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // passes the interface method as a delegate using a method group
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
                : this(subscriber, handler.OnMessageReceived)
            {
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

This is the method I tend to prefer because it allows the user of the class to choose which method works best for them.
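For completeness, here is a short usage sketch of that combined listener. The types ISubscriber, Message, MessageListener and IMessageHandler come from the listings above; the AuditingHandler class and the idea of a subscriber instance already being in hand are assumptions made purely for illustration:

        using System;

        // A named, reusable handler for the interface-based overload (illustration only).
        public sealed class AuditingHandler : IMessageHandler
        {
            public void OnMessageReceived(Message receivedMessage)
            {
                Console.WriteLine("Audited: " + receivedMessage);
            }
        }

        public static class ListenerUsage
        {
            public static void Run(ISubscriber subscriber)
            {
                // 1. Pass a lambda - no helper class and no fixed method name to collide.
                var lambdaListener = new MessageListener(subscriber,
                    msg => Console.WriteLine("Received: " + msg));

                // 2. Or pass an interface implementation when a named handler makes more sense.
                var interfaceListener = new MessageListener(subscriber, new AuditingHandler());
            }
        }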
You may be curious about the actual performance of these different methods:

        Enter iterations:
        1000000

        Inheritance took 4 ms.
        Events took 7 ms.
        Interface delegation took 4 ms.
        Lambda delegate took 5 ms.

Before you get too caught up in the numbers, however, keep in mind that this is performance over 1,000,000 iterations. Since they are all < 10 ms, each iteration boils down to fractions of a microsecond, so really any of them is a fine choice performance-wise. As such, I think the choice really boils down to what you're trying to do. Here are my guidelines:
- Inheritance should be used only when defining a collection of related types with implementation-specific behaviors; it should not be used as a hook for users to add their own functionality.
- Events should be used when subscription is optional or multi-cast is desired.
- Interface delegation should be used when you wish to refer to implementing classes by the interface type or if the type requires several methods to be implemented.
- Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.
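The article does not show the timing harness behind those numbers, so the following is only a rough sketch of how such a comparison might be run: a Stopwatch around one million dispatches through each mechanism, with made-up handler types. Absolute results will vary by machine and JIT behavior:

        using System;
        using System.Diagnostics;

        // Rough dispatch micro-benchmark (illustration only, not the author's harness).
        public static class DispatchBenchmark
        {
            private static int _counter;

            private interface IHandler { void Handle(int value); }
            private sealed class InterfaceHandler : IHandler { public void Handle(int value) { _counter += value; } }

            private abstract class BaseHandler { public abstract void Handle(int value); }
            private sealed class DerivedHandler : BaseHandler { public override void Handle(int value) { _counter += value; } }

            private static event Action<int> SomethingHappened;

            public static void Main()
            {
                const int iterations = 1000000;

                BaseHandler inherited = new DerivedHandler();
                IHandler viaInterface = new InterfaceHandler();
                Action<int> viaDelegate = value => _counter += value;
                SomethingHappened += value => _counter += value;

                Time("Inheritance", () => { for (int i = 0; i < iterations; i++) inherited.Handle(1); });
                Time("Events", () => { for (int i = 0; i < iterations; i++) SomethingHappened(1); });
                Time("Interface delegation", () => { for (int i = 0; i < iterations; i++) viaInterface.Handle(1); });
                Time("Lambda delegate", () => { for (int i = 0; i < iterations; i++) viaDelegate(1); });

                Console.WriteLine("(checksum {0})", _counter); // keep the loops from being optimized away
            }

            private static void Time(string label, Action body)
            {
                var sw = Stopwatch.StartNew();
                body();
                sw.Stop();
                Console.WriteLine("{0} took {1} ms.", label, sw.ElapsedMilliseconds);
            }
        }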

    Read the article

  • CodePlex Daily Summary for Sunday, June 30, 2013

    CodePlex Daily Summary for Sunday, June 30, 2013Popular ReleasesGardens Point LEX: Gardens Point LEX version 1.2.1: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.2.1 has new facilities for defining and manipulating character classes. These changes make the construction of large Unicode character classes more convenient. The runtime code for performing automaton backup has been re-implemented, and is now faster for scanners that need backup. Source CodeThe distribution contains a complete VS2010 project for the appli...ZXMAK2: Version 2.7.5.7: - fix TZX emulation (Bruce Lee, Zynaps) - fix ATM 16 colors for border - add memory module PROFI 512K; add PROFI V03 rom image; fix PROFI 3.XX configVidCoder: 1.5.1 Beta: Added French translation. Updated HandBrake core to SVN 5614. Fixed crash in locales with , as the decimal marker. Fixed a few warnings for compression level and quality on audio encodings.Twitter image Downloader: Twitter Image Downloader 2 with Installer: Application file with Install shield and Dot Net 4.0 redistributableUltimate Music Tagger: Ultimate Music Tagger 1.0.0.0: First release of Ultimate Music TaggerBlackJumboDog: Ver5.9.2: 2013.06.28 Ver5.9.2 (1) ??????????(????SMTP?????)?????????? (2) HTTPS???????????Outlook 2013 Add-In: Configuration Form: This new version includes the following changes: - Refactored code a bit. - Removing configuration from main form to gain more space to display items. - Moved configuration to separate form. You can click the little "gear" icon to access the configuration form (still very simple). - Added option to show past day appointments from the selected day (previous in time, that is). - Added some tooltips. You will have to uninstall the previous version (add/remove programs) if you had installed it ...Terminals: Version 3.0 - Release: Changes since version 2.0:Choose 100% portable or installed version Removed connection warning when running RDP 8 (Windows 8) client Fixed Active directory search Extended Active directory search by LDAP filters Fixed single instance mode when running on Windows Terminal server Merged usage of Tags and Groups Added columns sorting option in tables No UAC prompts on Windows 7 Completely new file persistence data layer New MS SQL persistence layer (Store data in SQL database)...NuGet: NuGet 2.6: Released June 26, 2013. Release notes: http://docs.nuget.org/docs/release-notes/nuget-2.6Python Tools for Visual Studio: 2.0 Beta: We’re pleased to announce the release of Python Tools for Visual Studio 2.0 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, and cross platform debugging support. For a quick overview of the general IDE experience, please watch this video: http://www.youtube.com/watch?v=TuewiStN...Player Framework by Microsoft: Player Framework for Windows 8 and WP8 (v1.3 beta): Preview: New MPEG DASH adaptive streaming plugin for Windows Azure Media Services Preview: New Ultraviolet CFF plugin. Preview: New WP7 version with WP8 compatibility. (source code only) Source code is now available via CodePlex Git Misc bug fixes and improvements: WP8 only: Added optional fullscreen and mute buttons to default xaml JS only: protecting currentTime from returning infinity. 
Some videos would cause currentTime to be infinity which could cause errors in plugins expectin...AssaultCube Reloaded: 2.5.8: SERVER OWNERS: note that the default maprot has changed once again. Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we continue to try to package for those OSes. Or better yet, try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compi...Compare .NET Objects: Version 1.7.2.0: If you like it, please rate it. :) Performance Improvements Fix for deleted row in a data table Added ability to ignore the collection order Fix for Ignoring by AttributesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.95: update parser to allow for CSS3 calc( function to nest. add recognition of -pponly (Preprocess-Only) switch in AjaxMinManifestTask build task. Fix crashing bug in EXE when processing a manifest file using the -xml switch and an error message needs to be displayed (like a missing input file). Create separate Clean and Bundle build tasks for working with manifest files (AjaxMinManifestCleanTask and AjaxMinBundleTask). Removed the IsCleanOperation from AjaxMinManifestTask -- use AjaxMinMan...VG-Ripper & PG-Ripper: VG-Ripper 2.9.44: changes NEW: Added Support for "ImgChili.net" links FIXED: Auto UpdaterDocument.Editor: 2013.25: What's new for Document.Editor 2013.25: Improved Spell Check support Improved User Interface Minor Bug Fix's, improvements and speed upsCY_MVC: sample 1.0: sample 1.0WPF Composites: Version 4.3.0: In this Beta release, I broke my code out into two separate projects. There is a core FasterWPF.dll with the minimal required functionality. This can run with only the Aero.dll and the Rx .dll's. Then, I have a FasterWPFExtras .dll that requires and supports the Extended WPF Toolkit™ Community Edition V 1.9.0 (including Xceed DataGrid) and the Thriple .dll. This is for developers who want more . . . Finally, you may notice the other OPTIONAL .dll's available in the download such as the Dyn...Channel9's Absolute Beginner Series: Windows Phone 8: Entire source code for the Channel 9 series, Windows Phone 8 Development for Absolute Beginners.Indent Guides for Visual Studio: Indent Guides v13: ImportantThis release does not support Visual Studio 2010. The latest stable release for VS 2010 is v12.1. Version History Changed in v13 Added page width guide lines Added guide highlighting options Fixed guides appearing over collapsed blocks Fixed guides not appearing in newly opened files Fixed some potential crashes Fixed lines going through pragma statements Various updates for VS 2012 and VS 2013 Removed VS 2010 support Changed in v12.1: Fixed crash when unable to start...New Projects2d Minecraft Skin Assembler: 2d Minecraft skin assembler downloads a copy of a Minecraft player skin and rearranges the various parts.Arduino Template Express: Arduino Template Express (ATE) provides a wizard driven approach to the creation of libraries and sketches for multiple Arduino development boards. 
ATE is the ASP.NET Identity / Membership Database (SSDT): Web application membership database seed project which can be used to bootstrap your custom ASP.NET Identity / Membership solutionC# Lite Framework: C# Lite Framework CSCore: .NET Audio libraryGraph based image segmentation: This is an parallel implementation of graph based image segmentation algorithm.Indico Automatic Downloader: Automatically download new files in an Indico agenda. Runs every 5 minuets, and pulls down changes. Hopefully, no more having to click to get latest version!l5lab: some code practiseLAPR4: Academic project in the scope of "Laboratory / Project 4" (2nd year of Informatics Engineering). This is a fork of CleanSheets.NH-New: This is a ample projectogloszenia: projekt na bazy dancyhpev - A managed PE file viewer: pev - A managed PE file viewer TidyVidBack - a video background, responsive skin for DNN / DotNetNuke: This is a full-screen video background, responsive skin design for use in DNN - DotNetNuke based on Adammer's Tidy Skin.

    Read the article

  • snmptt not translating traps, even with translate_log_trap_oid=1

    - by mbrownnyc
    I am having some trouble configuring snmptt to properly translate snmp traps. The following is a problem: /etc/snmp/snmptt.conf reflects: EVENT fgFmTrapIfChange .1.3.6.1.4.1.12356.101.6.0.1004 "Status Events" Critical FORMAT $* EXEC /usr/local/nagios/libexec/eventhandlers/submit_check_result $r "snmp_traps" 2 "$O: $+*" "$*" SDESC Trap is sent to the managing FortiManager if an interface IP is changed Variables: 1: fnSysSerial 2: ifName 3: fgManIfIp 4: fgManIfMask EDESC when a trap is received, /var/log/messages reflects: Sep 6 12:07:32 SNMPMANAGERHOST snmptrapd[15385]: 2012-09-06 12:07:32 <UNKNOWN> [UDP: [192.168.100.2]:162->[192.168.100.31]]: #012.1.3.6.1.2.1.1.3.0 = Timeticks: (707253943) 81 days, 20:35:39.43 #011.1.3.6.1.6.3.1.1.4.1.0 = OID: .1.3.6.1.4.1.12356.101.6.0.1004 #011.1.3.6.1.4.1.12356.100.1.1.1.0 = STRING: FGTNNNNNNNNN #011.1.3.6.1.2.1.31.1.1.1.1.10 = STRING: internal4 #011.1.3.6.1.4.1.12356.101.6.2.1.0 = IpAddress: 192.168.65.100 #011.1.3.6.1.4.1.12356.101.6.2.2.0 = IpAddress: 255.255.255.0 Sep 6 12:07:37 SNMPMANAGERHOST icinga: EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT; 192.168.100.2; snmp_traps; 2; enterprises.12356.101.6.0.1004: enterprises.12356.100.1.1.1.0:FGTNNNNNNNNN ifName.10:internal4 enterprises.12356.101.6.2.1.0:192.168.65.100 enterprises.12356.101.6.2.2.0:255.255.255.0 Since the icinga entry reflects the EXEC, it's obvious there is no translations occurring by snmptt. I have verified that translate_log_trap_oid and net_snmp_perl_enable is enabled in snmptt.ini When using --debug=1 to start snmptt, I see the following in the --debugfile: ********** Net-SNMP version 5.05 Perl module enabled ********** The main NET-SNMP version is reported as NET-SNMP version: 5.5. What else can be done to verify that snmptt is configured properly to translate traps? I have run snmptt-net-snmp-test to verify whatever net-snmp-perl version I have installed properly supports translations. The output indicates it does. /root/snmptt_1.3/snmptt-net-snmp-test --best_guess=2 SNMPTT Net-SNMP Test v1.0 (c) 2003 Alex Burger http://snmptt.sourceforge.net MIBS:RFC1213-MIB best_guess: 2 Testing translateObj ******************** Testing: .1.3.6.1.2.1.1.1, long_names=disabled, include_module=disabled Test passed. Result: sysDescr Testing: .1.3.6.1.2.1.1.1, long_names=disabled, include_module=enabled Test passed. Result: RFC1213-MIB::sysDescr Testing: .1.3.6.1.2.1.1.1, long_names=enabled, include_module=disabled Test passed. Result: .iso.org.dod.internet.mgmt.mib-2.system.sysDescr Testing: .1.3.6.1.2.1.1.1, long_names=enabled, include_module=enabled Test passed. Result: RFC1213-MIB::.iso.org.dod.internet.mgmt.mib-2.system.sysDescr Testing: sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: RFC1213-MIB::sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: system.sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: RFC1213-MIB::system.sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing: .iso.org.dod.internet.mgmt.mib-2.system.sysDescr, long_names=disabled, include_module=disabled Test passed. Result: .1.3.6.1.2.1.1.1 Testing getType *************** Testing: .1.3.6.1.2.1.4.1 Test passed. Result: INTEGER Testing: ipForwarding Test passed. Result: INTEGER Testing Description ******************* Test passed. 
Result: ------------------------------------------------- The indication of whether this entity is acting as an IP gateway in respect to the forwarding of datagrams received by, but not addressed to, this entity. IP gateways forward datagrams. IP hosts do not (except those source-routed via the host). Note that for some managed nodes, this object may take on only a subset of the values possible. Accordingly, it is appropriate for an agent to return a `badValue' response if a management station attempts to change this object to an inappropriate value. ------------------------------------------------- I have manually gone through the MIB with the definition that's not resolving, and verified that it is properly linking back to the proper resolved definition. It is: FORTINET-FORTIGATE-MIB.txt contains: fgFmTrapIfChange NOTIFICATION-TYPE OBJECTS { fnSysSerial, ifName, fgManIfIp, fgManIfMask } STATUS current DESCRIPTION "Trap is sent to the managing FortiManager if an interface IP is changed" ::= { fgFmTrapPrefix 1004 } fgFmTrapPrefix OBJECT IDENTIFIER ::= { fgMgmt 0 } fgMgmt OBJECT IDENTIFIER ::= { fnFortiGateMib 6 } fnFortiGateMib ::= { fortinet 101 } IMPORTS FnBoolState, FnIndex, fnAdminEntry, fnSysSerial, fortinet FROM FORTINET-CORE-MIB fortinet MODULE-IDENTITY ::= { enterprises 12356 } LOOKS GOOD!!!!! 1.3.6.1.4.1.12356.101.6.0.1004 I've exhausted all the documentation and even posted fruitlessly in the snmptt-users mailing list. I can not prove it is the MIB. Why would snmptt fail to translate traps? Thanks, Matt
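For comparison, it may help to line up the handful of snmptt.ini options that control translation; the excerpt below is only a sketch (the option names are the ones a stock snmptt.ini uses, to the best of my knowledge, and the values are illustrative, so verify them against your installed copy), since symptoms like the untranslated EXEC output above usually come down to one of these settings or to the MIB not being loaded at all:

    # snmptt.ini excerpt - illustrative values only
    net_snmp_perl_enable = 1       # required for any symbolic translation
    translate_log_trap_oid = 1     # translate the trap OID itself in log and EXEC output
    translate_value_oids = 1       # translate OID values inside the variable bindings
    translate_integers = 1         # translate enumerated INTEGER values
    mibs_environment = ALL         # or an explicit MIB list that includes FORTINET-FORTIGATE-MIB
    best_guess = 2                 # matches the --best_guess=2 used by snmptt-net-snmp-test above

After changing snmptt.ini or snmptt.conf, the snmptt daemon normally needs to be restarted (or sent a HUP) before the new settings take effect.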

    Read the article

  • Vectorization of MATLAB code for faster execution

    - by user3237134
    My code works in the following manner: 1.First, it obtains several images from the training set 2.After loading these images, we find the normalized faces,mean face and perform several calculation. 3.Next, we ask for the name of an image we want to recognize 4.We then project the input image into the eigenspace, and based on the difference from the eigenfaces we make a decision. 5.Depending on eigen weight vector for each input image we make clusters using kmeans command. Source code i tried: clear all close all clc % number of images on your training set. M=1200; %Chosen std and mean. %It can be any number that it is close to the std and mean of most of the images. um=60; ustd=32; %read and show images(bmp); S=[]; %img matrix for i=1:M str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image eval('img=imread(str);'); [irow icol d]=size(img); % get the number of rows (N1) and columns (N2) temp=reshape(permute(img,[2,1,3]),[irow*icol,d]); %creates a (N1*N2)x1 matrix S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence %this is our S end %Here we change the mean and std of all images. We normalize all images. %This is done to reduce the error due to lighting conditions. for i=1:size(S,2) temp=double(S(:,i)); m=mean(temp); st=std(temp); S(:,i)=(temp-m)*ustd/st+um; end %show normalized images for i=1:M str=strcat(int2str(i),'.jpg'); img=reshape(S(:,i),icol,irow); img=img'; end %mean image; m=mean(S,2); %obtains the mean of each row instead of each column tmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255 img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matrix img=img'; %creates a N1xN2 matrix by transposing the image. % Change image for manipulation dbx=[]; % A matrix for i=1:M temp=double(S(:,i)); dbx=[dbx temp]; end %Covariance matrix C=A'A, L=AA' A=dbx'; L=A*A'; % vv are the eigenvector for L % dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx'; [vv dd]=eig(L); % Sort and eliminate those whose eigenvalue is zero v=[]; d=[]; for i=1:size(vv,2) if(dd(i,i)>1e-4) v=[v vv(:,i)]; d=[d dd(i,i)]; end end %sort, will return an ascending sequence [B index]=sort(d); ind=zeros(size(index)); dtemp=zeros(size(index)); vtemp=zeros(size(v)); len=length(index); for i=1:len dtemp(i)=B(len+1-i); ind(i)=len+1-index(i); vtemp(:,ind(i))=v(:,i); end d=dtemp; v=vtemp; %Normalization of eigenvectors for i=1:size(v,2) %access each column kk=v(:,i); temp=sqrt(sum(kk.^2)); v(:,i)=v(:,i)./temp; end %Eigenvectors of C matrix u=[]; for i=1:size(v,2) temp=sqrt(d(i)); u=[u (dbx*v(:,i))./temp]; end %Normalization of eigenvectors for i=1:size(u,2) kk=u(:,i); temp=sqrt(sum(kk.^2)); u(:,i)=u(:,i)./temp; end % show eigenfaces; for i=1:size(u,2) img=reshape(u(:,i),icol,irow); img=img'; img=histeq(img,255); end % Find the weight of each face in the training set. omega = []; for h=1:size(dbx,2) WW=[]; for i=1:size(u,2) t = u(:,i)'; WeightOfImage = dot(t,dbx(:,h)'); WW = [WW; WeightOfImage]; end omega = [omega WW]; end % Acquire new image % Note: the input image must have a bmp or jpg extension. % It should have the same size as the ones in your training set. 
% It should be placed on your desktop ed_min=[]; srcFiles = dir('G:\newdatabase\*.jpg'); % the folder in which ur images exists for b = 1 : length(srcFiles) filename = strcat('G:\newdatabase\',srcFiles(b).name); Imgdata = imread(filename); InputImage=Imgdata; InImage=reshape(permute((double(InputImage)),[2,1,3]),[irow*icol,1]); temp=InImage; me=mean(temp); st=std(temp); temp=(temp-me)*ustd/st+um; NormImage = temp; Difference = temp-m; p = []; aa=size(u,2); for i = 1:aa pare = dot(NormImage,u(:,i)); p = [p; pare]; end InImWeight = []; for i=1:size(u,2) t = u(:,i)'; WeightOfInputImage = dot(t,Difference'); InImWeight = [InImWeight; WeightOfInputImage]; end noe=numel(InImWeight); % Find Euclidean distance e=[]; for i=1:size(omega,2) q = omega(:,i); DiffWeight = InImWeight-q; mag = norm(DiffWeight); e = [e mag]; end ed_min=[ed_min MinimumValue]; theta=6.0e+03; %disp(e) z(b,:)=InImWeight; end IDX = kmeans(z,5); clustercount=accumarray(IDX, ones(size(IDX))); disp(clustercount); Running time for 50 images:Elapsed time is 103.947573 seconds. QUESTIONS: 1.It is working fine for M=50(i.e Training set contains 50 images) but not for M=1200(i.e Training set contains 1200 images).It is not showing any error.There is no output.I waited for 10 min still there is no output. I think it is going infinite loop.What is the problem?Where i was wrong?
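As one illustration of the vectorization the title asks about (this is only a sketch of the normalization step, reusing the variable names S, ustd and um from the code above; it is not a complete fix for the M=1200 hang, which may well be a memory or I/O problem rather than a loop-speed problem), the per-column loop that normalizes each image can be replaced with whole-array operations:

    % Vectorized normalization of all image columns at once
    S  = double(S);                 % convert once instead of per column
    m  = mean(S, 1);                % 1 x M row vector of column means
    st = std(S, 0, 1);              % 1 x M row vector of column standard deviations
    S  = bsxfun(@rdivide, bsxfun(@minus, S, m), st) * ustd + um;

The same idea applies to the later loops that build dbx, omega and the per-image weight vectors. It is also worth preallocating S (for example with zeros(irow*icol, M)) instead of growing it with S=[S temp] inside the image-reading loop, since repeated concatenation reallocates the whole matrix on every iteration.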

    Read the article

  • How to set up port forwarding and firewall settings for torrents using Transmission on Mac OS X 10.5

    - by Liz
    I have picked up bits of advice here and there on the internet and got someway through this tortuous exercise (after it took 18 hours to download the first torrent I tried yesterday - magnet-link for a film). Where I have got stuck is with configuring the firewall on the Netgear Router but I am not sure if I have caused the problem myself by something else I have done configuring the Mac System Preferences for Security or Networking. I have been following the sections of these instructions that seem to apply, although they are written for a different OSX version (don't know which one, but the screen shots do not match what I see) and I am not wanting to set up my Mac as a server and attending to the parts that apply to port forwarding for Netgear rather than LinkSys: http://homepage.mac.com/car1son/static_port_fwd_intro.html I have been trying to follow these instructions: Instructions for DG834, DG834G, DG824M, FR114W, FM114P, FR114P, FR328S, FVL328, FVS328, FVS338, FVX538, FWAG114, FWG114P, or FVS318v3 These routers do port forwarding by assigning port numbers to a "service" associated with the application you want to run. "Rules" are set for particular services. Rules block or allow access, based on various conditions such as the time of day and the name of the service. To Create a New Inbound or Outbound Rule 1. Submit the router's address in an Internet browser. (The default is 192.168.0.1). 2. Enter the router's username and password. 3. From the main menu, click Security > Rules. 4. Click Add for inbound or outbound traffic, as appropriate to the application you are planning to run. 5. Select the Service. The services the router knows about are listed in the drop down. If the service you want is not listed, add it as described in the next section. 6. Select the Action, for example ALLOW always. 7. For Send to LAN Server, enter the IP address of the local server. Note that this is also the IP address the computers on your LAN will access. 8. For WAN User choose Any, or limit access to particular IP addresses. 9. For Log selection it is reasonable to turn logs on, especially at the beginning when you are unsure of the result of the changes you are making. Later, you may want to set logs to "Never" for performance reasons. 10. Click Apply. As noted in user manual for some models: * Consider using the Dynamic DNS feature on the Advanced menu, so that external users can find your network when the DHCP lease is renewed by your ISP. * If your own LAN server uses DHCP, and your IPs change on rebooting, consider using the Reserved IP Address feature in the LAN IP menu. To Add a Service for These Routers 1. Click Security > Services > Add Custom Service. 2. Enter any name you choose for the service. 3. Select whether the service is to use TCP or UDP. If you are unsure, select both. 4. Enter the lowest port number used by the service. 5. Enter the highest port number used. If the service uses only one port number, enter the same number. 6. Click Apply. There is no "Security - Rules" submenu in the Netgear page, so I have been trying to access "Security - Firewall Rules". I can access everthing else in the Netgear settings as Admin but I cannot get the "Firewall Rules" section to open up. (I am not 100% sure I will know exactly what to do if and when I do get it opened up!) 
I haven't managed to find, through searching the internet, any instructions that seem to apply specifically to what I am trying to achieve, so I would be very grateful if someone could either point me in the right direction or give me some advice directly. Best wishes, Liz

    Read the article

  • CodePlex Daily Summary for Thursday, August 21, 2014

    CodePlex Daily Summary for Thursday, August 21, 2014Popular ReleasesOutlook 2013 Backup Add-In: Outlook Backup Add-In 1.3: Changelog for new version: Added button in config-window to reset the last backup-time (this will trigger the backup after closing outlook) Minimum interval set to 0 (backup at each closing of outlook) Catch exception when data store entry is corrupt Added two parameters (prefix and suffix) to automatically rename the backup file Updated VSTO-Runtime to 10.0.50325 Upgraded project to Visual Studio 2013 Added optional command to run after backup (e.g. pack backup files, ...) Add...File Explorer for WPF: FileExplorer3_20August2014: Please see Aug14 Update.ODBC Connect: v1.0: ODBC Connect executables for both 32bit and 64bit ODBC data sourcesMSSQL Deployment Tool: Microsoft SQL Deploy Tool v1.3.1: MicrosoftSqlDeployTool: v1.3.1.38348 What's changed? Update namespace and assembly name. Bug fixing.SharePoint 2013 Search Query Tool: SharePoint 2013 Search Query Tool v2.1: Layout improvements Bug fixes Stores auth method and user name Moved experimental settings to Advanced boxCtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.2.41183 Alpha: This alpha of the CtrlAltStudio Viewer provides some preliminary Oculus Rift DK2 support. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-2-41183-alpha Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http://ctrlaltstudio.com/viewer/privacy Disclaimer: This software is not provided or supported by Linden Lab, the makers of Second Life.HDD Guardian: HDD Guardian 0.6.1: New: package now include smartctl 6.3; Removed: standard notification e-mail. Now you have to set your mail server to send e-mail alerts; Bugfix: USB detection error; custom e-mail server settings issue; bottom panel displays a wrong ATA error count.VG-Ripper & PG-Ripper: VG-Ripper 2.9.62: changes NEW: Added Support for 'MadImage.org' links NEW: Added Support for 'ImgSpot.org' links NEW: Added Support for 'ImgClick.net' links NEW: Added Support for 'Imaaage.com' links NEW: Added Support for 'Image-Bugs.com' links NEW: Added Support for 'Pictomania.org' links NEW: Added Support for 'ImgDap.com' links NEW: Added Support for 'FileSpit.com' links FIXED: 'ImgSee.me' linksExchange Database Recovery With and Without Log Files is Possible: Exchange Recovery Application: This Exchange Recovery Software comes with free trial edition which helps users to inspect the working capability of the recovery process. Download free demo version and repair inaccessible mailboxes from EDB file without any obstructions.MongoRepository: MongoRepository 1.6.6: Installing using NuGet (recommended)MongoRepository is now a NuGet package for your convenience. Step-by-step instructions can be found in Installing MongoRepository using NuGet Installing using BinariesYou can also choose to download the binaries instead of using NuGet. 
There are 2 downloads: mongorepository_full.x.x.x contains all binaries required (MongoRepository and the 10gen C# driver) mongorepository.x.x.x contains only the MongoRepository binary Make sure you reference MongoReposit...Cryptography Enumerations JavaScript Shell: Cryptography Enumerations JavaScript Shell 1.0.0: First ReleaseMagick.NET: Magick.NET 7.0.0.0001: Magick.NET linked with ImageMagick 7-Beta.CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.2: This release adds the following new features and bug fixes from CMake Tools for Visual Studio 1.1: Added support for CMake 3.0. Added support for word completion. Added IntelliSense support for the CMAKEHOSTSYSTEM_INFORMATION command. Fixed syntax highlighting for tokens beginning with escape sequences. Fixed issue uninstalling CMake Tools for Visual Studio after Visual Studio has been uninstalled.GW2 Personal Assistant Overlay: GW2 Personal Assistant Overlay 1.1: Overview1.1 is the second 'stable' release of the GW2 Personal Assistant Overlay. This version includes just a couple of very minor features and some minor bug fixes. For details regarding installation, setup, and general use, see Documentation. Note: If you were using a previous version, you will probably want to copy over the following user settings files: GW2PAO.DungeonSettings.xml GW2PAO.EventSettings.xml GW2PAO.WvWSettings.xml GW2PAO.ZoneCompletionSettings.xml New FeaturesAdded new "No...Fluentx: Fluentx v1.5.3: Added few more extension methods.fastJSON: v2.1.2: 2.1.2 - bug fix circular referencesJPush.NET: JPush Server SDK 1.2.1 (For JPush V3): Assembly: 1.2.1.24728 JPush REST API Version: v3 JPush Documentation Reference .NET framework: v4.0 or above. Sample: class: JPushClientV3 2014 Augest 15th.SEToolbox: SEToolbox 01.043.008 Release 1: Changed ship/station names to use new DisplayName instead of Beacon/Antenna. Fixed issue with updated SE binaries 01.043.018 using new Voxel Material definitions.Google .Net API: Drive.Sample: Google .NET Client API – Drive.SampleInstructions for the Google .NET Client API – Drive.Sample</h2> http://code.google.com/p/google-api-dotnet-client/source/browse/?repo=samples#hg%2FDrive.SampleBrowse Source, or main file http://code.google.com/p/google-api-dotnet-client/source/browse/Drive.Sample/Program.cs?repo=samplesProgram.cs <h3>1. Checkout Instructions</h3> <p><b>Prerequisites:</b> Install Visual Studio, and <a href="http://mercurial.selenic.com/">Mercurial</a>.</p> ...FineUI - jQuery / ExtJS based ASP.NET Controls: FineUI v4.1.1: -??Form??????????????(???-5929)。 -?TemplateField??ExpandOnDoubleClick、ExpandOnEnter、ExpandToSelectRow????(LZOM-5932)。 -BodyPadding???????,??“5”“5 10”,???????????“5px”“5px 10px”。 -??TriggerBox?EnableEdit=false????,??????????????(Jango_Jing-5450)。 -???????????DataKeyNames???????????(yygy-6002)。 -????????????????????????(Gnid-6018)。 -??PageManager???AutoSizePanelID????,??????????????????(yygy-6008)。 -?FState???????????????,????????????????(????-5925)。 -??????OnClientClick???return?????????(FineU...New ProjectsAesonFramework: Aeson FrameworkBullet for Windows: Bullet is used to simulate collision detection, soft and rigid body dynamics. 
This is an early version of Bullet with support for Windows and Windows Phone.CC-Homework: This is a collection of homework projects completed during coder camps.Integrating Exchange Server 2010 Mail Attachments with SharePoint 2013 via C#.: Integrating Exchange Server 2010 Mail Attachments with SharePoint 2013 via C#.Mosquito.ViewModel: A minimal MVVM library aimed at removing the pain of implementing ViewModels.OWL API for .NET: An open source project that port the java OWLAPI to .Net. It uses IKVM and contains scripts to compile the OWLAPI libraries and Java reasoners with Samples.PCStoreManager: Progetto integrativo corso programmazione ad oggettiStockEr: Spa application for stocks analysis.SysLog Server: This is a free Syslog server for windows.UnixtimeHelpers: This project minimalistic assembly contains unixtime converters. Converting DateTime to unixtime (double type respresent) and vice versa.Winner - Scommessa vincente: Winner permette di visualizzare le quote dei prossimi maggiori incontri sportivi e di compilare una schedina con una o più scommesse.

    Read the article

  • Open source CMS for a university department

    - by Greg Kuperberg
    I realize that this type of question gets asked over and over again. Nonetheless, I want to ask a more specific version. I'm in a university math department. Long ago our sysadmins (or just one at the time) switched to a web content management system. At the time, Zope looked like an informed choice. We have used Zope for years, but at least in my opinion, it has always been a controversial decision. At the time I didn't understand why it was so important to have a web CMS. Now I see that it certainly is important, but I don't know that it should be Zope. The good (even necessary) features of Zope for us are: It's free and Linux-based. It is a true CMS and not something else (e.g. wiki or blog) It lets you write HTML and scripts. What I really don't like about Zope is that the outcome of using it is all-or-nothing in a lot of ways. At least in convenient use, it ends up dividing the enterprise into superusers who can do everything, and lusers who can't do anything (except write their own home pages in plain HTML). It has a huge user manual, which end users won't have time to read. Somehow with the access permissions, the simple thing to do is to let a few admins access all of the source and data and that's it. Since this is a math department, the user base varies from real novices to people who understand computers reasonably well. But as it stands, any change that involves Zope has to go through the sysadmins. When the sysadmins are in a hurry, sometimes they will also just add plain HTML pages to the web site instead of using the Zope framework. It doesn't help matters that Zope is fairly disk-intensive and fairly hype-intensive. Not to dwell on Zope too much, but I am wondering what is the right web CMS for a mixed user base of terminal novices, quick studies, and experienced users. Some users might want intermediate permissions, e.g. read permission but not write permission, or permission to change some subset of the pages or see some subset of the database tables. Also it should be Linux-based and open source and a little bit scalable, and of course widely used and well-supported is a good idea. I might guess that the answer is Drupal just because that was the general answer before, but I don't know if it is the right type of CMS for this purpose. (But note that Python is a relatively popular language in a math department, among other reasons because Sage is based on Python.) I can see that I didn't completely define the question and that people are guessing what type of site it is. It is the UC Davis Math Department. The main structure of the site is not suitable for a wiki and it is also not the same thing as a course environment like Moodle. Rather, the site is mostly structured as a generic medium-small enterprise. Some components of the site could be a wiki, Moodle, LaTeX plugin, Request Tracker, etc. However, the main issue is not these components. The main issue is that it would be better to decentralize management of the site. Right now, everything that is in the Zope CMS has to go through the sysadmins. Every other user in the department either has to put in a request to them, or write their own web pages with no help from Zope. There are two main reasons for this: (1) Other people in the department don't have time to read the Zope manual. (2) It's a hassle to set up intermediate permissions in Zope. However, there are other people in the department who know how to write computer programs and use markup languages. 
I wouldn't want a solution that assumes that users either can't be trusted with much more than drag-and-drop, or that they are IT professionals who sleep with documentation manuals. I'm wondering if Plone/Zope still has this quality, since certainly Zope by itself does. But I also wonder sometimes if common-sense flexibility is unfashionable these days, and that things in general have to be either mindlessly easy or incredibly powerful.

    Read the article

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details in the end. I didn't manage to fix it, and I will file a bug. EDIT 2: I managed to fix it, it apparently works. I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installtion CD (then updated it) on each of these machines. The cloud seems to work fine, I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following: [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0) [ 0.462388] Please append a correct "root=" boot option; here are the available partitions: [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu [ 0.466526] Call Trace: [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194 [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59 [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7 [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145 [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10 [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13 The filesystem is labeled "cloudimg-rootfs", inside the image both /etc/fstab and /boot/grub/grub.cfg always refer to the image by the label, everything seems to be correct, yet the kernel says it can't find the root file system. I've spent many hours googling, but nothing came out. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the Kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, replaced it with the -virtual ones. Then I extracted the new kernel (and replaced the original one (-generic) that came with the tarball) because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then, the console started showing this: mount: mount point ext4 does not exist If you check the /etc/fstab file in the image, it says: LABEL=cloudimg-rootfs ext4 defaults 0 1 Damnt, where's my mount point? Note that it is missing /proc as well. Well, when you think it is over, you will notice that your instance will have no network connectivity. Let's check /etc/network/interface: # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback Oh my! It is missing eth0... here I stopped. I can't take no more. I give up. Looks like Canonical has just forgotten to properly set up this image. 
At first, I thought: "have I downloaded a server image by mistake?", but no, I double-checked. It is really the cloud image; it even has "cloud-init" installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and hope they fix it soon! EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea if the image is now good to go...
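For anyone hitting the same image problems, the last two fixes described above boil down to two small files inside the chrooted image; a sketch of what the corrected entries would typically look like follows (the root label is the one reported by the image itself, while eth0 as the first interface name is an assumption that holds for the stock Ubuntu cloud images of this era):

    # /etc/fstab - the original line was missing the mount point
    LABEL=cloudimg-rootfs   /   ext4   defaults   0 1

    # /etc/network/interfaces - add the primary NIC so DHCP configures it at boot
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet dhcp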

    Read the article

  • Clarity is important, both in question and in answer.

    - by gerrylowry
    clarity is important ... i'm often reminded of the Clouseau movie in which Peter Sellers as Chief Inspector Clouseau asks a hotel clerk "Does your dog bite?" ... the clerk answers "no" ... after Clouseau has been bitten by the dog, he looks at the hotel clerk who says "That's not my dog".  Clarity is important, both in question and in answer. i've been a member of forums.asp.net since 2008 ... like many of my peers at forums.asp.net, i've answered my fair share of questions. FWIW, the purpose of this, my first web log post to http://weblogs.asp.net/gerrylowry is to help new members ask better questions and in turn get better answers. TIMTOWTDI  =.  there is more than one way to do it imho, the best way to ask a question in any forum, or even person to person, is to first formulate your question and then ask yourself to answer your own question. Things to consider when asking (the more complete your question, the more likely you'll get the answer you require): -- have you searched Google and/or your favourite search engine(s) before posting your question to forums.asp.net; examples: site:msdn.microsoft.com entity framework 5.0 c#http://lmgtfy.com/?q=site%3Amsdn.microsoft.com+entity+framework+5.0+c%23 site:forums.asp.net MVC tutorial c#http://lmgtfy.com/?q=site%3Aforums.asp.net+MVC+tutorial+c%23 -- are you asking your question in the correct forum?  look at the forums' descriptions at http://forums.asp.net/; examples: Getting Started If you have a general ASP.NET question on a topic that's not covered by one of the other more specific forums - ask it here. MVC Discussions regarding ASP.NET Model-View-Controller (MVC) C# Questions about using C# for ASP.NET development Note:  if your question pertains more to c# than to MVC, choosing the C# forum is likely to be more appropriate. -- is your post subject clear and concise, yet not too vague? compare these three subjects (all three had something to do with GridView):     (1)    please help     (2)    gridview      (3)    How to show newline in GridView  -- have you clearly explained your scenario? compare:  my leg hurts   with   when i walk too much, my right knee hurts in the knee joint  compare:  my code does not work    with    when i enter a date as 2012-11-8, i get a FormatException -- have you checked your spelling, your grammar, and your English? for better or worse, English is the language of forums.asp.net ... many of the currently 170000++ forums.asp.net are not native speakers of English; that's okay ... however, there are times when choosing the more appropriate words will likely get one a better answer; fortunately, there are web tools to help you formulate your question, for example, http://translate.google.com/.  -- have you provided relevant information about your environment? here are a few examples ... feel free to include other items to your question ... rule of thumb:  if you think a given detail is relevant, it likely is -- what technology are you using?    ASP.NET MVC 4, ASP.NET MVC 3, WebForms, ...  -- what version of Visual Studio are you using?  vs2012 (ultimate, professional, express), vs2010, vs2008 ... -- are you hosting your own website?  are you using a shared hosting service? -- are you experience difficulties in just one browser? more than one browser? -- what browser version(s) are you using?   ie8? ie9? ... -- what is your operating system?     win8, win7, vista, XP, server 2008 R2 ... -- what is your database?   SQL Server 2008 R2, ss2005, MySQL, Oracle, ... -- what is your web server?  
iis 7.5, iis 6, .... -- have you provided enough information for someone to be able to answer your question? Here's an actual example from an O.P. that i hope is self-explanatory: I'm trying to make a simple calculator when i write the code in windows application it worked when i tried it in web application it doesn't work and there are no errors what should i do ??!! -- have you included unnecessary information? more than once, i've seen the O.P. (original post, original poster) include many extra lines of code that were not relevant to the actual question; the more unnecessary code that you include, the less likely your volunteer peers will be motivated to donate their time to help you. -- have you asked the question that you want answered? "Does this dog bite?" -- are your expectations reasonable? -- generally, persons who are going to answer your questions are your peers ... they are unpaid volunteers ... -- are you looking for help with your homework, work assignment, or hobby? or, are you expecting someone else to do your work for you?  -- do you expect a complete solution or are you simply looking for guidance and direction? -- you are likely to get more help by first making a reasonable effort to help yourself first Clarity is important, both in question and in answer. if you are answering someone else's question, please remember that clear answers are just as important as clear questions; would you understand your own answer? Things to consider when answering: -- have you tested your code example?  if you have, say so; if you've not tested your code example, also say so -- imho, it's okay to guess as long as you clearly state that you're guessing ... sometimes a wrong guess can still help the O.P. find her/his way to the right answer -- meanness does not contribute to being helpful; sometimes one may become frustrated with the O.P. and/or others participating in a thread, if that happens to you, be kind regardless; speaking from my own experience, at least once i've allowed myself to be frustrated into writing something inappropriate that i've regretted later ... being a meany does not feel good ... being kind and helpful feels fantastic! Tip:  before asking your question, read more than a few existing questions and answers to get a sense of how your peers ask and answer questions. Gerry P.S.:  try to avoid necroposting and piggy backing. necroposting is adding to an old post, especially one that was resolved months ago. piggy backing is adding your own question to someone else's thread.

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspxThis week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts the local desktops they work on are not ideal for running most of the labs as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area: MSDN software can now be used on Azure VM You don't pay for Azure VM's when they are no longer used  Since the support team only have limited experience of Windows Azure and the organisation also have an Enterprise Agreement we decided it would be best value for money to spin up a training lab in a subscription on the EA and then we can turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a powershell script which would setup my training lab. The aim was to create 18 VM's which would be based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below The Start & Variables The below text will setup the powershell environment and some variables which I will use elsewhere in the script. It will also import the Azure Powershell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & Powershell so already know how to do this. Set-ExecutionPolicy Unrestrictedcls $startTime = get-dateImport-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"# Azure Publisher Settings $azurePublisherSettings = '<Your settings file>.publishsettings'  # Subscription Details $subscriptionName = "<Your subscription name>" $defaultStorageAccount = "<Your default storage account>"  # Affinity Group Details $affinityGroup = '<Your affinity group>' $dataCenter = 'West Europe' # From Get-AzureLocation  # VM Details $baseVMName = 'TRN' $adminUserName = '<Your admin username>' $password = '<Your admin password>' $size = 'Medium' $vmTemplate = '<The name of your VM template image>' $rdpFilePath = '<File path to save RDP files to>' $machineSettingsPath = '<File path to save machine info to>'    Functions In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. 
This will do the following actions: If the VM already exists it will be deleted Create the cloud service Create the VM from the template I have created Add an endpoint so we can RDP to them all over the same port Download the RDP file so there is a short cut the trainees can easily access the machine via Write settings for the machine to a log file  function CreateVM($machineNo) { # Specify a name for the new VM $machineName = "$baseVMName-$machineNo" Write-Host "Creating VM: $machineName"       # Get the Azure VM Image      $myImage = Get-AzureVMImage $vmTemplate   #If the VM already exists delete and re-create it $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName if($existingVm -ne $null) { Write-Host "VM already exists so deleting it" Remove-AzureVM -Name $machineName -ServiceName $serviceName }   "Creating Service" $serviceName = "bupa-azure-train-$machineName" Remove-AzureService -Force -ServiceName $serviceName New-AzureService -Location $dataCenter -ServiceName $serviceName   Write-Host "Creating VM: $machineName" New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password  Write-Host "Updating the RDP endpoint for $machineName" Get-AzureVM -name $machineName -ServiceName $serviceName ` | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 ` | Update-AzureVM    Write-Host "Get the RDP File for machine $machineName" $machineRDPFilePath = "$rdpFilePath\$machineName.rdp" Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"   WriteMachineSettings "$machineName" "$serviceName" }    The delete machine settings function is used to delete the log file before we start re-running the process.  function DeleteMachineSettings() { Write-Host "Deleting the machine settings output file" [System.IO.File]::Delete("$machineSettingsPath"); }    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VM's to our infrastructure team to be able to configure access to all of the VM's    function WriteMachineSettings([string]$vmName, [string]$vmServiceName) { Write-Host "Writing to the machine settings output file"   $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP   $sb = new-object System.Text.StringBuilder $sb.Append("Service Name: "); $sb.Append($vm.ServiceName); $sb.Append(", "); $sb.Append("VM: "); $sb.Append($vm.Name); $sb.Append(", "); $sb.Append("RDP Public Port: "); $sb.Append($vmEndpoint.Port); $sb.Append(", "); $sb.Append("Public DNS: "); $sb.Append($vmEndpoint.Vip); $sb.AppendLine(""); [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());  } # end functions    Rest of Script In the rest of the script it is really just the bit that orchestrates the actions we want to happen. 
It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VM's  Import-AzurePublishSettingsFile $azurePublisherSettings Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount Select-AzureSubscription -SubscriptionName $subscriptionName  DeleteMachineSettings    "Starting creating Bupa International Azure Training Lab" $numberOfVMs = 16  for ($index=1; $index -le $numberOfVMs; $index++) { $vmNo = "$index" CreateVM($vmNo); }    "Finished creating Bupa International Azure Training Lab" # Give it a Minute Start-Sleep -s 60  $endTime = get-date "Script run time " + ($endTime - $startTime)    Conclusion As you can see there is nothing too fancy about this script but in our case of creating a small isolated training lab which is not connected to our corporate network then we can easily use this to provision the lab. Im sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure about the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the costing of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.
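Since the whole point of the lab is to stop paying for the VMs between sessions, a companion teardown loop in the same style as the script above is worth keeping alongside it; the sketch below reuses the $baseVMName and service-name convention from the CreateVM function, but note that whether a stopped VM is also deallocated (and therefore stops accruing compute charges) depends on the version of the Azure PowerShell module, so check that the instances show as StoppedDeallocated afterwards:

    # Stop every training lab VM once the course is finished
    $numberOfVMs = 16
    for ($index = 1; $index -le $numberOfVMs; $index++)
    {
        $machineName = "$baseVMName-$index"
        $serviceName = "bupa-azure-train-$machineName"
        Stop-AzureVM -ServiceName $serviceName -Name $machineName -Force
    }

The matching Start-AzureVM cmdlet can be used in the same loop to bring the lab back up for follow-up sessions.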

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi database environment is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time the masked output should not be predictable. Deterministic masking also eliminates the need to spend enormous amount of time spent in identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking the data across databases using Oracle Data Masking and Subsetting The readers of post should have minimal knowledge on Oracle Enterprise Manager 12c, Application Data Modeling, Data Masking concepts. For more information on these concepts, please refer to Oracle Data Masking and Subsetting document Oracle Data Masking and Subsetting 12c provides four methods using which users can consistently yet irreversibly mask their inputs. 1. Substitute 2. SQL Expression 3. Encrypt 4. User Defined Function SUBSTITUTE The substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash based algorithm in the back end the mappings are consistent. For example consider DEPARTMENT_ID in EMPLOYEES table is replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation that all occurrences of DEPARTMENT_ID say ‘101’ will be replaced with ‘502’ provided same substitution table and column is used , i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screen shot shows the usage of the Substitute masking format with in a masking definition: Note that the uniqueness of the masked value depends on the number of columns being used in the substitution table i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values without which only consistency is maintained but not uniqueness. SQL EXPRESSION SQL Expression replaces an existing value with the output of a specified SQL Expression. For example while masking an EMPLOYEES table the EMAIL_ID of an employee has to be in the format EMPLOYEE’s [email protected] while FIRST_NAME and LAST_NAME are the actual column names of the EMPLOYEES table then the corresponding SQL Expression will look like %FIRST_NAME%||’.’||%LAST_NAME%||’@COMPANY.COM’. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table than the corresponding EMAIL ID will be replaced accordingly by the masking scripts. One of the interesting aspect’s of a SQL Expressions is that you can use sub SQL expressions, which means that you can write a nested SQL and use it as SQL Expression to address a complex masking business use cases. SQL Expression can also be used to consistently replace value with hashed value using Oracle’s PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example for replacing the DEPARTMENT_IDs with a hashed number ORA_HASH (%DEPARTMENT_ID%, 1000) The following screen shot shows the usage of encrypt masking format with in the masking definition: ORA_HASH takes three arguments: 1. Expression which can be of any data type except LONG, LOB, User Defined Type [nested table type is allowed]. In the above example I used the Original value as expression. 2. Number of hash buckets which can be number between 0 and 4294967295. The default value is 4294967295. 
You can also co-relate the number of hash buckets to a range of numbers. In the above example above the bucket value is specified as 1000, so the end result will be a hashed number in between 0 and 1000. 3. Seed, can be any number which decides the consistency, i.e. for a given seed value the output will always be same. The default seed is 0. In the above SQL Expression a seed in not specified, so it to 0. If you have to use a non default seed then the function will look like. ORA_HASH (%DEPARTMENT_ID%, 1000, 1234 The uniqueness depends on the input and the number of hash buckets used. However as ORA_HASH uses a 32 bit algorithm, considering birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 232-1 unique values. ENCRYPT Encrypt masking format uses a blend of 3DES encryption algorithm, hashing, and regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as seed to maintain consistent outputs for a given input. The following screen shot shows the usage of encrypt masking format with in the masking definition: Regular Expressions may look complex for the first time users but you will soon realize that it’s a simple language. There are many resources in internet, oracle documentation, oracle learning library, my oracle support on writing a Regular Expressions, out of all the following My Oracle Support document helped me to get started with Regular Expressions: Oracle SQL Support for Regular Expressions[Video](Doc ID 1369668.1) USER DEFINED FUNCTION [UDF] User Defined Function or UDF provides flexibility for the users to code their own masking logic in PL/SQL, which can be called from masking Defintion. The standard format of an UDF in Oracle Data Masking and Subsetting is: Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2; Where • rowid is the row identifier of the column that needs to be masked • column_name is the name of the column that needs to be masked • original_value is the column value that needs to be masked You can achieve deterministic masking by using Oracle’s built in hash functions like, ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, DBMS_UTILITY. GET_HASH_VALUE.Please refers to the Oracle Database Documentation for more information on the Oracle Hash functions. For example the following masking UDF generate deterministic unique hexadecimal values for a given string input: CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2 (26); no_of_characters number(2); BEGIN no_of_characters:=6; stext:=substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(text),1)),0,no_of_characters); RETURN stext; END; The uniqueness depends on the input and length of the string and number of bits used by hash algorithm. In the above function MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function which is a 128 bit algorithm which produces 2^128-1 unique hashed values , however this is limited by the length of the input string which is 6, so only 6^6 unique values will be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post. 
An another example is to consistently replace characters or numbers preserving the length and special characters as shown below: CREATE OR REPLACE FUNCTION RD_DUS(rid varchar2,column_name varchar2,orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2(26); BEGIN DBMS_RANDOM.SEED(orig_val); stext:=TRANSLATE(orig_val,'ABCDEFGHILKLMNOPQRSTUVWXYZ',DBMS_RANDOM.STRING('U',26)); stext:=TRANSLATE(stext,'abcdefghijklmnopqrstuvwxyz',DBMS_RANDOM.STRING('L',26)); stext:=TRANSLATE(stext,'0123456789',to_char(DBMS_RANDOM.VALUE(1,9))); stext:=REPLACE(stext,'.','0'); RETURN stext; END; The following screen shot shows the usage of an UDF with in a masking definition: To summarize, Oracle Data Masking and Subsetting helps you to consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application table. Happy Masking
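To see the deterministic behaviour of ORA_HASH described above in isolation, it can be queried directly against the post's EMPLOYEES example before wiring it into a masking definition (a sketch only; the bucket count of 1000 and the seed of 1234 are the illustrative values used earlier in the post):

    -- The same input, bucket count and seed always return the same hash value,
    -- so repeated masking runs map DEPARTMENT_ID values consistently.
    SELECT department_id,
           ORA_HASH(department_id, 1000, 1234) AS masked_department_id
    FROM   employees;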

    Read the article

  • Configuring Oracle HTTP Server 12c for WebLogic Server Domain

    - by Emin Askerov
    Oracle HTTP Server (OHS) 12c 12.1.2 which was released in July 2013 as a part of Oracle Web Tier 12c is the web server component of Oracle Fusion Middleware. In essence this is Apache HTTP Server 2.2.22 (with critical bug fixes from higher versions) which includes modules developed specifically by Oracle. It provides a listener functionality for Oracle WebLogic Server and the framework for hosting static pages, dynamic pages, and applications over the Web. OHS can be easily managed by Weblogic Management Framework, a set of tools which provides administrative capabilities (start, stop, lifecycle operations, etc.) for Oracle Fusion Middleware products. In other words all tools which are familiar to us (Node Manager, WLST, Administration Console, Fusion Middleware Control etc.) presented as a part of Weblogic Management Framework and using for managing Java and System Components both for Weblogic Server and Standalone Domain types. You can familiarize yourself with these terms using related documentation: 1. Introduction to Oracle HTTP Server: http://docs.oracle.com/middleware/1212/webtier/index.html 2. Weblogic Management Framework: http://docs.oracle.com/middleware/1212/core/ASCON/terminology.htm#ASCON11260 In the given post I would like to cover rather simple use case how to configure OHS as web proxy in Weblogic Cluster environment. For example, we have existing Weblogic Domain where some managed servers have been joined to cluster and host business applications. We need to configure web proxy component which will act as entry point, load balancer for our cluster for user requests. Of course, we could install old good Apache HTTP Server and configure mod_wl plugin. However this solution not optimal from manageability perspective: we need to install Apache, install additional plugin then configure it by editing configuration file which is not really convenient for FMW Administrators and often increase time of performing of simple administrative task. Alternatively, we could use OHS as System Component within Weblogic Domain and use full power of Weblogic Management Framework in order to configure, manage and monitor it! I like this idea! What about you? I hope after reading this post you will agree with me. First of all it is necessary to download OHS binaries. You can use this link for downloading: http://www.oracle.com/technetwork/java/webtier/downloads/index2-303202.html As we will use Fusion Middleware Control for managing OHS instances it is necessary to extend your domain with Enterprise Manager and Oracle ADF and JRF templates. This is not topic for focusing in this post, but you could get more information from documentation or one of my previous posts: http://docs.oracle.com/middleware/1212/wls/WLDTR/fmw_templates.htm#sthref64 https://blogs.oracle.com/imc/entry/the_specifics_of_adf_12c Note: you should have properly configured Node Manager utility for managing OHS instances Let’s consider configuration process step by step: 1. Shut down all Weblogic instances of existing domain including Admin Server; 2. Install Oracle HTTP Server. You should use your Fusion Middleware Home Path (e.g. /u01/Oracle/FMW12) for Installation Location and select Colocated HTTP Server option as Installation Type. I will not focus on this topic in this post. All information related to OHS installation you could find here: http://docs.oracle.com/middleware/1212/webtier/WTINS/install_gui.htm#i1082009 3. Next we need to extend our existing domain with OHS component. 
In order to do this you should do the following: a. Run the Fusion Middleware Configuration Wizard (ORACLE_HOME/oracle_common/common/bin/config.sh); b. On step 1 select the Update an existing domain option and point it to your Fusion Middleware Home path; c. On step 2 check the Oracle HTTP Server and Oracle Enterprise Manager Plugin for WEBTIER templates; d. Go through the other steps without any changes and finish the configuration process. 4. Start the Admin Server and all managed servers related to your cluster. 5. Log in to Enterprise Manager FMW Control using the http://<hostname>:<port>/em URL. 6. Now we will create an OHS instance within our WebLogic Domain infrastructure. Navigate to the Weblogic Domain -> Administration -> Create/Delete OHS menu item; 7. Enter edit mode by clicking the Changes -> Lock&Edit menu item; 8. Create a new OHS instance by clicking the Create button; 9. Define the Instance Name (e.g. DevOHS) and Machine parameters; 10. Now we need to define the listen port. By default OHS will use port 7777 for incoming HTTP requests. We can change it to any free port number we would like to use. In order to do so, right click the created OHS instance (left-hand panel) and navigate to Administration -> Port Configuration; 11. Click on the record with port number 7777 and then click the Edit button; 12. Change the port number value (in our case this will be 8080) and then click the OK button; 13. Now we need to edit the mod_wl_ohs configuration in order to enable OHS to act as a proxy for WebLogic Server instances/clusters; 14. In order to do so, right click the created OHS instance (left panel) and navigate to Administration -> mod_wl_ohs Configuration; a. In WebLogic Cluster you should enter the cluster address (define <host:port> for all managed servers participating in the cluster), e.g.: 192.168.56.2:7004,192.168.56.2:7005 b. Define the WebLogic Port parameter at which the Oracle WebLogic Server host is listening for connection requests from the module (or from other servers); c. Check the Dynamic Server List option. This will dynamically update the cluster list for every request; d. In the Location table define the list of endpoint locations which you would like to process. In order to do this click the Add Row button and define the Location, WebLogic Cluster, Path Trim and Path Prefix parameters (if required); e. Click the Apply button to save the changes. 15. Activate the changes by clicking the Changes -> Activate Changes menu item; 16. Finally we will start the configured OHS instance. Right click the OHS instance tree item under the Web Tier folder and select the Control -> Start Up menu item; 17. Ensure that the OHS instance is up and running and then test your environment. Run an application deployed to your WebLogic Cluster, accessing it via the OHS web proxy.
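As a rough sketch of what step 14 produces under the hood, the settings end up in the instance's mod_wl_ohs.conf. The cluster addresses below are the ones used in the example above; the /myapp location, the path trim value and the exact file layout are illustrative assumptions rather than output copied from a real instance:

<IfModule weblogic_module>
  WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
  DynamicServerList ON
  <Location /myapp>
    # route requests under /myapp to the WebLogic cluster
    SetHandler weblogic-handler
    WebLogicCluster 192.168.56.2:7004,192.168.56.2:7005
    PathTrim /myapp
  </Location>
</IfModule>

The point of managing this through Fusion Middleware Control is that you normally never edit this file by hand: the console writes it for you and keeps it consistent across restarts.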

    Read the article

  • Using the OAM Mobile & Social SDK to secure native mobile apps - Part 2 : OAM Mobile & Social Server configuration

    - by kanishkmahajan
Objective: In the second part of this blog post I'll now cover the configuration of OAM to secure our sample native apps developed using the iOS SDK. First, here are some key server side concepts: Application Profiles: An application profile is a logical representation of your application within the OAM server. It could be a web (html/javascript) or native (iOS or Android) application. Applications may have different requirements for AuthN/AuthZ, and therefore each application that interacts with OAM Mobile & Social REST services must be uniquely defined. Service Providers: Service providers represent the back end services that are accessed by applications. With OAM Mobile & Social these services are in the areas of authentication, authorization and user profile access. A Service Provider then defines a type or class of service for authentication, authorization or user profiles. For example, the JWTAuthentication provider performs authentication and returns JWT (JSON Web Tokens) to the application. In contrast, the OAMAuthentication provider also performs authentication but uses OAM SSO tokens. Service Profiles: A Service Profile is a logical envelope that defines a service endpoint URL for a service provider for the OAM Mobile & Social Service. You can create multiple service profiles for a service provider to define token capabilities and service endpoints. Each service provider instance requires at least one corresponding service profile. The OAM Mobile & Social Service includes a pre-configured service profile for each pre-configured service provider. Service Domains: Service domains bind together application profiles and service profiles with an optional security handler. So now let's configure the OAM server. Additional details are in the OAM documentation; this post simply provides an outline of the configuration tasks required to configure OAM for securing native apps. Configuration: Create The Application Profile. Log on to the Oracle Access Management console and from System Configuration -> Mobile and Social -> Mobile Services, select "Create" under Application Profiles. You would do this step twice - once for each of the native apps - AvitekInventory and AvitekScheduler. Enter the parameters for the new Application profile: Name: The application name. In this example we use 'InventoryApp' for the AvitekInventory app and 'SchedulerApp' for the AvitekScheduler app. The application name configured here must match the application name in the settings for the deployed iOS application. BaseSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM server. Mobile Configuration: Enable this checkbox for any mobile applications. This enables the SDK to collect and send Mobile specific attributes to the OAM server. Webview: Controls the type of browser that the iOS application will use. The embedded browser (default) will render the browser within the application. External will use the system standalone browser. External can sometimes be preferable for debugging. URLScheme: The URL scheme associated with the iOS apps that is also used as a custom URL scheme to register O/S handlers that will take control when OAM transfers control to the device. For the AvitekInventory and the AvitekScheduler apps I used osa:// and client:// respectively. You set this scheme in Xcode while developing your iOS Apps under Info->URL Types. Bundle Identifier: The fully qualified name of your iOS application. 
You typically set this when you create a new Xcode project or under General->Identity in Xcode. For the AvitekInventory and AvitekScheduler apps these were com.us.oracle.AvitekInventory and com.us.oracle.AvitekScheduler respectively.  Create The Service Domain Select create under Service domains. Create a name for your domain (AvitekDomain is what I've used). The name configured must match the service domain set in the iOS application settings. Under "Application Profile Selection" click the browse button. Choose the application profiles that you created in the previous step one by one. Set the InventoryApp as the SSO agent (with an automatic priority of 1) and the SchedulerApp as the SSO client. This associates these applications with this service domain and configures them in a 'circle of trust'.  Advance to the next page of the wizard to configure the services for this domain. For this example we will use the following services:  Authentication:   This will use the JWT (JSON Web Token) format authentication provider. The iOS application upon successful authentication will receive a signed JWT token from OAM Mobile & Social service. This token will be used in subsequent calls to OAM. Use 'MobileOAMAuthentication' here. Authorization:  The authorization provider. The SDK makes calls to this provider endpoint to obtain authorization decisions on resource requests. Use 'OAMAuthorization' here. User Profile Service:  This is the service that provides user profile services (attribute lookup, attribute modification). It can be any directory configured as a data source in OAM.  And that's it! We're done configuring our native apps. In the next section, let's look at some additional features that were mentioned in the earlier post that are automated by the SDK for the app developer i.e. these are areas that require no additional coding by the app developer when developing with the SDK as they only require server side configuration: Additional Configuration  Offline Authentication Select this option in the service domain configuration to allow users to log in and authenticate to the application locally. Clear the box to block users from authenticating locally. Strong Authentication By simply selecting the OAAMSecurityHandlerPlugin while configuring mobile related Service Domains, the OAM Mobile&Social service allows sophisticated device and client application registration logic as well as the advanced risk and fraud analysis logic found in OAAM to be applied to mobile authentication. Let's look at some scenarios where the OAAMSecurityHandlerPlugin gets used. First, when we configure OAM and OAAM to integrate together using the TAP scheme, then that integration kicks off by selecting the OAAMSecurityHandlerPlugin in the mobile service domain. This is how the mobile device is now prompted for KBA,OTP etc depending on the TAP scheme integration and the OAM users registered in the OAAM database. Second, when we configured the service domain, there were claim attributes there that are already pre-configured in OAM Mobile&Social service and we simply accepted the default values- these are the set of attributes that will be fetched from the device and passed to the server during registration/authentication as device profile attributes. When a mobile application requests a token through the Mobile Client SDK, the SDK logic will send the Device Profile attributes as a part of an HTTP request. 
This set of Device Profile attributes enhances security by creating an audit trail for devices that assists device identification. When the OAAM Security Plug-in is used, a particular combination of Device Profile attribute values is treated as a device finger print, known as the Digital Finger Print in the OAAM Administration Console. Each finger print is assigned a unique fingerprint number. Each OAAM session is associated with a finger print, and the finger print makes it possible to log (and audit) the devices that are performing authentication and token acquisition. Finally, if the jail broken option is selected while configuring an application profile, the SDK detects that a device is jail broken based on the configured policy; if the OAAM handler is configured, the plug-in can allow or block access to the client device depending on the OAAM policy, as well as detect blacklisted, lost or stolen devices and send a wipeout command that deletes all the mobile & social relevant data and blocks the device from future access. Social Logins: Finally, let's complete this post by adding the configuration for social logins for mobile applications. Although the Avitek sample apps do not demonstrate social logins, this would be an ideal exercise for you based on the sample code provided in the earlier post. I'll cover the server side configuration here (with Facebook as an example) and you can retrofit the code to accommodate social logins by following the steps outlined in "Invoking Authentication Services", adding code in LoginViewController and maybe creating a new delegate - AvitekRPDelegate - based on the description in the previous post. So, here all you will need to do is configure an application profile for social login, configure a new service domain that uses the social login application profile, register the app on Facebook and finally configure the Facebook OAuth provider in OAM with those settings. Navigate to Mobile and Social, click on "Internet Identity Services" and create a new application profile. Here are the relevant parameters for the new application profile (note: we're not registering the social user in OAM with this configuration below, however that is a key feature as well): Name: The application name. This must match the name of the mobile application profile created for your application under Mobile Services. We used InventoryApp for this example. SharedSecret: Enter a password here. This does not need to match any existing password. It is used as an encryption key between the client and the OAM Mobile and Social service. Mobile Application Return URL: After the Relying Party (social) login, the OAM Mobile & Social service will redirect to the iOS application using this URI. This is defined under Info->URL type and we used 'osa', so we define this here as 'osa://'. Login Type: Choose to allow only internet identity authentication for this exercise. Authentication Service Endpoint: Make sure that /internetidentityauthentication is selected. Log in to http://developers.facebook.com using your Facebook account, click on Apps and register the app as InventoryApp. Note that the consumer key and API secret get generated automatically by the Facebook OAuth server. Navigate back to OAM and under Mobile and Social, click on "Internet Identity Services" and edit the Facebook OAuth Provider. Add the consumer key and API secret from the Facebook developers site to the Facebook OAuth Provider. Navigate to Mobile Services. Click on New to create a new service domain. 
In this example we call the domain "AvitekDomainRP". The type should be 'Mobile Application' and the application credential type 'User Token'. Add the application "InventoryApp" to the domain. Advance the next page of the wizard. Select the  default service profiles but ensure that the Authentication Service is set to 'InternetIdentityAuthentication'. Finish the creation of the service domain.
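On the application side, the URLScheme and Bundle Identifier values described above have to line up with the Xcode project settings. Below is a minimal sketch of the relevant Info.plist fragment for the AvitekInventory app, assuming the osa:// scheme and the bundle identifier quoted earlier; the keys are standard iOS Info.plist keys, and the exact values must match what you registered in the application profile:

<key>CFBundleIdentifier</key>
<string>com.us.oracle.AvitekInventory</string>
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLName</key>
    <string>com.us.oracle.AvitekInventory</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <!-- must match the URLScheme configured in the application profile -->
      <string>osa</string>
    </array>
  </dict>
</array>

If the scheme registered here drifts apart from the URLScheme in the application profile, the O/S handler is never invoked and OAM cannot hand control back to the app after authentication.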

    Read the article

  • Mobile BI Comes of Age

    - by rich.clayton(at)oracle.com
One of the hot topics in the Business Intelligence industry is mobility. More specifically, the question is how business can be transformed by the iPhone and the iPad. In June 2003, Gartner predicted that Mobile BI would be obsolete and that the technology was headed for the 'trough of disillusionment'. I agreed with them at that time. Many vendors like MicroStrategy and Business Objects jumped into the fray attempting to show how PDAs like Palm Pilots could be integrated with BI. Their investments resulted in interesting demos with no commercial traction. Why? Because wireless networks and mobile operating systems were primitive, immature and slow. In my opinion, Apple's iOS has changed everything in Mobile BI. Yes, Blackberry, Android, Symbian and all the rest have their place in the market, but I believe that increasingly consumers (not IT departments) influence BI decision making processes. Consumers are choosing the iPhone and the iPad. The number of iPads I see in business meetings now is staggering. Some use it for email and note taking and others are starting to use corporate applications. The possibilities for Mobile BI are countless and I would expect to see iPads enterprise-wide over the next few years. These new devices will provide just-in-time access to critical business information.  
Front-line managers interacting with customers, suppliers, patients or citizens will have information literally at their fingertips. I've experimented with several mobile BI tools.  They look cool but like their Executive Information System (EIS) predecessors of the 1990's these tools lack a backbone and a plausible integration strategy.  EIS was a viral technology in the early 1990's.  Executives from every industry and job function were showcasing their dashboards to fellow co-workers and colleagues at the country club.  Just like the iPad, every senior manager wanted one.  EIS wasn't a device however, it was a software application.   EIS quickly faded into the software sunset as it lacked integration with corporate information systems.  BI servers  replaced EIS because the technology focused on the heavy data lifting of integrating, normalizing, aggregating and managing large, complex data volumes.  The devices are here to stay. The cute stand-alone mobile BI tools, not so much. If all you're looking to do is put Excel files on your iPad, there are plenty of free tools on the market.  You'll look cool at your next management meeting but after a few weeks, the cool factor will fade away and you'll be wondering how you will ever maintain it.  If however you want secure, consistent, reliable information on your iPad, you need an integration strategy and a way to model the data.  BI Server technologies like the Oracle BI Foundation is a market leading approach to tackle that issue. I liken the BI mobility frenzy to buying classic cars.  Classic Cars have two buying groups - teenagers and middle-age folks looking to tinker.  Teenagers look at the pin-stripes and the paint job while middle-agers (like me)  kick the tires a bit and look under the hood to check out the quality and reliability of the engine.  Mobile BI tools sure look sexy but don't go very far without an engine and a transmission or an integration strategy. The strategic question in Mobile BI is can these startups build a motor and transmission faster than Oracle can re-paint the car?  Oracle has a great engine and a transmission that connects to all enterprise information assets.  We're working on the new paint job and are excited about the possibilities.  Just as vertical integration worked in the automotive business, it too works in the technology industry.

    Read the article

  • SQL SERVER – Windows File/Folder and Share Permissions – Notes from the Field #029

    - by Pinal Dave
[Note from Pinal]: This is the 29th episode of the Notes from the Field series. Security is a task we should leave to the experts. If there is a small oversight or misstep, there is a good chance that the security of the organization will be compromised. This is very true, but there are always devil's advocates who believe everyone should know security. As a DBA and Administrator, I often see people not taking an interest in Windows security, hiding behind the excuse of not being an expert in Windows Server. We all often miss the important mission statement for the success of any organization – Teamwork. In this blog post Brian tells the story in very interesting, lucid language. Read On! In this episode of the Notes from the Field series database expert Brian Kelley explains a very crucial issue DBAs and Developers face on their production servers. Linchpin People are database coaches and wellness experts for a data driven world. Read the experience of Brian in his own words. When I talk security among database professionals, I find that most have at least a working knowledge of how to apply security within a database. When I talk with DBAs in particular, I find that most have at least a working knowledge of security at the server level if we're speaking of SQL Server. One area I see continually that is weak is in the area of Windows file/folder (NTFS) and share permissions. The typical response is, "I'm a database developer and the Windows system administrator is responsible for that." That may very well be true – the system administrator may have the primary responsibility and accountability for file/folder and share security for the server. However, if you're involved in the typical activities surrounding databases and moving data around, you should know these permissions, too. Otherwise, you could be setting yourself up where someone is able to get to data he or she shouldn't, or you could be opening the door where human error puts bad data in your production system. File/Folder Permission Basics: I wrote about file/folder permissions a few years ago to give the basic permissions that are most often seen. Here's what you must know as a minimum at the file/folder level: Read - Allows you to read the contents of the file or folder. Having read permissions allows you to copy the file or folder. Write – Again, as the name implies, it allows you to write to the file or folder. This doesn't include the ability to delete; however, nothing stops a person with this access from writing an empty file. Delete - Allows the file/folder to be deleted. If you overwrite files, you may need this permission. Modify - Allows read, write, and delete. Full Control - Same as modify + the ability to assign permissions. File/Folder permissions aggregate, unless there is a DENY (where it trumps, just like within SQL Server), meaning if a person is in one group that gives Read and another group that gives Write, that person has both Read and Write permissions. As you might expect me to say, always apply the Principle of Least Privilege. This likely means that any additional permission you might add does not need Full Control. Share Permission Basics: At the share level, here are the permissions. Read - Allows you to read the contents on the share. Change - Allows you to read, write, and delete contents on the share. Full control - Change + the ability to modify permissions. Like with file/folder permissions, these permissions aggregate, and DENY trumps. So What Access Does a Person / Process Have? 
Figuring out what someone or some process has depends on how the location is being accessed: Access comes through the share (\\ServerName\Share) – a combination of permissions is considered. Access is through a drive letter (C:\, E:\, S:\, etc.) – only the file/folder permissions are considered. The only complicated one here is access through the share. Here’s what Windows does: Figures out what the aggregated permissions are at the file/folder level. Figures out what the aggregated permissions are at the share level. Takes the most restrictive of the two sets of permissions. You can test this by granting Full Control over a folder (this is likely already in place for the Users local group) and then setting up a share. Give only Read access through the share, and that includes to Administrators (if you’re creating a share, likely you have membership in the Administrators group). Try to read a file through the share. Now try to modify it. The most restrictive permission is the Share level permissions. It’s set to only allow Read. Therefore, if you come through the share, it’s the most restrictive. Does This Knowledge Really Help Me? In my experience, it does. I’ve seen cases where sensitive files were accessible by every authenticated user through a share. Auditors, as you might expect, have a real problem with that. I’ve also seen cases where files to be imported as part of the nightly processing were overwritten by files intended from development. And I’ve seen cases where a process can’t get to the files it needs for a process because someone changed the permissions. If you know file/folder and share permissions, you can spot and correct these types of security flaws. Given that there are a lot of database professionals that don’t understand these permissions, if you know it, you set yourself apart. And if you’re able to help on critical processes, you begin to set yourself up as a linchpin (link to .pdf) for your organization. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
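If you want to check or reproduce the share-versus-NTFS behaviour Brian describes, the built-in Windows tools are enough. A small sketch follows; the D:\SQLImports path, the SQLImports share name and the CONTOSO\ETLService account are made-up examples, not anything from the original post:

:: list the NTFS permissions on the folder
icacls D:\SQLImports
:: grant Modify on the folder and everything below it
icacls D:\SQLImports /grant "CONTOSO\ETLService:(OI)(CI)M"
:: create the share with only Read at the share level
net share SQLImports=D:\SQLImports /GRANT:"CONTOSO\ETLService",READ
:: show the share, including its share-level permissions
net share SQLImports

With this combination the service account can modify files when it comes in through the drive letter, but only read them when it comes in through \\ServerName\SQLImports, which is exactly the "most restrictive of the two sets" rule described above.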

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy, and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design since it results in denormalized schemas that can become a major burden in the long run. In today's blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy, and we'll see that TPT doesn't expose us to this problem. Table per Type (TPT): Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties—including abstract classes—has its own table. The table for subclasses contains columns only for each noninherited property (each property declared by the subclass itself) along with a primary key that is also a foreign key of the base class table. This approach is shown in the following figure: For example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table. TPT Advantages: The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modifying or adding one table). Integrity constraint definitions are also straightforward (note how CardType in the CreditCards table is now a non-nullable column). Another much more important advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy, with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass. 
Implement TPT in EF Code First: We can create a TPT mapping simply by placing the Table attribute on the subclasses to specify the mapped table name (the Table attribute is a new data annotation and has been added to the System.ComponentModel.DataAnnotations namespace in CTP5): public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } [Table("BankAccounts")] public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } [Table("CreditCards")] public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } If you prefer the fluent API, then you can create a TPT mapping by using the ToTable() method: protected override void OnModelCreating(ModelBuilder modelBuilder) {     modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");     modelBuilder.Entity<CreditCard>().ToTable("CreditCards"); } Generated SQL For Queries: Let's take an example of a simple non-polymorphic query that returns a list of all the BankAccounts: var query = from b in context.BillingDetails.OfType<BankAccount>() select b; Executing this query (by invoking the ToList() method) results in the following SQL statements being sent to the database (on the bottom, you can also see the result of executing the generated query in SQL Server Management Studio). Now, let's take an example of a very simple polymorphic query that requests all the BillingDetails, which includes both BankAccount and CreditCard types: var query = from b in context.BillingDetails select b; Or one that projects some properties out of the base class BillingDetail, without querying for anything from any of the subclasses: var query = from b in context.BillingDetails select new { b.BillingDetailId, b.Number, b.Owner }; These LINQ queries seem even simpler than the previous one, but the resulting SQL is not as simple as you might expect. As you can see, EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts so it can determine the concrete subclass for a particular row of the BillingDetails table. Also, the SQL CASE statements that you see at the beginning of the query are just to ensure that columns that are irrelevant for a particular row have NULL values in the returning flattened table (e.g. BankName for a row that represents a CreditCard type). TPT Considerations: Even though this mapping strategy is deceptively simple, experience shows that performance can be unacceptable for complex class hierarchies because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand; even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application (for ad hoc reporting, database views provide a way to offset the complexity of the TPT strategy: a view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model). Summary: In this post we learned about Table per Type as the second inheritance mapping in our series. So far, the strategies we've discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed). 
This situation changes with the Table per Concrete Type (TPC) that we will discuss in the next post. References: ADO.NET team blog, Java Persistence with Hibernate book
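To make the two-table split concrete, here is a small usage sketch based on the model defined above; the property values are obviously made up. Saving a CreditCard produces one INSERT into BillingDetails and one into CreditCards, with both rows sharing the same primary key value:

using (var context = new InheritanceMappingContext())
{
    var card = new CreditCard
    {
        Owner = "John Doe",   // persisted to the BillingDetails table
        Number = "1234",      // persisted to the BillingDetails table
        CardType = 1,         // persisted to the CreditCards table
        ExpiryMonth = "01",
        ExpiryYear = "2016"
    };
    context.BillingDetails.Add(card);  // added through the base-class DbSet
    context.SaveChanges();
}

Reading the entity back joins the two tables on that shared key, which is exactly the polymorphic query behaviour shown earlier.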

    Read the article

  • apache tomcat loadbalancing clustering on ubuntu

    - by user740010
    i am facing a problem in clustering the tomcat with apache as a loadbalancer using mod_jk on ubuntu. i have install apache2 on my ubuntu 11.04 and i have downloaded tomcat7 created two copies and kept them at two different location. 1st one is at /home/net4u/vishal/test/tomcatA 2nd one is at /home/net4u/vishal/test1/tomcatB i have made following changes to server.xml file in /conf folder 1. <Server port="8205" shutdown="SHUTDOWN"> 2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> 3.<Connector port="8209" protocol="AJP/1.3" redirectPort="8443" /> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> 4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> similarly i have modified other tomcat i.e tomcatA server.xml content of the server.xml is as follow: -- <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- uncomment for clustering--> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> <!-- Use the LockOutRealm to prevent attempts to guess user passwords via a brute-force attack --> <Realm className="org.apache.catalina.realm.LockOutRealm"> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. 
--> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> </Realm> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html Note: The pattern used is equivalent to using pattern="common" --> <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> </Host> </Engine> i have install libapache2-mod-jk step 1. i have Created jk.load file in /etc/apache2/mods-enabled/jk.load content is as follows: LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so Create /etc/apache2/mods-enabled/jk.conf: JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/jk.log JkMount /ecommerce/* worker1 JkMount /images/* worker1 JkMount /content/* worker1 step 2. Created workers.properties file in /etc/apache2/workers.properties content is as follows: workers.tomcat_home=/home/vishal/Desktop/test/tomcatA workers.java_home=/usr/lib/jvm/default-java ps=/ worker.list=tomcatA,tomcatB,loadbalancer   worker.tomcatA.port=8109 worker.tomcatA.host=localhost worker.tomcatA.type=ajp13 worker.tomcatA.lbfactor=1   worker.tomcatB.port=8209 worker.tomcatB.host=localhost worker.tomcatB.type=ajp13 worker.tomcatB.lbfactor=1 worker.loadbalancer.type=lb worker.loadbalancer.balanced_workers=tomcatA,tomcatB worker.loadbalancer.sticky_session=1 i tried the same thing on the windows machine it is working.
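One thing that stands out in the configuration above is that jk.conf mounts /ecommerce, /images and /content to worker1, while workers.properties only defines tomcatA, tomcatB and loadbalancer, so Apache has no worker by that name to route those requests to. A minimal jk.conf sketch that routes a context through the balancer worker defined above (the /myapp context is illustrative, not taken from the question):

JkWorkersFile /etc/apache2/workers.properties
JkLogFile     /var/log/apache2/mod_jk.log
JkLogLevel    info
# send the application context to the load balancer, not to an undefined worker
JkMount /myapp    loadbalancer
JkMount /myapp/*  loadbalancer

With sticky sessions enabled (worker.loadbalancer.sticky_session=1), each client should keep hitting the same Tomcat instance as long as the jvmRoute values in the two server.xml files match the worker names tomcatA and tomcatB.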

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around(probably debugging). 1. Create certificate 1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”. 1.2 In the drop down list of certificates, choose <create> at the bottom. 1.3. Follow the instructions, you can set it up with default values. 1.4 When done. Choose the certificate and click “Copy to File…” as seen in the left of the picture above. 1.5. Save the file with any name you want. Now we will save it to local storage to be able to import it to our solution through the azure configuration manager in step 3. 2. Save certificate to local storage Now we need to attach it to our local certificate storage to be able to reach it from our confiuguration manager in visual studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close , and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into you computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed. If it is not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist.) Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, it will also be added to this store. Click Next. You should see a summary of screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on. 
Right-click on the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing that contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate to you visual studio project. 3.1 Now right click your equivalent to the MvcWebRole1 (as seen in the first picture under the red oval) and choose properties. 3.2 Choose Certificates. Right click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate here. After selecting it- save the file.   4. Upload the certificate to your Azure subscription. 4.1 Go to the azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.     5. Connect to server. Since I tried to use account settings(have to use another name) we have to set up a new name for the connection. No biggie. 5.1 Go to azure management portal, select your service and in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change It here it might be good to choose download and replace the one in your project. Set a name that is not your windows azure account name and not Administrator. 5.2 Goto visual studio, click Server Explorer. Choose as selected in the picture below and click “COnnect using remote desktop”.   5.2 You will now be able to log in with the name and password set up in step 5.1. and voila! Windows server 2012, IIS and other nice stuff!   To do this one I’ve been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can collect some of this information and additional one.
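For reference, the remote desktop settings that the wizard from step 1 writes into ServiceConfiguration.cscfg look roughly like the fragment below. The setting names are the standard RemoteAccess/RemoteForwarder plugin keys from the Azure SDK as I remember them, and the account, expiration and thumbprint values are placeholders, so treat this as a sketch to compare against the file the wizard actually generates rather than something to paste in:

<Role name="MvcWebRole1">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="myrdpuser" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="BASE64_ENCRYPTED_PASSWORD" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2014-12-31T23:59:59.0000000+01:00" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  </ConfigurationSettings>
  <Certificates>
    <!-- thumbprint of the certificate uploaded in step 4 -->
    <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
                 thumbprint="YOUR_CERT_THUMBPRINT" thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>

In a multi-role deployment only one role carries the RemoteForwarder setting, which is why the name you pick in step 5.1 is stored per service rather than per role.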

    Read the article

  • Summary of the Solaris 11 webcast's livechat QnA session

    - by Karoly Vegh
    This is a followup post to the previous summary on the "What's new with Solaris 11 since the launch" webcast. That webcast has had a chatroom for a live Questions and Answers session running. I went through the archive of those and compiled a list of some of the (IMHO) most relevant and most frequently asked questions, I'd like to share. This is the first part, covering the QnA of Session I and II of the webcast, in a followup post we can have a look of the rest of the sessions if required - let me know in the comments. Also, should you have questions, as usual, feel free to ask those there, too.  ...and here come the answered questions:  When will Exadata be based on Solaris in place of Oracle Enterprise Linux?Exadata offers both Solaris 11 or Oracle Enterprise Linux.  The choice can be made at deployment time based on your OS needs.What are all other benefits and futures avilable in solaris 11 (cloud O.S.) compared to cloud based Red Hat Linux and Windows?suggest you check out our cloud white paper for a view of this. Also the OTN Solaris 11 page has some good articles. Here are the links:  http://www.oracle.com/technetwork/server-storage/solaris11/documentation/o11-106-sol11-cloud-501066.pdf http://www.oracle.com/technetwork/server-storage/solaris11/overview/index.htmlWill 11.1 have a more complete IPS respository for Oracle and FOSS software?Yes, we are adding additional packages to the various package repositories. Since Solaris 11 was launched, both the Oracle Solaris Studio tools as well as Oracle Solaris Cluster have been made available along with numerous new FOSS packages. We will continue to be adding additional Oracle products and open source packages in the future. Will Exadata be based on Sparc in place of intel-amd x86 in next future ?We can't publically discuss futures, but we actually have a SPARC version of Exadata today, it's called SuperCluster, this is such a powerfull multipurpose system that it actually have multiple personalities built into one system: Exadata, Exalogic, and it can be a general purpose platform if you want. Have I understood this right? Livepatching KSplice-style is coming to Solaris 11 too?We're looking at that for certain types of Solaris patches in the future.Will there be a security framework like SST/JASS for Solaris 11?We can't talk about the future projects on a public forum, but we recognize the need for SST/JASS and want to address this as soon as possible. On the other side there are a whole bunch of "best practices" that are now embedded into Solaris 11 by default, so out of the box Solaris 11 should already address part of what SST/JASS gave you. (For example we did a lot of work on improving the auditing performance so that we can now have it turned on by default). On x86 can install VirtualBox in a Zone and use that to host other OSes.Yes, this was one of the first things we made sure would work when we acquired VirtualBox when we were still Sun Microsystems. If I have a Solaris 11 Control Domain on a T-series, can I run a Solaris 10 Ldom with Solaris 8 branded containers?Yes, you can.Is Oracle Solaris free or do we need to purchase?Solaris is free, the entitlement to run it comes either with a Sun system (new or historical) or for 3rd party systems the entitlement comes with a support contract. Note that for production use you will be expected to get a support contract. If you don't want to use the Solaris system (Sun or 3rd party) for production use (i.e. 
development) you can get an OTN license on the Oracle Technical Network website. Will encryption and deduplication both work on a share?This should work at the same time. What approaches does Solaris use to monitor usage?There are many different tools in Solaris to monitor usage. The main ones are the "stats" (vmstat, mpstat, prstat, ...), the kstat interface, and DTrace (to get details you couldn't see before). And then there are layered tools that can interface with these tools (Ops Center, BMC, CA, Tivoli, ...) Apart little-endian, big-endian how is it easy to port Solaris applications on Sparc to x86 and vice-versa ?Very easy. Except for certain hardware specific applications (those that utilize hardware specific drivers), all of the same Oracle Solaris APIs exist for all architectures. Is IPS based patching aware of the fact that zones can reside on ZFS and move from one physical server to another ?IPS is definitely aware of zones and uses ZFS to support boot environments for non-global zones in the same way that's used for the global zone. With respect to moving a zone from one physical server to another, Solaris 11 supports to the same zone attach/deattach method that was introduced in Solaris 10. Is vnic support in Ldoms planned?This is currently being investigated for a future LDOM release. Is it possible with the new patching system to build a system later with the same patch level as a system built a few months earlier?Yes, you can choose/define exactly which version should go to the system and it will always put the same bits in place. The technical answer is that you choose the version of the "entire" package you want on the system and the rest flows from there. Is it in the plans to allow zones to add/remove zpools to running zones dynamically in future updates?Work in this area is currently under investigation. Any plans to realese Solaris 11 source code? i.e. opensolaris?We currently can't comment on publicly releasing the source code. If you need/want this access please let your Oracle account team know. What about VirtualBox and Solaris11 for virtualization?Solaris 11 works great with VirtualBox, as both a client and a host system. Will Oracle DB software eventually be supplied as IPS packages? When?We don't have a date yet but this is actively being worked on. What are the new artifacts in Oracle Solaris 11 than the previous versions?There are quite a few actually. The best start is to look at our "Evaluate Solaris 11" page, and there you also can find a Transition Guide. http://www.oracle.com/technetwork/server-storage/solaris11/overview/evaluate-1530234.html So, this seems just like RedHat's YUM environment?IPS offers certain features beyond those in YUM or other packaging systems. For example, IPS works with ZFS and Solaris Boot Environments to provide a safe environment for software lifecycle management so that changes can be reverted by switching to an older boot environment. With Zones on solaris 11, can I do paravirtualitation?The great thing about zones is you don't *need* paravirtualization. You're making the same direct kernel calls that you would outside of a zone.  It's an incredibly significant performance win over hypervisor-based virtualization. Are zones/containers officially supported to run Oracle Databases?  
EBIZ?Hi Calvin, the answer is yes, here is the support matrix for DB:  http://www.oracle.com/technetwork/database/virtualizationmatrix-172995.html I've found some nasty bugs in Solaris 11 (one of which today) that have been fixed in community forks (i.e., Illumos). Will Oracle ever restart collaboration with the community?We continue to work with the community, just not as open on all projects as we did before (For example IPS is an open project) and the source of more than half of the Solaris packages is posted on our opensource websites. I can't comment on what we will do in the future. And with regards to bugs please file them through the support organization and we will get them resolved. Is zpool vdev removal on-the fly now possible ?This issue is actively being investigated although we don't have a date for when this feature will be available. Is pgstat now the official replacement for corestat ?It's intended to provide similar functionality Where are the opensource website?For Oracle Solaris, visit http://www.oracle.com/technetwork/opensource/systems-solaris-1562786.html As a cloud-scale virtualization, is it going to be easier to move zones between machines? maybe even automatic in case of a hardware failure?Hi Gashaw, we already have customers that have implemented what they refer to as "flying zones" that they can move around very easily. They use Solaris Cluster to do this. What about VMware vMotion like feature?We have secure live migration with both Logical Domains on SPARC T series systems, and with Oracle VM on x86 systems. When running Solaris 10/11 on an enterprise server with a lot of zones, what are best practises commands to show the system is running fine? (has enough hardware resources). For example CPU / Memory / I/O / system load. What are the recommended values?For Solaris 11, look into the new zonestat(1M) command that provides a great deal of information about zone utilization. In addition, there is new work underway in providing additional observability in areas such as per-zone file system I/O. Java optimizations done with Solaris 11? For X86 platforms too? Where can I find more detail about this?There is lots of work that go into optimizing Java for Oracle Solaris 10 & 11 on both SPARC and x86. See http://www.oracle.com/technetwork/articles/servers-storage-dev/solarisforjavadevelop-168642.pdf What is meant by "ZFS Shadow Migration"?It's a way to migrate data from another file system to ZFS: http://docs.oracle.com/cd/E23824_01/html/E24456/filesystem-3.html Is flash archive available with S11?Flash archive is not.  There is a procedure for disaster recovery, and we're working on a modern archive-based deployment tool for a future update.  The disaster recovery tool is here: http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-091-sol-dis-recovery-489183.html  You can also use Distribution Constructor to build common golden images. Will solaris 11 be available on the ODA soon?The idea's under evaluation -- we'll share your interest with the team. What steps can be taken to ensure that breaches of security are identified quickly?There are a number of tools, including the "bart" tool and "pkg verify" to ensure that software has not been compromised.  Solaris Audit can also be used to detect unauthorized access.  You can also use Immutable Zones to protect against compromise.  There are a wide variety of security tools, and I've covered only a few. 
What is the relation from solaris to java 7 speed optimization?There is constant work done between the Oracle Solaris and Java teams on performance optimizations. See http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html for examples. What is the difference in the Solaris 11 installation compared to solaris 10 ? where i can find the document describing basic repository concepts ?The best place to start is: http://www.oracle.com/technetwork/server-storage/solaris11/index.html Hope you found the post useful. For questions, input, requests for the second half of the QnA, please find the comment section below.  -- charlie  
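Several of the answers above reference specific observability and integrity tools by name. A quick sketch of the commands in question, with illustrative intervals (run them as a privileged user in the global zone):

# list boot environments, e.g. to see what IPS created before an update
beadm list
# verify installed packages against their IPS manifests (the integrity check mentioned above)
pkg verify
# per-zone resource utilization, sampled every 5 seconds
zonestat 5
# classic utilization views referenced in the monitoring answer
vmstat 5
prstat -Z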

    Read the article

  • How to get rid of grub menu after boot?

    - by umpirsky
    Here is my /etc/default/grub:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_LINUX_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    I tried various things, including:
    - How do I hide the GRUB menu showing up in the beginning of boot?
    - How to disable Grub's menu from showing up after failed boot
    - http://www.itworld.com/software/306238/disable-grub-boot-menu-ubuntu-1210

    But I still get the GRUB menu each time I boot. My generated /boot/grub/grub.cfg:

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi
        if [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=true else set default="0" fi
        if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi
        export menuentry_id_option
        if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi
        function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi }
        function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi }
        function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi }
        if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi font="/usr/share/grub/unicode.pf2" fi
        if loadfont $font ; then set gfxmode=auto load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettext fi
        terminal_output gfxterm
        if [ "${recordfail}" = 1 ] ; then set timeout=-1 else if [ x$feature_timeout_style = xy ] ; then set timeout_style=hidden set timeout=0
        # Fallback hidden-timeout code in case the timeout_style feature is
        # unavailable.
        elif sleep --interruptible 0 ; then set timeout=0 fi fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        set menu_color_normal=white/black
        set menu_color_highlight=black/light-gray
        if background_color 45,51,53; then clear fi
        ### END /etc/grub.d/05_debian_theme ###

        ### BEGIN /etc/grub.d/10_linux ###
        function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi }
        if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi
        export linux_gfx_mode
        menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
          recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2
          if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
          linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff
          initrd /boot/initrd.img-3.13.0-29-generic
        }
        submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
          menuentry 'Ubuntu, with Linux 3.13.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            echo 'Loading Linux 3.13.0-29-generic ...'
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-recovery-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            recordfail load_video insmod gzio insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            echo 'Loading Linux 3.13.0-29-generic ...'
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            echo 'Loading Linux 3.13.0-24-generic ...'
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro quiet splash $vt_handoff
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.13.0-24-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-recovery-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            recordfail load_video insmod gzio insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            echo 'Loading Linux 3.13.0-24-generic ...'
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-3.13.0-24-generic
          }
        }
        ### END /etc/grub.d/10_linux ###

        ### BEGIN /etc/grub.d/20_linux_xen ###
        ### END /etc/grub.d/20_linux_xen ###

        ### BEGIN /etc/grub.d/20_memtest86+ ###
        ### END /etc/grub.d/20_memtest86+ ###

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry 'Ubuntu 14.04 LTS (14.04) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-simple-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
          insmod part_gpt insmod ext2
          if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
          linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
          initrd /boot/initrd.img-3.13.0-29-generic
        }
        submenu 'Advanced options for Ubuntu 14.04 LTS (14.04) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' $menuentry_id_option 'osprober-gnulinux-advanced-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
          menuentry 'Ubuntu (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-29-generic (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-29-generic.efi.signed-root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            linux /boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet
            initrd /boot/initrd.img-3.13.0-29-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-24-generic.efi.signed--ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro splash quiet quiet splash $vt_handoff
            initrd /boot/initrd.img-3.13.0-24-generic
          }
          menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode) (on /dev/mapper/isw_beaaegcdjh_ASUS_OS2)' --class gnu-linux --class gnu --class os $menuentry_id_option 'osprober-gnulinux-/boot/vmlinuz-3.13.0-24-generic.efi.signed-root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet-ed6b32bc-ec1d-444c-a000-282fddd6d460' {
            insmod part_gpt insmod ext2
            if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 else search --no-floppy --fs-uuid --set=root ed6b32bc-ec1d-444c-a000-282fddd6d460 fi
            linux /boot/vmlinuz-3.13.0-24-generic.efi.signed root=UUID=ed6b32bc-ec1d-444c-a000-282fddd6d460 ro recovery nomodeset splash quiet
            initrd /boot/initrd.img-3.13.0-24-generic
          }
        }
        set timeout_style=menu
        if [ "${timeout}" = 0 ]; then set timeout=10 fi
        ### END /etc/grub.d/30_os-prober ###

        ### BEGIN /etc/grub.d/30_uefi-firmware ###
        menuentry 'System setup' $menuentry_id_option 'uefi-firmware' { fwsetup }
        ### END /etc/grub.d/30_uefi-firmware ###

        ### BEGIN /etc/grub.d/40_custom ###
        # This file provides an easy way to add custom menu entries. Simply type the
        # menu entries you want to add after this comment. Be careful not to change
        # the 'exec tail' line above.
        ### END /etc/grub.d/40_custom ###

        ### BEGIN /etc/grub.d/41_custom ###
        if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi
        ### END /etc/grub.d/41_custom ###
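    Reading the generated grub.cfg above, two of its own sections defeat the hidden timeout requested in /etc/default/grub: the recordfail branch in 00_header sets timeout=-1 (wait indefinitely) after an unclean boot, and the 30_os-prober section forces timeout_style=menu with a 10-second timeout because it detected a second Ubuntu installation on /dev/mapper/isw_beaaegcdjh_ASUS_OS2. A hedged sketch of /etc/default/grub settings that target both paths is shown below; it assumes this GRUB build honors GRUB_RECORDFAIL_TIMEOUT and that the second installation's entries are not needed in the menu.

        # /etc/default/grub -- minimal sketch of the settings relevant to hiding the menu
        GRUB_DEFAULT=0
        GRUB_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        # Assumption: honored by this GRUB build; caps the "wait forever" timeout
        # that the recordfail branch applies after a failed or unclean boot.
        GRUB_RECORDFAIL_TIMEOUT=0
        # Stops /etc/grub.d/30_os-prober from forcing the menu for the second
        # Ubuntu install it found on /dev/mapper/isw_beaaegcdjh_ASUS_OS2.
        GRUB_DISABLE_OS_PROBER=true

        # Regenerate /boot/grub/grub.cfg from the new settings:
        sudo update-grub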

    Read the article
