Search Results

Search found 8982 results on 360 pages for 'delete'.


  • CodePlex Daily Summary for Tuesday, August 28, 2012

    CodePlex Daily Summary for Tuesday, August 28, 2012Popular ReleasesImageServer: v1.1: This is the first version releasedChristoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Project-Templates-for-DotNetNuke.aspx If you need a project template for older versions of Visual Studio check out our previous releases.Home Access Plus+: v8.0: v8.0828.1800 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install di...Math.NET Numerics: Math.NET Numerics v2.2.0: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Also available as NuGet packages: PM> Install-Package MathNet.Numerics PM> Install-Package MathNet.Numerics.FSharp New: instead of the special Silverlight build we now provide a portable version supporting .Net 4, Silverlight 5 and .Net Core (WinRT) 4.5: PM> Install-Package MathNet.Numerics.PortablePhalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3391 (September 2012): New features: Extended ReflectionClass libxml error handling, constants TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging Lot of fixes of project files, formatting, smart indent, colorization etc. Improved ...WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. 
Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review Documentation has been reviewed by the quality and recording team All critical bugs have been resolved Known Issues / Bugs Spelling, grammar and content revisions are in progress. Hotfix will be published.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525 - escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. fixed symbol problem with nuget package. 4.62 should have nuget symbols available again. Also want to highlight again the breaking change introduced in 4.61 regarding the renaming of the DLL from AjaxMin.dll to AjaxMinLibrary.dll to fix strong-name collisions between the DLL and the EXE. Please be aware of this change and...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know we were planning to release version 2.70 much later (the end of September). But today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes please visit the release notes p...MyRouter (Virtual WiFi Router): MyRouter 1.2.9: . Fix: Some missing changes for fixing the window subclassing crash. · Fix: fixed bug when Run MyRouter at the first Time. · Fix: Log File · Fix: improve performance speed application · fix: solve some Exception.Private cloud DMS: Essential server-client full package: Requirements: - SQL server >= 2008 (minimal Express - for Essential recommended) - .NET 4.0 (Server) - .NET 4.0 Client profile (Client) This version allow: - full file system functionality Restrictions: - Maximum 2 parallel users - No share spaces - No hosted business groups - No digital sign functionality - No ActiveDirectory connector - No Performance cache - No workflow - No messagingJavaScript Prototype Extensions: Release 1.1.0.0: Release 1.1.0.0 Add prototype extension for object. Add prototype extension for array.Glyphx: Version 1.2: This release includes the SdlDotNet.dll dependency in the setup, which you will need.TFS Project Test Migrator: TestPlanMigration v1.0.0: Release 1.0.0 This first version do not create the test cases in the target project because the goal was to restore a Test Plan + Test Suite hierarchy after a manual user deletion without restoring all the Project Collection Database. 
As I discovered, deleting a Test Plan will do the following : - Delete all TestSuiteEntry (the link between a Test Suite node and a Test Case) - Delete all TestSuite (the nodes in the test hierarchy), including root TestSuite - Delete the TestPlan Test c...ERPStore eCommerce FrontOffice: ERPStore.Core V4.0.0.2 MVC4 RTM: ERPStore.Core V4.0.0.2 MVC4 RTM (Code Source)ZXing.Net: ZXing.Net 0.8.0.0: sync with rev. 2393 of the java version improved API, direct support for multiple barcode decoding, wrapper for barcode generating many other improvements and fixes encoder and decoder command line clients demo client for emguCV dev documentation startedNew ProjectsA Constraint Propogation Solver in F#: An experimental implementation of the Variable Consistency "Bucket Elimination" algorithm for constraint propogationAirline Pilot Academy: Taller de Sistemas de Información - UCB ArchiveManageSys: Something about archive managementCriteria Workflow Engine: A C++ Workflow Engine: Desing and execute business process.DnnDash Service: The DnnDash project enables administrators of DotNetNuke websites to view any installed DotNetNuke Dashboard components via other devices.Dynamics NAV UniWPF Addin: This project is Addin control for Microsoft Dynamics NAV 2009, allowing developers to put WPF controls directly to NAV page.Essai Salon de Chat: petit projet de communication basé sur une structure server / client(multi)High performance C# byte array to hex string to byte array: The performance key point for each to/from conversion is the (perpetual) repetition of the same if blocks and calculations...ImageCloudLock Backup Solution: ImageCloudLock 2012 is designed to backup your important items, like photos, financial documents, PDFs, excel documents and personal items to the Cloud service.Login with Facebook in ASP.Net MVC3 & Get data from Facebook User: This project is useful for ASP.Net MVC developer. For login with Facebook in ASP.Net MVC3 & Get data from user's facebook account.Lucy2D: Projet for developing 2D Games based on WPF and XNA. The point is to minimize effort for developers and fast prototyping.Main Street: Human Resources software for small business. Using C# and SQL Server Express.ManagedZFS: A managed implementation of the ZFS filesystem. Reliability, performance, and manageability. Also, *block-pointer rewrite* !!!Metro English Persian Dictionary: This is an English - Persian (Farsi) dictionary designed and developed for Windows 8 Metro User Interface which contains over 50000 words. My Personal Site: My Personal SiteNeverball Framework: Neverball FrameworkSample code: This project is simple a common location for me to store project skeletons and snippets for help bootstrapping other projects.Smart Card Fuzzer: SCFuzz is a smart card middleware fuzz testing tool (fuzzer). Using API hooking, SCFuzz modifies data returned by the card in order to find bugs in the host.Test Activation: Start looking into it.Vector, a .net generic collection to use instead of a List - in niche cases.: The Vector can be used as a replacement for a List if a large number of elements must be stored, and element inserts and deletes are frequent.WCF duplex Message: This is test project uses WCF to implement message broadcast.WCF! WTF?: Getting to know WCFZune to Lync Now Playing: Zune to Lync Now Playing is a simple application that let you display on your Lync Personal Note, the current song playing in Zune Player.


  • Communication between WCF service [library] and Self-host [Winform]

    - by Mur Haf Soz
    Introduction: I have a WCF service library and a self-hosting WinForms application. The service's features are a file explorer (copy, move, delete, new folder, etc.) and a task manager (run, kill, refresh list). Now I want to add other features, such as chat between the self-host and the client, and sending an image from the client to the self-host so that, when it is received, it is shown in a PictureBox on a new form. So far I have two endpoints (Task Manager, File Manager) that run under one service, "MainService". I set up all the connections using the .NET 4.0 WCF Configuration wizards, and I'm using netTcpBinding. Problem: I need to know how to communicate between the WCF service library and the self-host, so I can append a chat message received from the client to the self-host form's textbox TextBoxChat, and also invoke a client callback from the self-host when the Send button is clicked, to send the message typed in the self-host textbox TextBoxMessage. Let's say this is the self-host ChatForm. Is this possible in WCF? I would prefer to run ChatEndpoint under MainService, so all endpoints use one port.
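    One common way to wire this up is a duplex contract with a callback interface, so the service (hosted alongside the WinForms app) can push messages back to connected clients over the same netTcpBinding. The sketch below is only an illustration, not the poster's actual code; the contract names, the static event used to hand messages to the form, and the ChatForm hookup are all assumptions.

    using System;
    using System.ServiceModel;

    // Callback contract implemented by the client; the service uses it to push chat messages.
    public interface IChatCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnMessageReceived(string from, string text);
    }

    [ServiceContract(CallbackContract = typeof(IChatCallback))]
    public interface IChatService
    {
        [OperationContract(IsOneWay = true)]
        void SendMessage(string from, string text);
    }

    // Hypothetical implementation living in the service library.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
    public class ChatService : IChatService
    {
        // The self-host form can subscribe to this event to append text to TextBoxChat.
        public static event Action<string, string> MessageArrived;

        public void SendMessage(string from, string text)
        {
            // Hand the message to the WinForms host (marshal to the UI thread there).
            MessageArrived?.Invoke(from, text);

            // Echo back to the calling client through its callback channel.
            var callback = OperationContext.Current.GetCallbackChannel<IChatCallback>();
            callback.OnMessageReceived(from, text);
        }
    }

    On the form side you would subscribe to ChatService.MessageArrived and use Invoke/BeginInvoke to update TextBoxChat, and keep a list of callback channels if you want to broadcast to every connected client.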


  • SOA Partner Community Workspace

    - by JuergenKress
    To share the latest information with the community we use the SOA Community Workspace (SOA Community membership required). In the workspace you can find training material, product presentations in PPT format, product roadmaps, sales kits, marketing kits, the training calendar, and much additional information. Please use this content in the spirit of our partnership, and do not share confidential material externally (be aware of the OTN NDA). The workspace is organized by product category into folders, e.g. SOA, Business Process Management, or Applications & Fusion Middleware. You can also use tags to navigate within the workspace. For large downloads we recommend mapping the workspace as a network drive or using the FTP functions. Please be very careful when you use the workspace, as everybody has been granted full access, including the ability to add and delete documents. Please do NOT delete any content. Each action creates e-mail alerts for subscribed users. You can unsubscribe from these alerts on the admin page: navigate to the All Workspaces tab, click on the workspace, and switch the subscription off. It would be great if you continue to share your best practices and knowledge within the community; for that purpose we have also created the folder Presentations from Partners. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • ASP.NET MVC 2 from Scratch – Part 1 Listing Data from Database

    - by Max
    Part 1 - Listing Data from Database: Let us now learn ASP.NET MVC 2 from Scratch by actually developing a front end website for the Chinook database, which is an alternative to the traditional Northwind database. You can get the Chinook database from here. As always the best way to learn something is by working on it and doing something. The Chinook database has the following schema, a quick look will help us implementing the application in a efficient way. Let us first implement a grid view table with the list of Employees with some details, this table also has the Details, Edit and Delete buttons on it to do some operations. This is series of post will concentrate on creating a simple CRUD front end for Chinook DB using ASP.NET MVC 2. In this post, we will look at listing all the possible Employees in the database in a tabular format, from which, we can then edit and delete them as required. In this post, we will concentrate on setting up our environment and then just designing a page to show a tabular information from the database. We need to first setup the SQL Server database, you can download the required version and then set it up in your localhost. Then we need to add the LINQ to SQL Classes required for us to enable interaction with our database. Now after you do the above step, just use your Server Explorer in VS 2010 to actually navigate to the database, expand the tables node and then drag drop all the tables onto the Object Relational Designer space and you go you will have the tables visualized as classes. As simple as that. Now for the purpose of displaying the data from Employee in a table, we will show only the EmployeeID, Firstname and lastname. So let us create a class to hold this information. So let us add a new class called EmployeeList to the ViewModels. We will send this data model to the View and this can be displayed in the page. public class EmployeeList { public int EmployeeID { get; set; } public string Firstname { get; set; } public string Lastname { get; set; } public EmployeeList(int empID, string fname, string lname) { this.EmployeeID = empID; this.Firstname = fname; this.Lastname = lname; } } Ok now we have got the backend ready. Let us now look at the front end view now. We will first create a master called Site.Master and reuse it across the site. The Site.Master content will be <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.Master.cs" Inherits="ChinookMvcSample.Views.Shared.Site" %>   <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title></title> <style type="text/css"> html { background-color: gray; } .content { width: 880px; position: relative; background-color: #ffffff; min-width: 880px; min-height: 800px; float: inherit; text-align: justify; } </style> <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script> <asp:ContentPlaceHolder ID="head" runat="server"> </asp:ContentPlaceHolder> </head> <body> <center> <h1> My Website</h1> <div class="content"> <asp:ContentPlaceHolder ID="body" runat="server"> </asp:ContentPlaceHolder> </div> </center> </body> </html> The backend Site.Master.cs does not contain anything. In the actual Index.aspx view, we add the code to simply iterate through the collection of EmployeeList that was sent to the View via the Controller. 
So in the top of the Index.aspx view, we have this inherits which says Inherits="System.Web.Mvc.ViewPage<IEnumerable<ChinookMvcSample.ViewModels.EmployeeList>>" In this above line, we dictate that the page is consuming a IEnumerable collection of EmployeeList. So once we specify this and compile the project. Then in our Index.aspx page, we can consume the EmployeeList object and access all its methods and properties. <table class="styled" cellpadding="3" border="0" cellspacing="0"> <tr> <th colspan="3"> </th> <th> First Name </th> <th> Last Name </th> </tr> <% foreach (var item in Model) { %> <tr> <td align="center"> <%: Html.ActionLink("Edit", "Edit", new { id = item.EmployeeID }, new { id = "links" })%> </td> <td align="center"> <%: Html.ActionLink("Details", "Details", new { id = item.EmployeeID }, new { id = "links" })%> </td> <td align="center"> <%: Html.ActionLink("Delete", "Delete", new { id = item.EmployeeID }, new { id = "links" })%> </td> <td> <%: item.Firstname %> </td> <td> <%: item.Lastname %> </td> </tr> <% } %> <tr> <td colspan="5"> <%: Html.ActionLink("Create New", "Create") %> </td> </tr> </table> The Html.ActionLink is a Html Helper to a create a hyperlink in the page, in the one we have used, the first parameter is the text that is to be used for the hyperlink, second one is the action name, third one is the parameter to be passed, last one is the attributes to be added while the hyperlink is rendered in the page. Here we are adding the id=”links” to the hyperlinks that is created in the page. In the index.aspx page, we add some jQuery stuff add alternate row colours and highlight colours for rows on mouse over. Now the Controller that handles the requests and directs the request to the right view. For the index view, the controller would be public ActionResult Index() { //var Employees = from e in data.Employees select new EmployeeList(e.EmployeeId,e.FirstName,e.LastName); //return View(Employees.ToList()); return View(_data.Employees.Select(p => new EmployeeList(p.EmployeeId, p.FirstName, p.LastName))); } Let us also write a unit test using NUnit for the above, just testing EmployeeController’s Index. DataClasses1DataContext _data; public EmployeeControllerTest() { _data = new DataClasses1DataContext("Data Source=(local);Initial Catalog=Chinook;Integrated Security=True"); }   [Test] public void TestEmployeeIndex() { var e = new EmployeeController(_data); var result = e.Index() as ViewResult; var employeeList = result.ViewData.Model; Assert.IsNotNull(employeeList, "Result is null."); } In the first EmployeeControllerTest constructor, we set the data context to be used while running the tests. And then in the actual test, We just ensure that the View results returned by Index is not null. Here is the zip of the entire solution files until this point. Let me know if you have any doubts or clarifications. Cheers! Have a nice day.


  • Sync Google Contacts with QuickBooks

    - by dataintegration
    The RSSBus ADO.NET Providers offer an easy way to integrate with different data sources. In this article, we include a fully functional application that can be used to synchronize contacts between Google and QuickBooks. Like our QuickBooks ADO.NET Provider, the included application supports both the desktop versions of QuickBooks and QuickBooks Online Edition. Getting the Contacts. Step 1: Google accounts include a number of contacts. To obtain a list of a user's Google Contacts, issue a query to the Contacts table. For example: SELECT * FROM Contacts. Step 2: QuickBooks stores contact information in multiple tables. Depending on your use case, you may want to synchronize your Google Contacts with QuickBooks Customers, Employees, Vendors, or a combination of the three. To get data from a specific table, issue a SELECT query against that table. For example: SELECT * FROM Customers. Step 3: Retrieving all results from QuickBooks may take some time, depending on the size of your company file. To narrow your results, you may want to filter by including a WHERE clause in your query. For example: SELECT * FROM Customers WHERE (Name LIKE '%James%') AND IncludeJobs = 'FALSE'. Synchronizing the Contacts. Synchronizing the contacts is a simple process. Once the contacts from Google and the customers from QuickBooks are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete contacts in either data source as needed. Pre-Built Demo Application. The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for QuickBooks V3, and will expire in 2013. Source Code. You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the QuickBooks ADO.NET Data Provider V3, which can be obtained here.
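    The comparison step can be written against the plain ADO.NET interfaces, so it works with whichever concrete connection classes the providers expose. The sketch below is only an outline of that idea, not code from the article: the Email column, the parameter syntax, and matching on Name alone are assumptions, and the two IDbConnection arguments stand in for already-opened provider connections.

    using System;
    using System.Collections.Generic;
    using System.Data;

    static class ContactSync
    {
        // Reads Google contacts and QuickBooks customers, then inserts any Google
        // contact that has no QuickBooks match. Matching on Name only, for brevity.
        public static void PushNewContactsToQuickBooks(IDbConnection google, IDbConnection quickBooks)
        {
            var existing = new HashSet<string>();
            using (var cmd = quickBooks.CreateCommand())
            {
                cmd.CommandText = "SELECT Name FROM Customers WHERE IncludeJobs = 'FALSE'";
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        existing.Add(reader.GetString(0));
            }

            using (var cmd = google.CreateCommand())
            {
                cmd.CommandText = "SELECT Name, Email FROM Contacts";
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                    {
                        string name = reader.GetString(0);
                        if (existing.Contains(name)) continue;   // already synchronized

                        using (var insert = quickBooks.CreateCommand())
                        {
                            // Parameter syntax may differ per provider; shown with '@' placeholders.
                            insert.CommandText = "INSERT INTO Customers (Name, Email) VALUES (@n, @e)";
                            var p1 = insert.CreateParameter(); p1.ParameterName = "@n"; p1.Value = name;
                            var p2 = insert.CreateParameter(); p2.ParameterName = "@e";
                            p2.Value = reader.IsDBNull(1) ? (object)DBNull.Value : reader.GetString(1);
                            insert.Parameters.Add(p1);
                            insert.Parameters.Add(p2);
                            insert.ExecuteNonQuery();
                        }
                    }
            }
        }
    }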


  • Files backup utility with incremental backups that would keep backup device clean

    - by Wojtek
    I've tested a few backup utilities and still haven't found one that satisfies me. Almost every one of them has two options:
    - full backup - not something you want to run frequently
    - incremental backup - seems right, but there's one catch: an incremental backup builds on a base full backup, backing up only the files that were created or changed. The problem is that after some time you've got a lot of unwanted files from old backups bloating your backup device. Also, if you accidentally delete your full (first) backup file, the subsequent incremental backups become useless (you wouldn't be able to restore from them).
    What I'm looking for is a program that backs up files simply by copying them. It would check whether the backup device already contains the file (unchanged):
    - if yes, it proceeds to the next file (the current version is already backed up)
    - if no, it copies the file to the backup device
    - if the device contains a file that no longer exists on our disk, the program deletes it from the backup device
    Is there any utility that works this way? If not, do you have any hints on how to back up a fairly big amount of data (around 20 GB) quite frequently, with incremental backups, without the backup size ballooning?
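    The behaviour described above is essentially a one-way mirror: copy anything new or changed, delete anything on the target that no longer exists at the source. As a rough illustration (not a recommendation of a particular tool), a minimal sketch of that logic could look like the following; comparing files by size and last-write time is an assumption you may want to replace with hashing.

    using System;
    using System.IO;

    static class MirrorBackup
    {
        public static void Mirror(string sourceDir, string targetDir)
        {
            Directory.CreateDirectory(targetDir);

            // Copy new or changed files from source to target.
            foreach (var src in Directory.GetFiles(sourceDir, "*", SearchOption.AllDirectories))
            {
                var relative = src.Substring(sourceDir.Length).TrimStart(Path.DirectorySeparatorChar);
                var dst = Path.Combine(targetDir, relative);
                var srcInfo = new FileInfo(src);
                var dstInfo = new FileInfo(dst);

                bool unchanged = dstInfo.Exists &&
                                 dstInfo.Length == srcInfo.Length &&
                                 dstInfo.LastWriteTimeUtc == srcInfo.LastWriteTimeUtc;
                if (unchanged) continue;                      // current version already backed up

                Directory.CreateDirectory(Path.GetDirectoryName(dst));
                File.Copy(src, dst, overwrite: true);
                File.SetLastWriteTimeUtc(dst, srcInfo.LastWriteTimeUtc);
            }

            // Delete files on the backup device that no longer exist at the source.
            foreach (var dst in Directory.GetFiles(targetDir, "*", SearchOption.AllDirectories))
            {
                var relative = dst.Substring(targetDir.Length).TrimStart(Path.DirectorySeparatorChar);
                var src = Path.Combine(sourceDir, relative);
                if (!File.Exists(src))
                    File.Delete(dst);
            }
        }
    }

    For what it's worth, the built-in Windows tool robocopy with the /MIR switch implements the same mirror semantics, if a ready-made utility is preferred over a script.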


  • How to determine if you should use full or differential backup?

    - by Peter Larsson
    Or ask yourself, "How much of the database has changed since last backup?". Here is a simple script that will tell you how much (in percent) have changed in the database since last backup. -- Prepare staging table for all DBCC outputs DECLARE @Sample TABLE         (             Col1 VARCHAR(MAX) NOT NULL,             Col2 VARCHAR(MAX) NOT NULL,             Col3 VARCHAR(MAX) NOT NULL,             Col4 VARCHAR(MAX) NOT NULL,             Col5 VARCHAR(MAX)         )   -- Some intermediate variables for controlling loop DECLARE @FileNum BIGINT = 1,         @PageNum BIGINT = 6,         @SQL VARCHAR(100),         @Error INT,         @DatabaseName SYSNAME = 'Yoda'   -- Loop all files to the very end WHILE 1 = 1     BEGIN         BEGIN TRY             -- Build the SQL string to execute             SET     @SQL = 'DBCC PAGE(' + QUOTENAME(@DatabaseName) + ', ' + CAST(@FileNum AS VARCHAR(50)) + ', '                             + CAST(@PageNum AS VARCHAR(50)) + ', 3) WITH TABLERESULTS'               -- Insert the DBCC output in the staging table             INSERT  @Sample                     (                         Col1,                         Col2,                         Col3,                         Col4                     )             EXEC    (@SQL)               -- DCM pages exists at an interval             SET    @PageNum += 511232         END TRY           BEGIN CATCH             -- If error and first DCM page does not exist, all files are read             IF @PageNum = 6                 BREAK             ELSE                 -- If no more DCM, increase filenum and start over                 SELECT  @FileNum += 1,                         @PageNum = 6         END CATCH     END   -- Delete all records not related to diff information DELETE FROM    @Sample WHERE   Col1 NOT LIKE 'DIFF%'   -- Split the range UPDATE  @Sample SET     Col5 = PARSENAME(REPLACE(Col3, ' - ', '.'), 1),         Col3 = PARSENAME(REPLACE(Col3, ' - ', '.'), 2)   -- Remove last paranthesis UPDATE  @Sample SET     Col3 = RTRIM(REPLACE(Col3, ')', '')),         Col5 = RTRIM(REPLACE(Col5, ')', ''))   -- Remove initial information about filenum UPDATE  @Sample SET     Col3 = SUBSTRING(Col3, CHARINDEX(':', Col3) + 1, 8000),         Col5 = SUBSTRING(Col5, CHARINDEX(':', Col5) + 1, 8000)   -- Prepare data outtake ;WITH cteSource(Changed, [PageCount]) AS (     SELECT      Changed,                 SUM(COALESCE(ToPage, FromPage) - FromPage + 1) AS [PageCount]     FROM        (                     SELECT CAST(Col3 AS INT) AS FromPage,                             CAST(NULLIF(Col5, '') AS INT) AS ToPage,                             LTRIM(Col4) AS Changed                     FROM    @Sample                 ) AS d     GROUP BY    Changed     WITH ROLLUP ) -- Present the final result SELECT  COALESCE(Changed, 'TOTAL PAGES') AS Changed,         [PageCount],         100.E * [PageCount] / SUM(CASE WHEN Changed IS NULL THEN 0 ELSE [PageCount] END) OVER () AS Percentage FROM    cteSource


  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestrations instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync.   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B returned the id.A+id.B response and this finished the Resync execution. What is interesting here is that several identical id.A messages were submitted and only one id.A+id.B message. Because of this, system B and the Resync must tolerate duplicate messages. We already stated this requirement for system B; now the same requirement applies to the Resync. Let's assume system B was very slow with its first response and the Resync service had time to resubmit two id.A messages. System B then responded not, as in the previous case, with one id.A+id.B but with two id.A+id.B messages. The first of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where does it go? So, we have to add one more internal requirement: the whole solution must tolerate many identical id.A+id.B messages. It is an easy task with BizTalk. I added a "SinkExtraMessages" subscriber (an orchestration with one receive shape) that just receives these messages and does nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A messages almost simultaneously and completely unordered. There is not only the "Create entity" operation but also the Update and Delete operations, and these operations relate to each other: an Update after a Delete does not mean the same as an Update after a Create. In reality there are entities related to each other, say the Order and the Order Items; a change to one of them can start a series of operations on another. Moreover, the system internals are "black boxes" and we cannot predict the exact content and order of the operation series. It is worth saying that I had to spend some time managing the zombie message problems. The zombies are still here, but this is not a problem now - and that is another story. What is interesting in the last design? One orchestration works to help another be more reliable. Why is a two-orchestration design more reliable; isn't that strange? The Sync orchestration handles all the message exchange between the systems; this is where most of the errors can happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
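    To make the tracking record from the earlier drafts concrete, here is a minimal sketch of what the Resync bookkeeping could look like. It is illustrative only: the class and method names are mine, the store is a simple in-memory dictionary rather than the BizTalk artifacts or the database table the post describes, and the retry limit is an arbitrary choice.

    using System;
    using System.Collections.Concurrent;

    // One row per synchronization attempt, keyed by the id in system A.
    class SyncRecord
    {
        public string IdA;
        public string IdB;
        public string EntityType;
        public string Operation;        // Create, Update, Delete
        public bool IsSyncStarted;
        public bool IsSyncFinished;
        public int ResubmitCount;
    }

    class ResyncStore
    {
        private readonly ConcurrentDictionary<string, SyncRecord> _records = new ConcurrentDictionary<string, SyncRecord>();
        private const int MaxResubmits = 3;

        // Called when id.A is first sent to BizTalk (step 2 of the example).
        public void SaveStart(string idA, string entityType, string operation)
        {
            _records.TryAdd(idA, new SyncRecord { IdA = idA, EntityType = entityType, Operation = operation, IsSyncStarted = true });
        }

        // Called when id.A + id.B comes back (step 4). Safe to call more than once:
        // duplicate finish messages simply leave the record finished.
        public void SaveFinish(string idA, string idB)
        {
            if (_records.TryGetValue(idA, out var record))
            {
                record.IdB = idB;
                record.IsSyncFinished = true;
            }
        }

        // Periodic sweep: resubmit unfinished work, give up after a few attempts.
        public void Resync(Action<string> resubmit)
        {
            foreach (var record in _records.Values)
            {
                if (record.IsSyncFinished || record.ResubmitCount >= MaxResubmits) continue;
                record.ResubmitCount++;
                resubmit(record.IdA);   // send id.A to the destination system again
            }
        }
    }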


  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting, and that has seen tremendous growth in the last few years, is Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:
    - Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    - Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    - Improved Rapid Start Kits
    - Extensible Metering and Chargeback
    - Miscellaneous Enhancements
    1. Comprehensive Database Service Catalog. Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service, and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self-service portal since release 1. For customers, it provides the following benefits:
    - presents a collection of standardized database service definitions,
    - defines standardized pools of hardware and software for provisioning,
    - role-based access to cater to different classes of users,
    - automated procedures to provision the predefined database definitions,
    - chargeback plans based on service tiers and database configuration sizes, etc.
    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read-only (requires the Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag
    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (combinations supported in EM 12cR4):
    Primary | Standby [1 or more]
    SI      | -
    SI      | SI
    RAC     | -
    RAC     | SI
    RAC     | RAC
    RON     | -
    RON     | RON
    where RON = RAC One Node, supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to setup this sample service catalog in EM12c. 2. Additional Storage Options for Snap Clone In my previous blog posts, i have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and Time, two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued with the dual solution approach of Hardware and Software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. While in the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus delivering the benefits of database thin cloning, without requiring you to drastically changing the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature, it was first introduced in 11.2.0.2 patchset, it has over the years become more stable and mature. CloneDB leverages a combination of Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on cloneDB, i highly recommend reading the following sources: Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2 Oracle OpenWorld Presentation by Cern: Efficient Database Cloning using Direct NFS and CloneDB The advantages of the new CloneDB integration with EM12c Snap Clone are: Space and time savings Ease of setup - no additional software is required other than the Oracle database binary Works on all platforms Reduce the dependence on storage administrators Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA Testers via the self service portal Uses dNFS to delivers better performance, availability, and scalability over kernel NFS Complete lifecycle of the clones managed by EM12c - performance, configuration, etc 3. Improved Rapid Start Kits DBaaS deployments tend to be complex and its setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to setup Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS). 
One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. Rapid start kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and use of RMAN image backups. The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to setup PDBaaS end-to-end. This kit works for both Oracle's engineered systems like Exadata, SuperCluster, etc and also on commodity hardware. One can draw parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from network to provisioning the DB software. Steps to use the kit: The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup It can be run from this default location or from any server which has emcli client installed For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py The database_cloud_setup.py script takes two inputs: Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools along with host names, oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in case of Exadata, as the boundary is well know via the Exadata system target available in EM. Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal. Once all the xml files have been prepared, invoke the script as follows for PDBaaS: emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml          The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request for databases from the portal. More information available in the Rapid Start Kit chapter in Cloud Administration Guide.  4. Extensible Metering and Chargeback  Last but not the least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customer, partners, system integrators, etc to : Extend chargeback to any target type managed in EM Promote any metric in EM as a chargeback entity Extend list of charge items via metric or configuration extensions Model abstract entities like no. of backup requests, job executions, support requests, etc  A slew of emcli verbs have also been added that allows administrators to create, edit, delete, import/export charge plans, and assign cost centers all via the command line. More information available in the Chargeback API chapter in Cloud Administration Guide. 5. Miscellaneous Enhancements There are other miscellaneous, yet important, enhancements that are worth a mention. 
These mostly have been asked by customers like you. These are: Custom naming of DB Services Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces Every custom name is validated for uniqueness in EM 'Create like' of Service Templates Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels. Profile viewer View the details of a profile like datafile, control files, snapshot ids, export/import files, etc prior to its selection in the service template Cleanup automation - for failed and successful requests Single emcli command to cleanup all remnant artifacts of a failed request Cleanup can be performed on a per request bases or by the entire pool As an extension, you can also delete successful requests Improved delete user workflow Allows administrators to reassign cloud resources to another user or delete all of them Support for multiple tablespaces for schema as a service In addition to multiple schemas, user can also specify multiple tablespaces per request I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)


  • DropSpace Syncs Android Files to Dropbox

    - by ETC
    DropSpace is a free Android application that fixes the primary issue that plagues the official Dropbox app for Android–the lack of true file synchronization. Grab a copy of DropSpace and start enjoying true file syncing on the go. The official Dropbox app is limited to grabbing files from your Dropbox account or pushing files from your phone to your Dropbox account. Actual file synchronization, this manual push/pull model aside, is nowhere to be found. DropSpace fills that gap by enabling file synchronization between your SD card directories and your Dropbox directories. It’s packed with handy features including restricting file syncing to Wi-Fi connection only (great if you don’t want to chew up your very limited data plan) as well as numerous toggles for various settings like whether it should delete remote files if the local file is deleted, how often it should run the sync service, and more. Hit up the link below to grab a copy and take it for a test drive. DropSpace is free and works wherever Android does; Dropbox account required. DropSpace [via Addictive Tips]


  • call function between classes [closed]

    - by aziz joh
    Hello, I have 3 classes: class A, class B, and class C. Class A is the main class and contains the main function. In main I create instances of B and C as b1, b2, and c1. Class B has a vector (v) holding a list of ints, and three functions: add, get, and delete. Everything in the class is public. In class C I have a function that needs to call B::get on a b object. What I want to know is how I can make c1 call get of b1 to return a value from the v in b1, and after that use add of b2 to add a new item to the v of b2. Thanks in advance. This is an example:
    class a {
        int main() {
            b b1, b2;
            c c1;
            b1.add(10);
            b1.add(20);
            c1.play();
        }
    };
    class b {
        vector<int> v;
        void add(int i) { v.push_back(i); }
        int get() { int i = v.at(0); return i; }
    };
    class c { // take something from b1 and add it to b2
        void play() {
            int i = b.get();  // should take it from b1
            b.add(i * 2);     // should add it to b2
        }
    };
    Please, I need your help. I've been searching for days to solve this problem.


  • How to find keycodes for Fn + keys in Ubuntu 11.10

    - by budwiser
    I'm trying to find out the keycode for the Fn+← (left arrow) keypress. Xev outputs:
    FocusOut event, serial 36, synthetic NO, window 0x3c00001, mode NotifyGrab, detail NotifyAncestor
    FocusIn event, serial 36, synthetic NO, window 0x3c00001, mode NotifyUngrab, detail NotifyAncestor
    KeymapNotify event, serial 36, synthetic NO, window 0x0, keys: 4294967213 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    If it is telling me the keycode here, I'm not able to interpret it, so help would be appreciated. I'm also curious to find out whether it's possible to bind something to Fn+Del, but when I try that combination, xev outputs:
    KeyPress event, serial 36, synthetic NO, window 0x3c00001, root 0xad, subw 0x0, time 1984903, (-666,480), root:(53,533), state 0x0, keycode 119 (keysym 0xffff, Delete), same_screen YES, XLookupString gives 1 bytes: (7f) " ", XmbLookupString gives 1 bytes: (7f) " ", XFilterEvent returns: False
    KeyRelease event, serial 36, synthetic NO, window 0x3c00001, root 0xad, subw 0x0, time 1985008, (-666,480), root:(53,533), state 0x0, keycode 119 (keysym 0xffff, Delete), same_screen YES, XLookupString gives 1 bytes: (7f) " ", XFilterEvent returns: False
    which is exactly the same as pressing Del without Fn. So, in short: How can I find the keycode for Fn+← (left arrow)? Is it even possible to bind something to Fn+Del, or am I tilting at windmills here?


  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now, what keeps me from running the tests in parallel is the database records that are manipulated in the tests. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run them in parallel against the same database without generating unique test data fields for each test. For example: Testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. Testing that a duplicate row is not allowed, I'll insert a row with column A = foo and column B = bar and then use the UI to try to do the exact same thing; this displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
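    One common pattern (offered only as a sketch, not as the asker's solution) is to stop sharing literal values like foo/bar entirely and have each test derive its own unique values, so parallel tests can never collide and no cross-test cleanup is needed. The names below are hypothetical, and the in-memory dictionary stands in for the real database purely to keep the example self-contained.

    using System;
    using System.Collections.Concurrent;
    using NUnit.Framework;

    [TestFixture]
    public class RowCreationTests
    {
        // Stand-in for the real table; in the actual suite this would be the database
        // reached through the existing helpers.
        private static readonly ConcurrentDictionary<(string A, string B), bool> Rows = new ConcurrentDictionary<(string, string), bool>();

        // Each test derives values no other test (or parallel run) will use.
        private static string Unique(string prefix) => prefix + "-" + Guid.NewGuid().ToString("N");

        private static bool TryCreateRow(string a, string b) => Rows.TryAdd((a, b), true);

        [Test]
        public void Creating_a_row_succeeds()
        {
            string a = Unique("foo"), b = Unique("bar");
            Assert.That(TryCreateRow(a, b), Is.True);     // no other test can own this pair
        }

        [Test]
        public void Creating_a_duplicate_row_is_rejected()
        {
            string a = Unique("foo"), b = Unique("bar");
            TryCreateRow(a, b);                           // arrange the existing row
            Assert.That(TryCreateRow(a, b), Is.False);    // the duplicate attempt must fail
        }
    }

    With values generated this way the pre-test delete helpers become unnecessary, which is usually what makes parallel execution safe.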


  • ASP.Net Fails to Detect IE10 without .Net Hotfix

    - by Ben Barreth
    Benny Mathew recently alerted us that he couldn’t create, edit or delete posts on GeeksWithBlogs in IE10 (Windows 8). It turns out the problem is that ASP.Net fails to detect IE10 causing a javascript error on postback. We’ll be applying a hotfix to the .Net framework on GWB shortly to fix this issue. In the meantime you can use the simple workaround outlined below. (Note that if you create posts using Windows Live Writer you won’t have this issue creating posts). Log into your GWB Account and go to the “Posts” page. Hit F12 to bring up the developer window in IE10. Click on the ‘Browser Mode’ option and change it to IE9. You should now be able to create/edit/delete posts in GWB. Note this also fixes any other sites in IE10 that might not yet have the hotfix applied. You can tell if the hotfix is the likely culprit if you’re using IE10 and see the following error in the Web Developers Console area: SCRIPT5009: '__doPostBack' is undefined Let us know ASAP if there are other issues you are experiencing that aren’t fixed by this workaround!


  • Database Vault integration available

    - by Anthony Shorten
    One of the major features of Oracle Utilities Application Framework V4.1 is the provision of a base solution for integration to the Database Vault product. Database Vault is part of Oracle’s security portfolio of product and allows database user permissions to be locked down to only allow appropriate users appropriate access to the product data. By default, when you install the product database, administrators and SYSDBA users have full DML (SELECT, INSERT, UPDATE and DELETE access) to the schemas they own and in the case of the SYSDBA users, all schemas on the database. This can be perceived as an issue. Database Vault allows an additional layer of security to disable inappropriate access. In Oracle Utilities Application Framework, a prebuilt Database Vault solution has been provided to provide base DML access to product data for product users only. The solution is shipped with the database installation files and includes a set of SQL files to create, disable, enable and delete the Database Vault objects. The solution contains a Database Vault Realm, RuleSets, Rules and Command Rules that can be used as is or extended to meet site specific needs. The solution is consistent with other Database Vault solutions provided for other Oracle applications such as PeopleSoft, E-Business Suite, JD-Edwards and Siebel. Customers familiar with the database vault solutions for those products will recognize the similarities between the solutions. For more details of the solution, refer to the Database Vault Integration for Oracle Utilities Application Framework Based Products on My Oracle Support at KB Id: 1290700.1.

    Read the article

  • Why Wouldn't Root Be Able to Change a Zone's IP Address in Oracle Solaris 11?

    - by rickramsey
    You might assume that if you have root access to an Oracle Solaris zone, you'd be able to change the zone's IP address. If so, you'd proceed along these lines ... First, you'd log in: root@global_zone:~# zlogin user-zone Then you'd remove the IP interface: root@user-zone:~# ipadm delete-ip vnic0 Next, you'd create a new IP interface: root@user-zone:~# ipadm create-ip vnic0 Then you'd assign the IP interface a new IP address (10.0.0.10): root@user-zone:~# ipadm create-addr -a local=10.0.0.10/24 vnic0/v4 ipadm: cannot create address: Permission denied Why would that happen? Here are some potential reasons: You're in the wrong zone. Nobody bothered to tell you that you were fired last week. The sysadmin for the global zone (probably your ex-girlfriend) enabled link protection mode on the zone with this sweet little command: root@global_zone:~# dladm set-linkprop -p protection=mac-nospoof,restricted,ip-nospoof vnic0 How'd your ex-girlfriend learn to do that? By reading this article: Securing a Cloud-Based Data Center with Oracle Solaris 11 by Orgad Kimchi, Ron Larson, and Richard Friedman. When you build a private cloud, you need to protect sensitive data not only while it's in storage, but also during transmission between servers and clients, and when it's being used by an application. When a project is completed, the cloud must securely delete sensitive data and make sure the original data is kept secure. These are just some of the many security precautions a sysadmin needs to take to secure data in a cloud infrastructure. Orgad, Ron, and Richard explain the rest and show you how to employ the security features in Oracle Solaris 11 to protect your cloud infrastructure. It is Part 2 of a three-part article on cloud deployments that use the Oracle Solaris Remote Lab as a case study. About the photograph: that's the fence separating a small group of tourist cabins from a pasture in the small town of Tropic, Utah.

    Read the article

  • CodePlex Daily Summary for Sunday, December 02, 2012

    CodePlex Daily Summary for Sunday, December 02, 2012Popular ReleasesD3 Loot Tracker: 1.5.6: Updated to work with D3 version 1.0.6.13300DirectQ: DirectQ II 2012-11-29: A (slightly) modernized port of Quake II to D3D9. You need SM3 or better hardware to run this - if you don't have it, then don't even bother. It should work on Windows Vista, 7 or 8; it may also work on XP but I haven't tested. Known bugs include: Some mods may not work. This is unfortunately due to the nature of Quake II's game DLLs; sometimes a recompile of the game DLL is all that's needed. In any event, ensure that the game DLL is compatible with the last release of Quake II first (...Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the Administration console Bugs fixes and improvement version 2.2.215.3JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5For detailed release notes check the release notes. Handlebars template engine supportImplement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blogpost: Handlebars templates in JayData Handlebars helpers and model driven commanding in JayData Easy JayStorm cloud data managementManage cloud data using the same syntax and data management concept just like any other data ...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: • A Readme file • The Executable • The source code (Visual Studio project) Enhancements include: -- Support for Columnstore indexes in SQL Server 2012 -- Ability to create TSQL scripts for staging table and index creation operations -- Full support for global date and time formats, locale independent -- Support for binary partitioning column types -- Fixes to is...NHook - A debugger API: NHook 1.0: x86 debugger Resolve symbol from MS Public server Resolve RVA from executable's image Add breakpoints Assemble / Disassemble target process assembly More information here, you can also check unit tests that are real sample code.PDF Library: PDFLib v2.0: Release notes This new version include many bug fixes and include support for stream objects and cross-reference object streams. New FeatureExtract images from the PDFDocument.Editor: 2013.5: Whats new for Document.Editor 2013.5: New Read-only File support New Check For Updates support Minor Bug Fix's, improvements and speed upsMCEBuddy 2.x: MCEBuddy 2.3.10: Critical Update to 2.3.9: Changelog for 2.3.10 (32bit and 64bit) 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine Changelog for 2.3.9 (32bit and 64bit) 1. Added support for WTV output profile 2. 
Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. Added support for better TS fil...DotNetNuke® Community Edition CMS: 07.00.00: Major Highlights Fixed issue that caused profiles of deleted users to be available Removed the postback after checkboxes are selected in Page Settings > Taxonomy Implemented the functionality required to edit security role names and social group names Fixed JavaScript error when using a ";" semicolon as a profile property Fixed issue when using DateTime properties in profiles Fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox Lists no longer references the userkey. Instead they refer to the UUID field for input value. You can now delete, export, import content from database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from its parent sites. The view parameter can be changed through the page_context.current value. Addition of c...Facebook Windows 8 Sample: Facebook Windows 8 Sample: The current drop holds two versions of the sample: A basic version that uses a Facebook application to list the content of facebook page. A full version including the use of Bing Maps sdk for positioning the restaurant in a map, and showing how to get there. See Developing a Windows Store App to learn how to use the Bing Maps AJAX Control to add Bing Maps to your Windows Store app.Commerce Server Tools: Delete a Site (CS10): Updated from the Commerce Server 2002 version (Delete Site) to work with CS10. The tool will delete the CS Site, all associated resources, databases, IIS Sites, and folders disk.RaptorDB - The Document Store: v1.9.0: v1.9.0 - speed increase writing bitmap indexes to disk - bug fix hoot search with wildcards - bug fix datetime indexing with UTC time (all times are localtime) - upgrade to fastJSON v2.0.9 - upgrade to fastBinaryJSON v1.3.5 - changed CodeDOM to Reflection.Emit for MonoDroid compatibility - more optimized bitmap storage format (save offsets if smaller than WAH) - fixed path seperator character for monodroid and windows compatibility changed to Path.DirectorySeparatorChar - new generic Query i...Antenna Tracking Unit - Projet Tuteuré /w RFTronic: Présentation Projet: Ci-joint la présentation du projet par l'entreprise RFTronic.Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0: Important Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites You need to install the Microsoft Visual C++ 2010 Redistributable Package. You can find it at: x64 http://www.microsoft.com/download/en/details.aspx?id=14632x86 http://www.microsoft.com/download/en/details.aspx?id=5555 Wsp now uses Rx (Reactive Extensions) and .Net 4.0 3.0 Enhancements I changed the topology from a hierarchy to peer-to-peer groups. 
This should provide much greater scalability and more fault-resi...Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.New ProjectsAppWebStore: Application Store in ASP.NET MVCBaseCrafter: BaseCrafter ProjectBranding SharePoint 2013: A how-to project for branding in SharePoint 2013Clrizr: A set of LESS files that automatically define color shades and font colors for use in LESS CSS enabled web applications. Code is located in /Styles/Clrizr/didxaza: a project to learn to use Team Foundation ServerFastMapper - CONVENTION BASED MAPPER: Coming Soon Google Earth Wrapper: Google Earth C# wrapperKirikiri (TVP) 2 Core: Simplified Chinese Translation: Simplified Chinese translated version of Kirikiri (TVP) 2 Core (http://kikyou.info/tvp), based on the SVN Head development version.Linq to CRM for Silverlight: Linq to CRM framework allows a Silverlight application to communicate with MS Dynamics CRM 2011 through the "LINQ to CRM" ORM layer.MovieBuddy: a simple movie UIMyPractice: Some small applications which mainly use Microsoft technology, Win8 Metro, Javascript, WCF, C#, etc.MySubstitutionCipher: Substitution cipher educational programOmega Game Engine: Omega Game Engine, an engine for creating 2D and 3D games with DirectX 9/10OpenXML PowerPoint Generation: Sample project for the use of the OpenXML API 2.5 with PowerPointpdh2.0: Performance_counter via pdh 2.0Riksdagsappen: This project aims to build a Windows 8 Store App whose goal is to spread awareness of the Swedish Riksdag, its members and their work. Service Stack Docs: Maintain and generate documentation for Service Stack services directly from code.SharpDX for Rastertek tutorials: This project is an introduction to SharpDX, following the Rastertek tutorials which are written in C++.Simple Set for .net: a simple set class that makes it easier for students of computer science to manage sets and set-related operations. It's developed in C#.Spk.Controls: The Spk.Controls is a visual control library for .NET Windows Forms.Stateless Designer: Visual Studio extension to support visual design of stateless state machinesTeamWorkProject: this project is for team-working training outside of the companyUsing the Microsoft Kinect to control GoogleMap: The project is a WPF application that uses Microsoft Kinect to control Google Maps. Feel free to learn the WPF MVVM pattern and Kinect development from it!VR Player: VR Player is an experimental Virtual Reality Media Player for Head-Mounted Display devices like the Oculus Rift.VS Tool for WSS 3.0: Visual Studio (2005 and 2008) add-ons for WSS. Included: - schema.xml explorer

    Read the article

  • Web browser downloads only open target folders - cannot open files

    - by Pavlos G.
    After installing xubuntu packages in order to try out xfce, I reverted back to gnome2. During the first login, I noticed that thunar was now selected as the default file manager. The Preferred Applications menu is also missing now, so I could not set nautilus as the default. I removed all the xubuntu packages (including thunar), and then when I tried to open a folder I was asked to select the default file manager - that's how I got nautilus back. The next problem I'm facing has to do with files downloaded from web browsers: the Open and Open containing folder options produce exactly the same result. If I double-click on a file, it just opens the containing folder instead of opening the file with its associated application (e.g. libreoffice writer for .doc/.odt, smplayer for .avi/.wmv, etc.). The problem happens in both Firefox and Chrome. Through nautilus, all files open correctly. So far I've tried the following: deleting/recreating mimeTypes.rdf in my FF profile, creating a new profile in FF, deleting/recreating ~/.local/share/applications/mimeapps.list, and checking this similar article. None of them worked. Any ideas on the issue would be appreciated.

    Read the article

  • Generalise variable usage inside code

    - by Shirish11
    I would like to know whether it is good practice to generalise variables (use a single variable to store all the values). Consider this simple example: String querycre, queryins, queryup, querydel; querycre = 'Create table XYZ ...'; execute querycre; queryins = 'Insert into XYZ ...'; execute queryins; queryup = 'Update XYZ set ...'; execute queryup; querydel = 'Delete from XYZ ...'; execute querydel; and String query; query = 'Create table XYZ ...'; execute query; query = 'Insert into XYZ ...'; execute query; query = 'Update XYZ set ...'; execute query; query = 'Delete from XYZ ...'; execute query; In the first case I use 4 strings, each storing the data needed to perform the action mentioned in its suffix. In the second case, just one variable stores all of the data. Having different variables makes the code easier for someone else to read and understand, but having too many of them makes it difficult to manage. Also, does having too many variables hamper performance?
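
    A middle ground, sketched below in Java, is to avoid both the four near-identical variables and the single reused one: pass each SQL string straight to a small helper so the statement itself carries the intent. This is only an illustration; the class and method names are hypothetical and the SQL is elided exactly as in the question:

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;

        // Hypothetical helper class: no reused "query" variable and no four variables either.
        public final class XyzMaintenance {

            private final Connection connection;

            public XyzMaintenance(Connection connection) {
                this.connection = connection;
            }

            // Each SQL string goes straight to this helper, so no intermediate variable is needed.
            private void execute(String sql) throws SQLException {
                try (Statement statement = connection.createStatement()) {
                    statement.execute(sql);
                }
            }

            public void rebuild() throws SQLException {
                execute("CREATE TABLE XYZ ...");
                execute("INSERT INTO XYZ ...");
                execute("UPDATE XYZ SET ...");
                execute("DELETE FROM XYZ ...");
            }
        }

    As for performance, a handful of local String variables is negligible; readability and scope are the real concerns, and keeping each variable's scope as small as possible usually settles the question.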

    Read the article

  • Best scripting language for project [on hold]

    - by Dave
    This is a subjective question, but I don't know where else to ask it. I'd appreciate it if someone could direct me to an appropriate scripting language for my project. I'm a little new at this, so I'd appreciate any help. The project is a website that will display a list of photo subject groups (such as "nature", "people", "sports", etc.) on the home page. The photos will all be in subdirectories of the main photo directory (photos), and each subject group will represent a subdirectory of photos. For example, in the directory photos there might be 3 subdirectories, "nature", "people" and "sports", and in each of those subdirectories there will be the actual photos. The idea is that when the website owner wants to update/add/delete a subject group, all he has to do is add, delete or update a subdirectory of the photos directory. This means, I think, that I need a scripting language that can read the directories and files in the website and then send a web page with the information in it. What is the simplest and easiest scripting language to do this in? Any ideas? Thanks
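
    Whatever server-side language ends up being chosen, the core of the site is just "list the subdirectories of the photos directory and render their names". A minimal sketch of that idea in Java, purely as an illustration of the logic (the directory path and class name are made up; a scripting language such as PHP or Python would express the same thing in a few lines):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;

        // Illustration only: the subject groups are simply the subdirectories of photos/.
        public class SubjectGroups {

            // Returns the names of the subdirectories of the given photos directory.
            public static List<String> list(File photosDir) {
                List<String> groups = new ArrayList<>();
                File[] entries = photosDir.listFiles(File::isDirectory);
                if (entries != null) {
                    for (File dir : entries) {
                        groups.add(dir.getName());
                    }
                }
                return groups;
            }
        }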

    Read the article

  • Windows Azure Horror Story: The Phantom App that Ate my Data

    - by jasont
    Originally posted on: http://geekswithblogs.net/jasont/archive/2013/10/31/windows-azure-horror-story-the-phantom-app-that-ate-my.aspxI’ve posted in the past about how my company, Stackify, makes some great tools that enhance the Windows Azure experience. By just using the Azure Management Portal, you don’t get a lot of insight into what is happening inside your instances, which are just Windows VMs. This morning, I noticed something quite scary, and fittingly enough, it’s Halloween. I deleted a deployment after doing a new deployment and VIP swap. But, after a few minutes, the instance still appeared in my Stackify dashboard. I thought this odd, as we monitor these operations and remove devices that have been deleted. Digging in showed that it wasn’t a problem with Stackify. This instance was still there, and still sending data. You’ll notice though that the “Status” check shows that the instance no longer exists. And it gets scarier. I quickly checked the running services, and sure enough, the Windows Azure Guest Agent (which runs worker roles) was still running: “Surely, this can’t be” I’m thinking at this point. Knowing that this worker role is in our QA environment, it has debug logging turned on. And, using our log file tailing feature, I can see that, yes, indeed, this deleted deployment is still executing code. Pulling messages off queues. Sending alerts. Changing data. Fortunately, our tools give me the ability to manually disable the Windows service that runs this code. In the meantime, I have contacted everyone I know (and support) at Microsoft to address this issue. Needless to say, this is concerning. When you delete a deployment, it needs to actually delete. Not continue to eat your data. As soon as I learn more, I’ll be posting an update.

    Read the article

  • Graph data structures and journal format for mini-IDE

    - by matec
    Background: I am writing a small/partial IDE. Code is internally converted/parsed into a graph data structure (for fast navigation, syntax checking, etc.). Undo/redo functionality (also between sessions) and restoring from a crash are implemented by writing to and reading from a journal. The journal records modifications to the graph (not to the source). Question: I am hoping for advice on the choice of data structure and journal format. For the graph I see two possible versions: g-a Graph edges are implemented so that one node stores references to other nodes via memory addresses. g-b Every node has an ID. There is an ID-to-memory-address map. The graph uses IDs (instead of addresses) to connect nodes. Moving along an edge from one node to another requires a lookup in the ID-to-address map each time. And also for the journal: j-a There is a current node (like the current working directory in a shell + file-system setting). The journal contains entries like "create new node and connect to current", "connect first child of current node" (relative IDs). j-b The journal uses absolute IDs, e.g. "delete edge 7 - 5", "delete node 5". I could e.g. combine g-a with j-a or combine g-b with j-b. In principle g-b and j-a should also be possible. [My first attempt was g-a and a version of j-b that used addresses, but that turned out to cause severe restrictions: nodes cannot change their addresses (or the journal would have to keep track of that), and using the journal between two sessions is a mess (or even impossible).] I wonder whether variant a, variant b, or a combination would be a good idea, what advantages and disadvantages they would have, and especially whether some variant might cause trouble later.
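
    For what the g-b / j-b combination looks like concretely, here is a minimal sketch in Java (class and field names are made up for illustration): nodes are addressed by stable IDs, edges are stored as ID references, and each journal entry records an operation against absolute IDs, so entries remain valid across sessions even though memory addresses change:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Nodes are identified by stable IDs; edges store IDs, not memory addresses.
        class Node {
            final long id;
            final List<Long> edges = new ArrayList<>();  // IDs of connected nodes
            Node(long id) { this.id = id; }
        }

        class Graph {
            private final Map<Long, Node> nodesById = new HashMap<>();  // the ID-to-address map
            private long nextId = 1;

            long createNode() {
                long id = nextId++;
                nodesById.put(id, new Node(id));
                return id;
            }

            void connect(long from, long to) {
                nodesById.get(from).edges.add(to);
            }

            Node resolve(long id) {  // the lookup paid on every traversal step in variant g-b
                return nodesById.get(id);
            }
        }

        // A j-b style journal entry uses absolute IDs, e.g. "EDGE_DELETED 7 5" or "NODE_DELETED 5".
        class JournalEntry {
            final String operation;
            final long[] nodeIds;
            JournalEntry(String operation, long... nodeIds) {
                this.operation = operation;
                this.nodeIds = nodeIds;
            }
        }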

    Read the article

  • Setting up a shared media drive

    - by Sam Brightman
    I want a shared media drive to be transparently usable by all users, while also sticking to FHS and Ubuntu standards. The former takes priority if necessary. I currently mount it at /media/Stuff, but /media is supposed to be for external media, I believe. The main issue is setting permissions so that read and write access to the drive can be granted to multiple users working within the same directories. InstallingANewHardDrive seems both slightly confused and not what I want. It claims that this sets ownership for the top-level directory (despite the recursion flag): sudo chown -R USERNAME:USERNAME /media/mynewdrive And that this will let multiple users create files and sub-directories but only delete their own: sudo chgrp plugdev /media/mynewdrive sudo chmod g+w /media/mynewdrive sudo chmod +t /media/mynewdrive However, the group-writable bit does not seem to get inherited, which is troublesome for keeping things organised (it prevents creation inside sub-folders originally made by another user). The sticky bit is probably also unwanted for the same reason, although currently it seems that userA (perhaps the owner of the mount point?) can delete userB's files, but not vice versa. This is fine, as long as userB can create files inside the directories of userA. So: What is the correct mount point? Is plugdev the correct group? Most importantly, how do I set up permissions to maintain an organised media drive? I do not want to be running cron jobs to set permissions regularly!

    Read the article

  • Question about a simple design problem

    - by Uri
    At work I stumbled upon a method. It made a query and returned a String based on the result of the query, such as the ID of a customer. If the query didn't return a single customer, it returned null. Otherwise, it returned a String with their IDs. It looked like this: String error = getOwners(); if (error != null) { throw new Exception("Can't delete, the flat is owned by: " + error); } ... Ignoring the fact that getOwners() returns null when it should instead return an empty String, two things are happening here: it checks whether the flat is owned by someone, and then it returns the owners. I think more readable logic would be this: if (isOwned) { throw new Exception("Can't delete, the flat is owned by: " + getOwners()); } ... The problem is that the first way does with one query what I would do with two queries to the database. What would be a good solution here, in terms of both good design and efficiency?
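
    One way to keep the single query and still read naturally is to fetch the owners once, bind the result to a well-named variable (and have the query return an empty collection instead of null), then branch on emptiness. A hedged sketch in Java; Flat, findOwnerIds and the surrounding names are hypothetical stand-ins for whatever the real code uses:

        import java.util.List;

        public class FlatDeletion {

            public void deleteFlat(Flat flat) throws Exception {
                List<String> owners = findOwnerIds(flat);  // the one and only query
                if (!owners.isEmpty()) {
                    throw new Exception("Can't delete, the flat is owned by: "
                            + String.join(", ", owners));
                }
                // ... proceed with the deletion
            }

            private List<String> findOwnerIds(Flat flat) {
                // hypothetical lookup that returns an empty list (never null) when there are no owners
                return List.of();
            }
        }

        class Flat { }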

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should be used in place of the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says: an IndexWriter is thread-safe; the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader); all changes are buffered and flushed periodically (so they seem to encourage using a single instance); but: the changes, although flushed, will only be visible after commit or close; after you finish making updates (add/delete), the instance should be closed. I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (only commit and never close it)? EDIT: What's more, if I use NRTManager, how can I make a commit? Is it even possible?
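
    A sketch of the usual arrangement, assuming a Lucene 4.x-era API (NRTManager was removed in later versions in favour of other classes, and exact signatures vary by release): a single long-lived IndexWriter shared by all indexing threads, a SearcherManager built on top of it for near-real-time searches, commit() called periodically for durability, and close() only at shutdown. Treat this as an illustration rather than the canonical pattern:

        import java.io.File;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.SearcherManager;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;
        import org.apache.lucene.util.Version;

        // One IndexWriter for the lifetime of the application; searchers come from SearcherManager.
        public class SearchService implements AutoCloseable {

            private final IndexWriter writer;
            private final SearcherManager searcherManager;

            public SearchService(File indexDir) throws Exception {
                Directory directory = FSDirectory.open(indexDir);
                IndexWriterConfig config =
                        new IndexWriterConfig(Version.LUCENE_40, new StandardAnalyzer(Version.LUCENE_40));
                writer = new IndexWriter(directory, config);            // kept open, never closed per update
                searcherManager = new SearcherManager(writer, true, null);
            }

            public void add(Document doc) throws Exception {
                writer.addDocument(doc);
                searcherManager.maybeRefresh();  // makes the change visible to newly acquired searchers
                // writer.commit() can be called here or on a timer; commit is for durability,
                // visibility for searches comes from the refresh above.
            }

            public IndexSearcher acquireSearcher() throws Exception {
                return searcherManager.acquire();
            }

            public void releaseSearcher(IndexSearcher searcher) throws Exception {
                searcherManager.release(searcher);
            }

            @Override
            public void close() throws Exception {
                searcherManager.close();
                writer.close();                  // only once, at shutdown
            }
        }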

    Read the article

< Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >