Search Results

Search found 11618 results on 465 pages for 'shared storage'.

Page 299/465

  • Free NOSQL database for use with C# client [closed]

    - by Mitten
    I've never used NoSQL databases before, but so far they seem like the best data storage solution for my project. I am going to implement a data mining application. The data I would like to mine is thousands of documents which cannot be imported into data mining applications directly. To make the import easier and faster (than importing thousands of documents) I am planning to import these documents into a NoSQL database first and then import the NoSQL database into the data mining software. At the very least, once I have all the data in a NoSQL database I should be able to code the simplest data mining logic myself. Am I correct that NoSQL databases allow you to create records of data, but don't mandate that all the records adhere to the same data schema (the same column names/types as in a classic table-oriented SQL database)? I think for each document I would create a row/entry/object (not sure what the correct term in use in the NoSQL world is) which would be a string id, a few (columns) of unstructured text data, and a dozen or so columns mostly of datetime and integer types. As its name suggests, NoSQL does not support SQL query syntax, but it does support locating an object (row/entry?) by its unique id. Does NoSQL support querying objects using property=value syntax? Unfortunately most of the free NoSQL databases only support Java/C++ clients; which free NoSQL database would you recommend for a C# programmer?
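    The schemaless storage and property=value lookups asked about above can be sketched in a few lines. The example below is a deliberately language-neutral illustration in Python (not C#, and not tied to any particular NoSQL product): plain dictionaries stand in for documents, one lookup goes by unique id, the other filters by property=value.

        # Hypothetical in-memory stand-in for a document store: records share no
        # fixed schema, are addressable by a string id, and can be filtered by
        # property=value without any SQL.
        documents = {
            "doc-001": {"title": "Invoice 42", "created": "2010-05-01", "words": 1200},
            "doc-002": {"title": "Memo", "body": "unstructured text...", "priority": 3},
        }

        def find_by_id(doc_id):
            # lookup by unique id
            return documents.get(doc_id)

        def find_by_property(name, value):
            # property=value query: every document that has the field and matches,
            # regardless of what other fields it happens to carry
            return [d for d in documents.values() if d.get(name) == value]

        print(find_by_id("doc-002"))
        print(find_by_property("priority", 3))

    Real document stores expose the same two operations through their client drivers; for instance, MongoDB and RavenDB are document databases that have offered C#/.NET clients, so the choice largely comes down to which driver feels most natural from C#.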

    Read the article

  • EPM 11.1.2.2 Architecture: Essbase

    - by Marc Schumacher
    Since a lot of components exist to access or administer Essbase, there are also a couple of client tools available. End users typically use the Excel Add-In or SmartView nowadays. While the Excel Add-In talks to the Essbase server directly using various ports, SmartView connects to Essbase through Provider Services using the HTTP protocol. The ability to communicate over a single port is one of the major advantages of SmartView over the Excel Add-In. If you consider using the Excel Add-In going forward, please make sure you are aware of the Statement of Direction for this component. The Administration Services Console, Integration Services Console and Essbase Studio are clients that are mainly used by Essbase administrators or application designers. While Integration Services and Essbase Studio are used to set up Essbase applications by loading metadata, or simply for data loads, Administration Services is used for all kinds of Essbase administration. All clients use only one or two ports to talk to their server counterparts, which makes it easy for them to work through firewalls. Although the clients for Provider Services (SmartView) and Administration Services (Administration Services Console) use only a single port to communicate with their backend services, the backend services themselves need the configured Essbase port range to talk to the Essbase server. Any communication with repository databases is done using JDBC connections. Essbase Studio and Integration Services use different technologies to talk to the Essbase server: Integration Services uses CAPI, Essbase Studio uses JAPI. However, both use the configured port range on the Essbase server to talk to Essbase. Connections to data sources are based on either ODBC (Integration Services, Essbase) or JDBC (Essbase Studio). As with all other components discussed previously, when setting up firewall rules be aware that all services may need to talk to external authentication sources; this is not only needed for Shared Services.

    Read the article

  • Estimating compression ratios - Compression Advisor

    - by lsarecz
    From Oracle Database 11g onwards, OLTP databases can also be compressed efficiently with the Advanced Compression option. Not only does the volume of stored data drop to a half or even a quarter, but database performance can also improve if the system is I/O bound (and it usually is). Exactly how much compression to expect from introducing Advanced Compression can be estimated very well with the Compression Advisor tool. It can estimate not only the OLTP compression ratio but, from 11gR2 onwards, also the HCC (Hybrid Columnar Compression) ratio, which is available with Exadata Database Machine, Pillar Axiom or ZFS Storage. Estimating HCC compression requires only an 11gR2 database; the special target hardware (Exadata, Pillar, ZFS) is not needed. The Compression Advisor is in fact exposed through the DBMS_COMPRESSION package. The package defines 6 constants for selecting the desired compression levels:
        Constant                Type    Value  Description
        COMP_NOCOMPRESS         NUMBER  1      No compression
        COMP_FOR_OLTP           NUMBER  2      OLTP compression
        COMP_FOR_QUERY_HIGH     NUMBER  4      High compression level for query operations
        COMP_FOR_QUERY_LOW      NUMBER  8      Low compression level for query operations
        COMP_FOR_ARCHIVE_HIGH   NUMBER  16     High compression level for archive operations
        COMP_FOR_ARCHIVE_LOW    NUMBER  32     Low compression level for archive operations
    The GET_COMPRESSION_RATIO stored procedure analyzes the table to be compressed. It can analyze only one table at a time, or optionally one of its partitions, by making a copy of the table in a tablespace designated or created specifically for this purpose. If the analysis is run for several compression levels at once, it makes that many copies of the table. For a good approximation (+/-5%) the table or partition should contain at least 1 million rows. In 11gR1 the GET_RATIO procedure of the DBMS_COMP_ADVISOR package was used instead, but it did not yet support HCC estimation. It is also worth looking at and trying the formatting tool published on Tyler Muth's blog, which turns the Compression Advisor output into an easily readable format. Finally, a summary of what the Advanced Compression option actually includes, since users are often unsure what they are paying for: Data Guard Network Compression, Data Pump Compression (COMPRESSION=METADATA_ONLY does not require the Advanced Compression option), Multiple RMAN Compression Levels (RMAN DEFAULT COMPRESS does not require the Advanced Compression option), OLTP Table Compression, and SecureFiles Compression and Deduplication. Based on this, for RMAN the default compression (BZIP2) level can be used free of charge, while the newer ZLIB level requires the Advanced Compression option. ZLIB uses the CPU more efficiently, i.e. it is much faster, but it achieves a lower compression ratio.

    Read the article

  • Knowledge Transfer without a Plan

    - by Kanini
    Hello...We are doing work for a particular client managing their CRM implementation. (The CRM itself is a product which has been largely customized to suit my client's needs). Now, they want us to manage the Oracle batch jobs/ETL as well. And for this, they are ready to provide us with Knowledge Transfer. (The Oracle batch jobs/ETL is managed in-house by the client now). After much persuasion, I got one of the Project Lead (designation-wise) to email the client asking for a KT Plan. (The Project Lead kept saying that they have never had KT plans before and all that for which I offered I will draft a template and even that was rejected!). Email from us to them - Can you please share with us the KT Plan? Response from them - Not sure what is expected from my side? The KT is planned for tomorrow from 11 am onwards where Functional knowledge of existing ETL Data migration package will be shared. How do you handle such a client? Most likely what is going to happen is this. The person who is giving the KT will say that I have given complete Knowledge Transfer and we will go back and say that "No, this was not covered. For this, they provided an overview alone and left it at that!" and so on... My Project Lead also did not respond to that email. He just said that the meeting is scheduled to happen at 11 AM (basically repeating whatever the email said and left for the day!). What could I possibly do? PS: Look for another job is a very helpful answer, but I am not looking for it. :-)

    Read the article

  • Apache redirecting: reason unknown

    - by Sinan
    I have a simple PHP script. The script is not important; it just prints out $_SERVER. When I request a URL like www.server.com/?ref=bar everything is fine. However, if the request contains something like www.server.com/?ref=http://www.test.com (?ref=http%3A%2F%2Fwww.test.com) the server redirects to 403.shtml. There is no redirect for http://x, but it redirects for http://x.y. As far as I can understand, somehow the server doesn't like "http://x". It always redirects to 403.shtml when there is a valid query string in the form of a valid URL. My .htaccess file is the same on both my server and my local test server, and the local test server behaves as expected (no redirects), so I don't think it is related to .htaccess. I'm on a shared host on Hostgator. Can anyone help? Edit: Here's the .htaccess file:
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?$1 [L,QSA]
    When there is an http://xx.x it redirects to 403 even if there is a physical file. However, if I remove the .htaccess the redirect to 403 also disappears. But I need the above .htaccess file. Is there a way to get around this?

    Read the article

  • Does a team of developers need a manager?

    - by Amadiere
    Background: I'm currently part of a team of four: 1 manager, 1 senior developer and 2 developers. We do a range of bespoke in-house systems / projects (e.g. 6-8 weeks) for an organisation of around 3500 staff, as well as all the maintenance and support required for the systems that have been created before. There are not enough of us to do all the work that is potentially coming our way; we're understaffed. Management acknowledge this, but budget restraints limit our ability to recruit additional members to the team (even if we make the salary back in savings). The Change: This leaves us where we are now. Our manager is due to leave his role for pastures new, leaving a vacancy in the team. Management are using this opportunity to restructure our team, which would see the team manager role replaced by another developer and another senior developer. Their logic being that we need more developers, so here's a way of funding it (one of the roles is partially funded from another vacant post). The team would have no direct line manager and the roles and responsibilities would be divided up between the seniors and the (relatively new to post) service manager (a non-technical role with little-to-no development knowledge/experience whose focus is shared amongst a number of other teams and individuals), who would be our next actual manager up the food chain. I guess the final question is: Is it possible to run a development team without a manager? Have you had experience of this? And what things could go wrong / could be of benefit to us? I'd ideally like to "see the light" and the benefits of doing things this way, or come up with some points for argument against it.

    Read the article

  • Craftsmanship Tour Day 1: Didit Long Island

    - by Liam McLennan
    On Monday I was at Didit for my first ever craftsmanship visit. Didit seem to occupy a good part of a non-descript building in Rockville Centre Long Island. Since I had arrived early from Seattle I had some time to kill, so I stopped at the Rockville Diner on the corner of N Park Ave and Sunrise Hwy. I thoroughly enjoyed the pancakes and the friendly service. After walking to the Didit office I met Rik Dryfoos, the Didit Engineering Manager who organised my visit, and got the introduction to Didit and the work they are doing. I spent the morning in the room shared by the Didit developers, who are working on some fascinating deep engineering problems. After lunch at a local Thai place I setup a webcam to record an interview with Rik and Matt Roman (Didit VP of Engineering). I had a lot of trouble with the webcam, including losing several minutes of conversation, but in the end I was very happy the result. Here are the full interviews with Rik and Matt: Interview with Rik Dryfoos Interview with Matt Roman We had a great chat, much of which is captured in the recording. It was such great conversation that I almost missed my train to Manhattan. I’m sure Didit will continue to do well with such a dedicated and enthusiastic team. I sincerely thank them for hosting me for the day. If you are looking for a true agile environment and the opportunity to work with a high quality team then you should talk to Didit.

    Read the article

  • CodePlex Daily Summary for Saturday, November 27, 2010

    CodePlex Daily Summary for Saturday, November 27, 2010Popular ReleasesMahTweets for Windows Phone: Nightly 69: Latest nightly for build 69XamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery-WPF v1.2 (Runtime, Source): This is the first release of popular XamlQuery library for WPF. XamlQuery has already gained recognition among Silverlight developers.Math.NET Numerics: Beta 1: First beta of Math.NET Numerics. Only contains the managed linear algebra provider. Beta 2 will include the native linear algebra providers along with better documentation and examples.WatchersNET.SiteMap: WatchersNET.SiteMap 01.03.02: Whats NewNew Tax Filter, You can now select which Terms you want to Use.Minecraft GPS: Minecraft GPS 1.1: 1.1 Release New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-11-25: Code samples for Visual Studio 2010Typps (formerly jiffycms) wysiwyg rich text HTML editor for ASP.NET AJAX: Typps 2.9: -When uploading files (not images), through the file uploader and the multi-file uploader, FileUploaded and MultiFileUploaded event handlers were reporting an empty event argument, this is fixed now. -Fixed also url field not updating when uploading a file ( not image)Wii Backup Fusion: Wii Backup Fusion 0.8.5 Beta: - WBFS repair (default) options fixed - Transfer to image fixed - Settings ui widget names fixed - Some little bug fixes You need to reset the settings! Delete WiiBaFu's config file or registry entries on windows: Linux: ~/.config/WiiBaFu/wiibafu.conf Windows: HKEY_CURRENT_USER\Software\WiiBaFu\wiibafu Mac OS X: ~/Library/Preferences/com.wiibafu.wiibafu.plist Caution: This is a BETA version! Errors, crashes and data loss not impossible! Use in test environments only, not on productive syste...Minemapper: Minemapper v0.1.3: Added process count and world size calculation progress to the status bar. Added View->'Status Bar' menu item to show/hide the status bar. Status bar is automatically shown when loading a world. Added a prompt, when loading a world, to use or clear cached images.SQL Monitor: SQL Monitor 1.4: 1.added automatically load sql server instances 2.added friendly wait cursor 3.fixed problem with 4.0 fx 4.added exception handlingSexy Select: sexy select v0.4: Changes in v0.4 Added method : elements. This returns all the option elements that are currently added to the select list Added method : selectOption. This method accepts two values, the element to be modified and the selected state. (true/false)Deep Zoom for WPF: First Release: This first release of the Deep Zoom control has the same source code, binaries and demos as the CodeProject article (http://www.codeproject.com/KB/WPF/DeepZoom.aspx).Simple Service Locator: Simple Service Locator v0.12: The Simple Service Locator is an easy-to-use Inversion of Control library that is a complete implementation of the Common Service Locator interface. It solely supports code-based configuration and is an ideal starting point for developers unfamiliar with larger IoC / DI libraries New features in this release Collections that are registered using RegisterAll<T> can now be injected using automatic constructor injection. A new RegisterAll<T>(params T[]) method overload is added that allows ea...BlogEngine.NET: BlogEngine.NET 2.0 RC: This is a Release Candidate version for BlogEngine.NET 2.0. The most current, stable version of BlogEngine.NET is version 1.6. 
Find out more about the BlogEngine.NET 2.0 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation and the installation screencast. If you are upgrading from a previous version, please take a look at the Upgrading to BlogEngine.NET 2.0 instructions. As this ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.156: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release adds a feature for aggregating the overall metrics in a folder full of NodeXL workbooks, adds geographical coordinates to the Twitter import features, and fixes a memory-related bug. See the Complete NodeXL Release History for details. Please Note: There is a new option in the setup program to install for "Just Me" or "Everyone." Most people...VFPX: FoxBarcode v.0.11: FoxBarcode v.0.11 - Released 2010.11.22 FoxBarcode is a 100% Visual FoxPro class that provides a tool for generating images with different bar code symbologies to be used in VFP forms and reports, or exported to other applications. Its use and distribution is free for all Visual FoxPro Community. Whats is new? Added a third parameter to the BarcodeImage() method Fixed some minor bugs History FoxBarcode v.0.10 - Released 2010.11.19 - 85 Downloads Project page: FoxBarcodeDotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 1.1.0.5: What is new in DotNetAge 1.1.0.5 ?Document Library features and template added. Resolve issues of templates Improving publishing service performance Opml support added. What is new in DotNetAge 1.1 ? D.N.A Core updatesImprove runtime performance , more stabilize. The DNA core objects model added. Personalization features added that allows users create the personal website, manage their resources, store personal data DynamicUIFixed the PageManager could not move page node bug. ...ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.3.1 and demos: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form and Pager tested on mozilla, safari, chrome, opera, ie 9b/8/7/6MDownloader: MDownloader-0.15.24.6966: Fixed Updater; Fixed minor bugs;WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.1: Version: 2.0.0.1 (Milestone 1): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...New ProjectsCommunity Megaphone for Windows Phone: The Official CommunityMegaphone application for Windows Phone. Releases are in XAP format, allowing you to upload the compiled application into a device using the Windows Phone Application Deployment tool. 
Download the source code to learn how to build Windows Phone applications.Group positioning GPS: this is a project for a subject in the university geographic information systems, its about a group of gadjets that can know the position of the other devices in the same group.IIS HTTPS Binder: On IIS 7 that you can not create more than one HTTPS binding, even though you have more then one SSL certificate and you need HTTPS binding on different hosted websites. If you have one IP address you can bind only one SSL to chosen website. This small application can fix this.Inspired Faith Now Playing: A ClickOnce-deployed desktop application that sits in your application tray and notifies you when any of your favorite Messianic Jewish and Christian artists or songs play on the Inspired Faith online radio station (http://inspiredfaith.org)MA Manager: The goal of this software is to help manage a martial arts school through multiple clients and platforms. This allows instructors to easily track students progress overtime.MathModels: This project contains commonly used algorithm implementations in C# .netMyFlickr API: MyFlickr API is a library for developers allows them to call Flickr API from any .Net Application.NerdOnRails.DynamicProxy: a small dynamic-proxy implementation using the new dynamic object that was introduced with .Net 4.NerdOnRails.Injector: a small dependency injection framework for .netNHyperV: NHyperV is a C# Hyper-V programming model.Power Scheme Switcher: This is a very simple utility that exposes an icon in the system tray and allows you to quickly change the Power Plan Scheme from there. It is developed in c# with Vs 2010(WinForm)rcforms: Custom sharepoint forms exampleRobo Commander: This is a Lego NXT commander developed under Visual Studio. This console will have basic commands and uses the NXT++ libraries. Shared Genomics Project - Workbench Codebase: The Shared Genomics workbench enables a diverse user group of researchers to explore the associations between genetic and other factors in their datasets. It provides a graphical user interface to the analysis functions published in a sister Codeplex project i.e. MPI Codebase.SharePoint 2010 PowerShell v2 reference: This project hosts a complete PowerShell v2 reference for SharePoint 2010.Social PowerFlow: Experiments with PowerWF and the social network - integrating Windows Powershell and Windows Workflow Foundation with Facebook and TwitterTweetSayings: Twitter SayingsXamlQuery/WPF - The Write Less, Do More, WPF Library: XamlQuery/WPF is a lightweight yet powerful library that enables rapid development in WPF. It simplifies several tasks like page/window traversing; finding controls by name, type, style, property value or position in control tree; event handling; animating and much more.XmlDSigEx XML Digital Signature Library: The XmlDSigEx library is an alternative to using the SignedXml classes in the .Net Framework. It addresses some of the shortcomings of the standard types particularly in regards to canonicalisation and the enveloped transform. It is currently under development.Yoi's Online Project: This is online storage for Yovi's project?????? ???? ???: ?????? ???? ?? ????? ??? ?????? ? ??????? ?? ???? ??? ????

    Read the article

  • Getting Started with TypeScript – Classes, Static Types and Interfaces

    - by dwahlin
    I had the opportunity to speak on different JavaScript topics at DevConnections in Las Vegas this fall and heard a lot of interesting comments about JavaScript as I talked with people. The most frequent comment I heard from people was, “I guess it’s time to start learning JavaScript”. Yep – if you don’t already know JavaScript then it’s time to learn it. As HTML5 becomes more and more popular the amount of JavaScript code written will definitely increase. After all, many of the HTML5 features available in browsers have little to do with “tags” and more to do with JavaScript (web workers, web sockets, canvas, local storage, etc.). As the amount of JavaScript code being used in applications increases, it’s more important than ever to structure the code in a way that’s maintainable and easy to debug. While JavaScript patterns can certainly be used (check out my previous posts on the subject or my course on Pluralsight.com), several alternatives have come onto the scene such as CoffeeScript, Dart and TypeScript. In this post I’ll describe some of the features TypeScript offers and the benefits that they can potentially offer enterprise-scale JavaScript applications. It’s important to note that while TypeScript has several great features, it’s definitely not for everyone or every project especially given how new it is. The goal of this post isn’t to convince you to use TypeScript instead of standard JavaScript….I’m a big fan of JavaScript. Instead, I’ll present several TypeScript features and let you make the decision as to whether TypeScript is a good fit for your applications. TypeScript Overview Here’s the official definition of TypeScript from the http://typescriptlang.org site: “TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.” TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool or by using a variety of editors/tools. TypeScript offers several key features. First, it provides built-in type support meaning that you define variables and function parameters as being “string”, “number”, “bool”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting class and module definitions and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Visual Studio, Sublime Text, Emacs, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at http://www.typescriptlang.org/#Download. 
TypeScript can also be used with existing JavaScript frameworks such as Node.js, jQuery, and others and even catch type issues and provide enhanced code help. Special “declaration” files that have a d.ts extension are available for Node.js, jQuery, and other libraries out-of-the-box. Visit http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#samples%2fjquery%2fjquery.d.ts for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn’t passed to a parameter that expects a number. Although declaration files certainly aren’t required, TypeScript’s support for declaration files makes it easier to catch issues upfront while working with existing libraries such as jQuery. In the future I expect TypeScript declaration files will be released for different HTML5 APIs such as canvas, local storage, and others as well as some of the more popular JavaScript libraries and frameworks. Getting Started with TypeScript To get started learning TypeScript visit the TypeScript Playground available at http://www.typescriptlang.org. Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it’s compiled. Here’s an example of the TypeScript playground in action:   One of the first things that may stand out to you about the code shown above is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container which helps tremendously with re-use and maintainability especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy especially if you come from an object-oriented programming background. An example of the Greeter class shown in the TypeScript Playground is shown next: class Greeter { greeting: string; constructor (message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } } Looking through the code you’ll notice that static types can be defined on variables and parameters such as greeting: string, that constructors can be defined, and that functions can be defined such as greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront before even running the code. Many types are supported including primitive types like string, number, bool, undefined, and null as well as object literals and more complex types such as HTMLInputElement (for an <input> tag). Custom types can be defined as well. The JavaScript output by compiling the TypeScript Greeter class (using an editor like Visual Studio, Sublime Text, or the tsc.exe compiler) is shown next: var Greeter = (function () { function Greeter(message) { this.greeting = message; } Greeter.prototype.greet = function () { return "Hello, " + this.greeting; }; return Greeter; })(); Notice that the code is using JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to take the variables and functions out of the global JavaScript scope. This is important feature that helps avoid naming collisions between variables and functions. 
In cases where you’d like to wrap a class in a naming container (similar to a namespace in C# or a package in Java) you can use TypeScript’s module keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.   module AcmeCorp { export class Greeter { greeting: string; constructor (message: string) { this.greeting = message; } greet() { return "Hello, " + this.greeting; } } } var greeter = new AcmeCorp.Greeter("world"); In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript’s extends keyword. The following code shows an example of using inheritance to define two report objects:   class Report { name: string; constructor (name: string) { this.name = name; } print() { alert("Report: " + this.name); } } class FinanceReport extends Report { constructor (name: string) { super(name); } print() { alert("Finance Report: " + this.name); } getLineItems() { alert("5 line items"); } } var report = new FinanceReport("Month's Sales"); report.print(); report.getLineItems();   In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript’s extends keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class’s print() method and adds its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the super() call. TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. The following code shows an example of an interface named Thing (from the TypeScript samples) and a class named Plane that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.   interface Thing { intersect: (ray: Ray) => Intersection; normal: (pos: Vector) => Vector; surface: Surface; } class Plane implements Thing { normal: (pos: Vector) =>Vector; intersect: (ray: Ray) =>Intersection; constructor (norm: Vector, offset: number, public surface: Surface) { this.normal = function (pos: Vector) { return norm; } this.intersect = function (ray: Ray): Intersection { var denom = Vector.dot(norm, ray.dir); if (denom > 0) { return null; } else { var dist = (Vector.dot(norm, ray.start) + offset) / (-denom); return { thing: this, ray: ray, dist: dist }; } } } }   At first glance it doesn’t appear that the surface member is implemented in Plane but it’s actually included automatically due to the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without having to explicitly write the code as with normal and intersect. TypeScript has additional language features but defining static types and creating classes, modules, and interfaces are some of the key features it offers. So is TypeScript right for you and your applications? That’s a not a question that I or anyone else can answer for you. You’ll need to give it a spin to see what you think. 
In future posts I’ll discuss additional details about TypeScript and how it can be used with enterprise-scale JavaScript applications. In the meantime, I’m in the process of working with John Papa on a new TypeScript course for Pluralsight that we hope to have out in December of 2012.

    Read the article

  • .NET access to the GPU for compute purposes

    - by Daniel Moth
    In the distant past I talked about GPGPU and Microsoft's then approach of DirectCompute. Since then of course we now have C++ AMP coming out with Visual Studio 11, so there is a mainstream, easier way for developers to access the GPU for compute purposes, using C++. The question occasionally arises of how a .NET developer can access the GPU for compute purposes from their C# (or VB) code. The answer is by interoping from the managed code to a native DLL and, in the native DLL, using C++ AMP. As a long-term .NET developer myself, I can tell you this is straightforward. Sure, there could have been a managed wrapper for C++ AMP, but honestly that is the reason we have interop – it doesn't make much sense to invest resources to solve a problem that is already solved (most developer customers would prefer investments in other areas of Visual Studio!). Besides, interoping from C# to C++ is much easier than interoping to some of the other older approaches of GPGPU programming ;-) To help you get started with the interop approach, Igor Ostrovsky has previously shared the "Hello World" version of interoping from C# to C++ AMP in his blog post: How to use C++ AMP from C# …we then were asked specifically about how to interop from C# to C++ AMP in a Metro style application on Windows 8, so Igor delivered again with this post: How to use C++ AMP from C# using WinRT Have fun! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Is there a modified LGPL license that allows static linking?

    - by Petr Pudlák
    The LGPL requires that if a program uses an LGPL-ed library, users must be able to re-link the program with a different version of the library: ... d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. ... However, in some cases this can pose considerable difficulties. In particular, Haskell programs are almost always statically compiled. Moreover, the compiler does cross-module optimizations, so it's very hard to satisfy this condition. (See this link at the Haskell Wiki.) Therefore, I'm looking for a standard LGPL-like license that wouldn't require the possibility of re-linking. Some projects use their own modification of the LGPL, for example wxWidgets. But I'd rather use a standard license that is somewhat more official, perhaps checked by some law experts, and (L)GPL compatible. Is there one like that? (I'd also be interested to know if there are any unforeseen consequences of such a modification of the LGPL.)

    Read the article

  • BPM PS6 video showing process lifecycle in more detail (30min) by Mark Nelson

    - by JuergenKress
    If the five minute video I shared last week has whetted your appetite for more, then this might be just what you are looking for! The same international team that made that video - Andrew Dorman, Tanya Williams, Carlos Casares, Joakim Suarez and James Calise - has also created a thirty minute version that walks through things in much more detail and shows you, from the perspective of the various business stakeholders involved in process modeling, exactly how BPM PS6 supports the end-to-end process lifecycle. The video centres around a Retail Leasing use case, and follows how Joakim the Business Analyst, Pablo the Process Owner, and James the Process Analyst take the process from conception to runtime, solely through BPM Composer, without the need for IT or the use of JDeveloper. Joakim, the Business Analyst, models the process, designs the user interaction forms, and creates business rules; Pablo, the Process Owner, reviews the process documentation and tests the process using the new ‘Process Player’; James, the Process Analyst, analyses the process and identifies potential bottlenecks using ‘Process Simulation’. Read the full article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Today's Links (6/21/2011)

    - by Bob Rhubart
    Keeping your process clean: Hiding technology complexity behind a service | Izaak de Hullu
    Izaak de Hullu offers a solution to "technology pollution like exception handling, technology adapters and correlation."
    WebLogic Weekly for June 20th, 2011 | James Bayer
    James Bayer presents "a round-up what has been going on in WebLogic over the past week."
    Publish to EDN from Java & OSB with JMS | Edwin Biemond
    Busy blogger and Oracle ACE Edwin Biemond shows "how you can publish events from Java and OSB."
    How is HTML 5 changing web development? | Audrey Watters - O'Reilly Radar
    In this interview, OSCON speaker Remy Sharp discusses HTML5's current usage and how it could influence the future of web apps and browsers.
    SOA Governance Book | SOA Partner Community Blog
    Information on how those in EMEA can win a free copy of SOA Governance: Governing Shared Services On-Premise and in the Cloud by Thomas Erl, et al.
    Keeping The Faith on 11i | Floyd Teter
    "The iceberg is melting, the curtain is coming down, the lights are dimming, the fat lady is singing," says Oracle ACE Director Floyd Teter.
    Configure and test JMS based EDN in SOA Suite 11g | Edwin Biemond
    Oracle ACE Edwin Biemond shows you "how to configure EDN-JMS and how to publish an Event to this JMS Queue."
    Choosing the best way for SOA Suite and Oracle Service Bus to interact with the Oracle Database | Lucas Jellema
    Oracle ACE Director Lucas Jellema illustrates "over 20 different interaction channels" covering "a fairly wild variation of attributes, required skills, productivity and performance characteristics."
    Oracle Data Integrator 11.1.1.5 Complex Files as Sources and Targets | Alex Kotopoulis
    ODI 11.1.1.5 adds the new Complex File technology for use with file sources and targets. The goal is to read or write file structures that are too complex to be parsed using the existing ODI File technology.
    Java Spotlight Podcast Episode 35: JVM Performance and Quality
    Featuring an interview with Vladimir Ivanov, Ivan Krylov, and Sergey Kuksenko on the JDK 7 Java Virtual Machine performance and quality. Also includes the Java All Star Developer Panel featuring Dalibor Topic, Java Free and Open Source Software Ambassador, and Alexis Moussine-Pouchkine, Java EE Developer Advocate.

    Read the article

  • Latest News on Service, Field Service and Depot Repair Products

    - by LuciaC
    Service and Depot Repair Customer Advisory Boards (CAB): In November 2012 the Service and Depot Repair CAB joined together for a combined meeting at Oracle HQ in Redwood Shores, California to discuss all the latest news in the Oracle Service, Field Service and Depot Repair products. Over four days attendees shared their experiences with implementing and using these EBS CRM products and heard details of recent enhancements and future product plans direct from Development. You can access all the Oracle presentations via Doc ID 1511768.1. Here are just some of the highlights:
    Field Service: Next Generation Dispatch Center; Endeca Integration; Case Study: Oracle Sun Field Service implementation
    Mobile Field Service: New capabilities for technician-facing applications
    Service: Integration with Oracle Projects; New Teleservice enhancements
    Spares Management: Supplier Warranty; External Repair Execution
    Oracle Knowledge (Inquira) Introduction for Service Organizations
    If you weren't at the CAB, take a look at these presentations for great information about what's new and what's coming up in these products.
    12.1.3++ Features for Field Service, Mobile Field Service, Spares Management, FSTP & Advanced Scheduler: In June 2012 the R12.1.3++ patches were released for Field Service, Mobile Field Service, FSTP and Advanced Scheduler. These patches contain new and updated functionality for these CRM Service suite modules. New functionality includes:
    Field Service/FSTP/MFS: Support for Transfer Parts across subinventories in different organizations; Validation to ensure Installed Item matches Returned Item; MFS Wireless - Support for Special Address Creation; MFS Wireless - Enhanced Debrief Flow
    Advanced Scheduler: Scheduler UI - Display of Spares Sourcing Information; Auto Commit (Release) Tasks by Territory; Dispatch Center UI - Display Spare Parts Arrival Information
    Spares Management: Enhancements to the Task Reassignment Process; Enhancements to the Parts Requirements UI
    Supply Chain: Enhancements to allow filtering of ship methods from source location by distance.
    Check the following notes for more details and relevant patch numbers:
    Doc ID 1463333.1 - Oracle Field Service Release Notes, Release 12.1.3++
    Doc ID 1452470.1 - Field Service Technician Portal 12.1.3++ New Features
    Doc ID 1463066.1 - Oracle Advanced Scheduler Release Notes, Release 12.1.3++
    Doc ID 1463335.1 - Oracle Spares Management Release Notes, Release 12.1.3++
    Doc ID 1463243.1 - Oracle Mobile Field Service Release Notes, Release 12.1.3++

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post Never Use a Warning When you Mean Undo by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: "Never use a warning when you mean undo." And when a user is deleting their work, you always mean undo. However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared, part of a flow elsewhere, or undoing something committed to the database (a rollback I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated out in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may risk that data, not always within the application itself (at browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When there is no undo option possible, use warning messages to communicate potentially dangerous or irrecoverable actions or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. Here's a couple of examples. A great article that got me thinking. Let's see more like that. Let's not forget there's more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article

  • How can we make agile enjoyable for developers that like to personally, independently own large chunks from start to finish

    - by Kris
    We’re roughly midway through our transition from waterfall to agile using scrum; we’ve changed from large teams in technology/discipline silos to smaller cross-functional teams. As expected, the change to agile doesn’t suit everyone. There are a handful of developers that are having a difficult time adjusting to agile. I really want to keep them engaged and challenged, and ultimately enjoying coming to work each day. These are smart, happy, motivated people that I respect on both a personal and a professional level. The basic issue is this: Some developers are primarily motivated by the joy of taking a piece of difficult work, thinking through a design, thinking through potential issues, then solving the problem piece by piece, with only minimal interaction with others, over an extended period of time. They generally complete work to a high level of quality and in a timely way; their work is maintainable and fits with the overall architecture. Transitioning to a cross-functional team that values interaction and shared responsibility for work, and delivery of working functionality within shorter intervals, the teams evolve such that the entire team knocks that difficult problem over. Many people find this to be a positive change; someone that loves to take a problem and own it independently from start to finish loses the opportunity for work like that. This is not an issue with people being open to change. Certainly we’ve seen a few people that don’t like change, but in the cases I’m concerned about, the individuals are good performers, genuinely open to change, they make an effort, they see how the rest of the team is changing and they want to fit in. It’s not a case of someone being difficult or obstructionist, or wanting to hoard the juiciest work. They just don’t find joy in work like they used to. I’m sure we can’t be the only place that hasn’t bumped up on this. How have others approached this? If you’re a developer that is motivated by personally owning a big chunk of work from end to end, and you’ve adjusted to a different way of working, what did it for you?

    Read the article

  • New York Coherence Special Interest Group, Jan 13 2011

    - by ruma.sanyal
    Please join us for our next exciting event. We are pleased to announce that Aleksander Seovic, Craig Blitz and Madhav Sathe will be presenting to our group. Presentation details are provided below.
    Time: 3:00pm - 6:00pm ET
    Where: Oracle Office, Room 30076, 520 Madison Avenue, 30th Floor, NY
    We will be providing snacks and beverages. Register! - Registration is required for building security.
    Presentations:
    Getting the Most out of your Coherence Cluster with Oracle Enterprise Manager - Madhav Sathe, Principal Product Manager (Oracle)
    How To Build a Coherence Practice - Craig Blitz, Senior Principal Product Manager (Oracle): Congratulations on your decision to buy Oracle Coherence. We believe you have chosen an excellent product. Now the hard work begins. To help you get the most out of Coherence from both a project and enterprise perspective, this talk will introduce you to resources available from Oracle and through the Coherence ecosystem. The talk will also discuss best organizational practices you can implement to ensure success with Coherence. The speaker will use his significant experience with customers' Coherence deployments to show what works and what doesn't in practice.
    Coherence in the Cloud - Aleksandar Seovic, Founder and Managing Director (S4HC): The Amazon Web Services cloud provides a great and affordable foundation for the next generation of scalable web applications. Application servers, load balancers, and scalable storage can be provisioned in a matter of minutes and used for pennies an hour. However, the AWS cloud also brings a set of new architectural challenges, such as transient file systems and dynamically assigned IP addresses. In this session we will look at a real-world example of how Coherence can be used to address some of these challenges and show why the combination of the AWS cloud and Coherence has great synergy.

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used to not just version the files, but also to replicate them to other machines for redundancy and transfer of IP. Lets suppose, for the sake of example, that a goal is to provide good performance if it was handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.
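    One strategy worth testing early, whichever SCM is chosen, is to avoid putting millions of XML files into a handful of flat directories: both filesystems and SCM status/commit operations tend to degrade badly on huge directories. A common approach is to derive a short hash from the transaction id and fan files out into a small directory tree. The helper below is only an illustrative sketch (the function names and layout are made up, not taken from Git or Mercurial):

        import hashlib
        import os

        def shard_path(root, order_id):
            # Map an order id to a stable two-level directory, e.g. root/3f/a2/PO-12345.xml
            digest = hashlib.sha1(order_id.encode("utf-8")).hexdigest()
            return os.path.join(root, digest[:2], digest[2:4], order_id + ".xml")

        def write_order(root, order_id, xml_text):
            # Write (or overwrite) the transaction file in its sharded location
            path = shard_path(root, order_id)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w", encoding="utf-8") as f:
                f.write(xml_text)
            return path

        print(shard_path("orders", "PO-12345"))

    With a 2+2 hex fan-out there are 65,536 buckets, so even tens of millions of transactions stay at a few hundred files per directory; Git itself uses the same trick internally, fanning loose objects out over 256 subdirectories.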

    Read the article

  • The Mystery of the Vanishing Disk Space

    - by Oddthinking
    My disk space is dwindling by about 2GB a day! I only have a few more days before I run out of space.
        $ df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda4       143G  126G   11G  93% /
        udev            491M  4.0K  491M   1% /dev
        tmpfs           200M  696K  199M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            499M  144K  499M   1% /run/shm
        /dev/sda2       1.9G  580M  1.2G  33% /tmp
        /dev/sda1        92M   29M   58M  33% /boot
    I have been searching for the biggest directories/log files, deleting and compressing. But I am still losing the war. Finally, I realised I have a big misunderstanding:
        julian@server1:~$ sudo du -h / | tail -n 1
        16G /
    All of my files in / only add up to 16 GB. That leaves 110 GB unaccounted for! Clearly I have a misunderstanding: I thought the '/dev/sda4' line represented all the files visible from '/'. What should I be reading to understand where the other storage has gone? More details: I have an Ubuntu 11.10 server that was set up by data-center staff. It is running my own code (which is fairly prolific with log files, but otherwise doesn't store much stuff on the drive), duplicity for backups (which tends to store a lot of signature files), and various other standard services, like Apache, nagios, etc. They are very lightly used. It has been up for about 4 months without a reboot. I lied about the du output (simplified it for effect). It also complained about not being able to access GVFS and the du process's own resources. I believe they are irrelevant:
        du: cannot access `/home/julian/.gvfs': Permission denied
        du: cannot access `/proc/10841/task/10841/fd/4': No such file or directory
        du: cannot access `/proc/10841/task/10841/fdinfo/4': No such file or directory
        du: cannot access `/proc/10841/fd/4': No such file or directory
        du: cannot access `/proc/10841/fdinfo/4': No such file or directory
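    To see the two kinds of measurement side by side, here is a small hypothetical helper (not from the question, only a sketch): it computes filesystem-level usage the way df does, via statvfs, and compares it with a du-style walk of the directory tree. Space that the filesystem counts as used but that no walk of '/' can reach shows up as exactly the kind of gap described above.

        import os

        def df_style_used(path="/"):
            # What df reports: block accounting straight from the filesystem
            st = os.statvfs(path)
            return (st.f_blocks - st.f_bfree) * st.f_frsize

        def du_style_used(path="/"):
            # What du reports: allocated blocks of every file reachable by walking the tree
            total = 0
            for root, dirs, files in os.walk(path, onerror=lambda err: None):
                # stay on one filesystem, like `du -x`
                dirs[:] = [d for d in dirs if not os.path.ismount(os.path.join(root, d))]
                for name in files:
                    try:
                        total += os.lstat(os.path.join(root, name)).st_blocks * 512
                    except OSError:
                        pass
            return total

        gb = 1024.0 ** 3
        print("df-style used: %.1f GB" % (df_style_used() / gb))
        print("du-style used: %.1f GB" % (du_style_used() / gb))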

    Read the article

  • Adding a CMS to an existing Magento shop

    - by user6341
    I am working on a project for 3 niche stores built on Magento (using Magento's multi-store function) that each get roughly 50k unique visitors a day. The sites don't currently have a blog or forum or any social networking aspects. I would like to add a CMS to each site that can be centrally run, and would like it to take over the front-end content from Magento. I would also like it to maintain an online blog/publication of sorts with videos, articles, and the like, with privileges to edit the content given to a dozen or so people at different levels. I want to add a forum to each site that is fairly robust, and to possibly add some social networking aspects down the road, so extensibility and the available plugins/mods in each CMS are important. Other than shared login between the forums, blog/publication and store, I would like to be able to integrate some content from the forums and blog/publication into the store as well. After researching this a bit, I am inclined towards Drupal, but I haven't found any modules to integrate it with Magento. Also, since the blog content will be done by about a dozen non-technical people, I want something that is very easy to work with. Lastly, since the sites get a good amount of traffic, speed and security are very important. What CMS would you recommend integrating in this context? I am deciding between Drupal, WordPress and MODX, and also Plone. Thanks.

    Read the article

  • different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Let's say a user is visiting from an iPad; I take him directly to the page of iPad wallpapers, the user selects the iPad version, and I take the user to the gallery of wallpapers where the user can select and download any wallpaper. Every wallpaper is the required resolution; I have my reasons for doing this. Now the thing is, there are different resolution versions of an image appearing in 5 different sections of my website, each having their own view page. Now there is only one record in the DB table for the image, and based on my consistent naming convention for the images, I pick the required image. This means that when 5 different pages are generated in 5 categorized sections of the website, then due to the shared DB record the keywords, the titles and every single detail of the 5 pages are the same, besides the resolution of the image and the section-specific details that the page has. And yeah, the pages also have different paths, like:
        wallpapers.com\ipad-1\cars\Ferrari-dino.html
        wallpapers.com\ipad-2\cars\Ferrari-dino.html
        wallpapers.com\ipad-3\cars\Ferrari-dino.html
        wallpapers.com\ipad-4\cars\Ferrari-dino.html
        wallpapers.com\ipad-5\cars\Ferrari-dino.html
    Now this is my scenario. How do search engines see it and how do they rank it? Is it a good, normal or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • VGA monitor won't detect on NVIDIA Quadro NVS 4200M

    - by tanmayk
    I am on a Dell Latitude E6420 and just installed 12.04. When I plug in my external monitor's VGA cable, the laptop doesn't detect any new monitor. Here's the output of my lspci:
        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:16.3 Serial controller: Intel Corporation 6 Series/C200 Series Chipset Family KT Controller (rev 04)
        00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
        00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b4)
        00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
        00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b4)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
        00:1f.0 ISA bridge: Intel Corporation QM67 Express Chipset Family LPC Controller (rev 04)
        00:1f.2 RAID bus controller: Intel Corporation 82801 Mobile SATA Controller [RAID mode] (rev 04)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [Quadro NVS 4200M] (rev a1)
        01:00.1 Audio device: NVIDIA Corporation HDMI Audio stub (rev a1)
        03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6205 (rev 34)
        0b:00.0 SD Host controller: O2 Micro, Inc. Device 8221 (rev 05)
        0b:00.1 Mass storage controller: O2 Micro, Inc. Device 8231 (rev 03)
    I tried installing the Quadro NVS 4200M from the nvidia website but got stuck in TTY after the installation finished. Any ideas as to how I can get the external monitor to work? Thanks!

    Read the article

  • Functional programming and stateful algorithms

    - by bigstones
    I'm learning functional programming with Haskell. In the meantime I'm studying Automata theory and as the two seem to fit well together I'm writing a small library to play with automata. Here's the problem that made me ask the question. While studying a way to evaluate a state's reachability I got the idea that a simple recursive algorithm would be quite inefficient, because some paths might share some states and I might end up evaluating them more than once. For example, here, evaluating reachability of g from a, I'd have to exclude f both while checking the path through d and c: So my idea is that an algorithm working in parallel on many paths and updating a shared record of excluded states might be great, but that's too much for me. I've seen that in some simple recursion cases one can pass state as an argument, and that's what I have to do here, because I pass forward the list of states I've gone through to avoid loops. But is there a way to pass that list also backwards, like returning it in a tuple together with the boolean result of my canReach function? (although this feels a bit forced) Besides the validity of my example case, what other techniques are available to solve this kind of problems? I feel like these must be common enough that there have to be solutions like what happens with fold* or map. So far, reading learnyouahaskell.com I didn't find any, but consider I haven't touched monads yet. (if interested, I posted my code on codereview)
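    For what it's worth, the shape being asked about (pass the visited set in, get the possibly-grown set back out together with the result) can be sketched without any mutation at all. The sketch below is in Python rather than Haskell, purely to show the data flow; the function name can_reach and the graph encoding are made up for the illustration. In Haskell the same pattern is usually written by threading the set through as an extra argument and return value, which is exactly what the State monad packages up.

        # Graph as adjacency lists; frozenset and tuple returns keep this purely functional.
        graph = {
            "a": ["b", "d"],
            "b": ["c"],
            "c": ["f"],
            "d": ["f"],
            "f": ["g"],
            "g": [],
        }

        def can_reach(graph, start, goal, seen=frozenset()):
            # Return (reachable, seen'): the result plus the enlarged visited set,
            # so states shared by several paths are only ever expanded once.
            if start == goal:
                return True, seen
            if start in seen:
                return False, seen          # already explored, or on the current path
            seen = seen | {start}
            for nxt in graph.get(start, []):
                found, seen = can_reach(graph, nxt, goal, seen)
                if found:
                    return True, seen
            return False, seen

        print(can_reach(graph, "a", "g"))   # (True, ...)  f is only expanded once
        print(can_reach(graph, "b", "d"))   # (False, ...)

    In Haskell the corresponding type would be an explicit Set Node -> (Bool, Set Node), or equivalently State (Set Node) Bool, which is the same "return it in a tuple" idea the question suggests, just with the plumbing hidden.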

    Read the article

  • How to protect a peer-to-peer network from inappropriate content?

    - by Mike
    I’m developing a simple peer-to-peer app in .Net which should enable users to share specific content (text and picture files). As I've learned with my last question, inappropriate content can “relatively” easily be identified / controlled in a centralized environment. But what about a peer-to-peer network, what are the best methods to protect a decentralized system from unwanted (illegal) content? At the moment I only see the following two methods: A protocol (a set of rules) defines what kind of data (e.g. only .txt and jpg-files, not bigger than 20KB etc.) can be shared over the p2p-network and all clients (peers) must implement this protocol. If a peer doesn’t, it gets blocked by other peers. Pro: easy to implement. Con: It’s not possible to define the perfect protocol (I think eMail-Spam filters have the same problem) Some kind of rating/reputation system must be implemented (similar to stackoverflow), so “bad guys” and inappropriate content can be identified / blocked by other users. Pro: Would be very accurate. Con: Would be slow and in my view technically very hard to implement. Are there other/better solutions? Any answer or comment is highly appreciated.

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host. This means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:
    Config 1: This works pretty well but it does force double authentication if you visit http://site.com - once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:
        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]
        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user
    Config 2: If I add this to the top of the file, it works a lot better in that it will switch to SSL before prompting for the password:
        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com
    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it will redirect to https://site.com/. So it is forcing SSL without a double login, but it is not properly forwarding non-SSL resources to their SSL counterparts. Regarding the first config, Insyte mentioned: "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no such luck finding a force-SSL config option with the Redirect directive and so have been unable to test this theory.

    Read the article
