Search Results

Search found 10194 results on 408 pages for 'raw types'.

Page 227 of 408

  • Integrating with Oracle Fusion Applications: Discovering Integration Artifacts

    - by Lionel Dubreuil
    Oracle Enterprise Repository serves as the core element of the Oracle SOA Governance solution. An industry-leading metadata repository, Oracle Enterprise Repository provides a solid foundation for delivering governance throughout the service-oriented architecture (SOA) lifecycle by acting as the single source of truth for information surrounding SOA assets and their dependencies. For Fusion Applications, the use of OER has been extended to include other integration asset types such as interface tables, and other technical information such as data models, tables, views, lookups, profile options, et cetera. E-Business Suite users familiar with iRepository or eTRM will recognize the functionality in Fusion Applications OER. Oracle Enterprise Repository for Fusion Applications provides a common catalog of technical information, searchable using many different mechanisms. Customers can locate technical information by the name, description or keyword of the information they are looking for. They can also search by the type of asset they are trying to locate and/or where the asset sits in the product taxonomy. They can also see how the asset dances in the choreography of some illustrative co-existence scenarios. These scenarios are laid out as both functional flow diagrams and technical interaction diagrams. Rajesh Raheja, software architect at Oracle, has recently posted an article on this topic: visibility and control are the key tenets of SOA governance, and the first step in integrating with Oracle Fusion Applications is to find out what integration options are available. Oracle Enterprise Repository, an industry-leading metadata repository, provides this visibility. You can find his full blog post here.

    Read the article

  • CodePlex Daily Summary for Wednesday, October 16, 2013

    Popular Releases

    - Hyper-V Management Pack Extensions 2012: HyperVMPE 2012 (1.0.1.206): RTM release.
    - FFXIV Crafting Simulator: Crafting Simulator 2.4: Added: you can now drag & drop to reorganize the sequence (right-click to remove now). Fixed a bug with Ingenuity II not being taken into consideration for quality increase.
    - C# Intellisense for Notepad++: Release v1.0.8.2: Solved scrolling problem after DocumentFormatting. Implemented "format as you type". To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.8.2: Solved scrolling problem after DocumentFormatting. Implemented "format as you type". To avoid the DLLs getting locked by the OS, use the MSI file for the installation.
    - Collection Commander for Configuration Manager 2012: CMCollCtr 1.0.0: Change log: MSI setup; UI improved; CM12 console integration; new PowerShell code snippets; Client Center integration.
    - LINQ to Twitter: LINQ to Twitter v2.1.09: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    - Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: IMPORTANT: on some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click the Unblock button if it is present in the lower right corner of the General tab of the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release, as some features of the Help File Builder have been moved into...
    - AD ACL Scanner: 1.3.2: Minor bug fixed: PowerShell 4.0 will report: Select-Object: Parameter cannot be processed because the parameter name p is ambiguous.
    - Json.NET: Json.NET 5.0 Release 7: New feature - added support for Immutable Collections. New feature - added WriteData and ReadData settings to DataExtensionAttribute. New feature - added reference and type name handling support to extension data. New feature - added default value and required support to constructor deserialization. Change - extension data is now written when serializing. Fix - added missing casts to JToken. Fix - fixed parsing large floating point numbers. Fix - fixed not parsing some ISO date ...
    - RESX Manager: ResxManager 0.2.1: FIXED: many critical bugs have been fixed. New features: error logging for improved exception handling; new toolbar; improvements to the user interface.
    - Fast YouTube Downloader: YouTube Downloader 2.2.0.
    - VidCoder: 1.5.8 Beta: Added hardware acceleration options: bicubic OpenCL scaling algorithm, QSV decoding/encoding, and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64. Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...
    - ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: Version 3.5.2: fix for setting a single value on multivalue controls; datepicker min/max date offset fix; HTML encoding for keys fix; enable Column.ClientFormatFunc to be a function call that will return a function. Version 3.5.1: fixed HTML attributes rendering; fixed loading animation rendering; CSS improvements. Version 3.5: autosize for all popups (can be turned off by calling awe.autoSize = false in js); added Parent, Paremeter extensions ...
    - Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full-screen mode. Fix a bug which prevented WPP from resetting the Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer: left-click to open, right-click to copy to the clipboard.
    - TerrariViewer: TerrariViewer v7 [Terraria Inventory Editor]: This is a complete overhaul but has the same core style. I hope you enjoy it. This version is compatible with 1.2.0.3. Please send issues to my Twitter or https://github.com/TJChap2840
    - WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. New additions: fixed the thumbnail problem for backgrounds; general clean-up and error checking. Need to get this put through the wringer, and all feedback is welcome.
    - BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes. New features: a new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube; the Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4, which includes a large number of language enhancements, bug fixes, and project deployment support. Fixed issues: fixed Biml execution for project connections, fixing a bug with Tabular Translations Editor not a...
    - Free language translator and file converter: Free Language Translator 3.4: fixes for new version look-up.
    - MoreTerra (Terraria World Viewer): MoreTerra 1.11.3: New features: new markers added for Plantera's Bulb, Heart Fruits and Gold Cache; markers now correctly display for the gems found in rock debris on the floor. Compatibility: fixed header changes found in Terraria 1.0.3.1.
    - Media Companion: Media Companion MC3.581b: Fix in place for TVDB XML issue. New: Movie - General Preferences, allow saving of ignored 'The' or 'A' to end of movie title, stored in sorttitle field; Movie - new way of cropping posters. Fixed: Movie - rename of folders/filename, caught error message; Movie - fixed bug in Save Cropped Image, only saving in pre-Frodo format if Both model selected; Movie - fixed cropped image not taking zoomed ratio into effect; Movie - separated folder renaming and file renaming functions durin...

    New Projects

    - CDEasyUI: CDEasyUI
    - Enough XamlConverter: A collection of useful XAML converters for Windows Phone and Windows 8 developers alike.
    - GeReS: Geres is a simple batch job manager for Azure, written in Python for general applicability.
    - Global Excel Automation Powershell Library: The Global Excel Automation PowerShell Library is a series of scripts to help with build deployment, application configuration, database copies and Hyper-V.
    - jean1016jabbrchang: 11
    - katrukTestProject: katruk test project
    - Local to Global Option Set Converter: Automates the task of converting a Local Option Set into a Global Option Set in Microsoft Dynamics CRM 2011.
    - Machine Cards: Machine Cards is a card playing game!
    - Microsoft Translator Portable Wrapper: A portable wrapper for the Microsoft Translator service. Can be used in various app types: desktop apps (.NET Framework 4.5), Windows Phone 8, Windows Store apps.
    - Mod.DisplayTypes: Orchard module for a URL that displays content items with a certain display type.
    - Multilingual Translator & Dictionary: The Multilingual Translator & Dictionary can translate and search meanings of words/phrases in multiple languages using the Google Translator and Glosbe APIs.
    - nDistribute: This is an attempt to build a library for synchronising data across a network of machines without the use of a predetermined central server.
    - neurogoody: js slicebox
    - NotepadXX: NotepadXX is one of the requirements to complete in Open Source. It is an open-source text editor.
    - ODTK: A toolkit for the role-playing game "Das Schwarze Auge" (Ulisses Verlag) to simplify some procedures during play. Combat overview, hero DB.
    - Photo Frame and Door Cam: A Windows service that hosts a simple digital photo frame web page that integrates with the Blue Iris NVR to show camera alerts when motion is detected.
    - Powershell XML Deployment: While working as a Windows Server technology specialist in Sweden in the outsourcing branch, I've discovered that people have a poor sense of automation.
    - PulseMonitor: this is the pulse media project
    - RentACarRESTApi: Rent A Car REST API
    - RubricaSentimentale: test
    - ScrutR - Monitor entities and notify when changes: ScrutR monitors the entities of an application and sends notification when the conditions are matched.
    - SRMongoDB: MongoDB C# … QueryBuilder.cs …
    - TP1_Quimica: uiuuu
    - Wake On LAN Gateway: A client/server solution for relaying WOL magic packets. Server runs as an IIS module or Windows service. Usage via REST service or installable Windows client.
    - Weather Forecast - Team Pixie - Telerik Academy 2012/2013: Simple weather forecast sharing website.
    - Webapplication1: WebApplication1

    Read the article

  • Improve Customer Experience with Real-Time Scheduling

    - by ruth.donohue
    Recently, my husband rearranged his busy work schedule so that he could stay home an entire afternoon to wait for the alarm company to reset the password to our alarm system, only to discover at the end of the afternoon that the field service rep wasn't going to be able to make the appointment after all. And the company asked him to reschedule and block off time for another afternoon. Needless to say, my husband wasn't happy with that experience. Unfortunately, customer experiences like this happen every day. As a business, you can't afford these types of encounters. It's too easy for your customers to turn to one of your competitors once they've reached the point of frustration. Customer experience and customer loyalty are more important than ever. So how can you prevent something like this from occurring? With the newly available Siebel Field Service Integration with Oracle Real-Time Scheduler, your service organization can:
    - Create cost-optimized plans and schedules to improve operating efficiencies
    - Deliver more accurate ETAs and shorten appointment windows
    - Minimize the impact of in-day events such as delays on site, sickness, poor weather conditions, and vehicle breakdowns
    Rather than requiring them to wait for an entire afternoon, imagine asking customers to be available for only an hour, and being able to commit to that time by working around unforeseen events and understanding the impact of delays or re-routings before they become customer issues. What would your customer experience and customer satisfaction be like then? Learn more about the Siebel Field Service Integration with Oracle Real-Time Scheduler:
    - Register for and attend the upcoming webcast on Thursday, March 10th at 8:30 AM Pacific Time
    - Read the press release, data sheet, and solution brief
    - Visit the Siebel Field Service webpage

    Read the article

  • Efficient Trie implementation for unicode strings

    - by U Mad
    I have been looking for an efficient String trie implementation. Mostly I have found code like the reference implementation in Java (per Wikipedia). I dislike these implementations for two main reasons:
    - They support only the 256 ASCII characters. I need to cover things like Cyrillic.
    - They are extremely memory inefficient. Each node contains an array of 256 references, which is 4096 bytes on a 64-bit machine in Java. Each of these nodes can have up to 256 subnodes with 4096 bytes of references each. So a full trie for every two-character ASCII string would require a bit over 1 MB. Three-character strings? 256 MB just for the arrays in nodes. And so on.
    Of course I don't intend to have all 16 million three-character strings in my trie, so a lot of space is just wasted. Most of these arrays are just null references, as their capacity far exceeds the actual number of inserted keys. And if I add Unicode, the arrays get even larger (char has 64k values instead of 256 in Java). Is there any hope of making an efficient trie for strings? I have considered a couple of improvements over these types of implementations:
    - Instead of using an array of references, I could use an array of a primitive integer type, which indexes into an array of references to nodes whose size is close to the number of actual nodes.
    - I could break strings into 4-bit parts, which would allow for node arrays of size 16 at the cost of a deeper tree.
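    For concreteness, here is a minimal sketch of one further option beyond the two listed above, assuming Java (the language of the cited reference implementation): replace the fixed 256-slot array with a small hash map per node, so memory grows with the children actually inserted and any char value, Cyrillic included, is a valid key.

        import java.util.HashMap;
        import java.util.Map;

        // Sketch only: a map-based trie node. Memory is proportional to the
        // number of children actually present, not to the alphabet size.
        class TrieNode {
            private final Map<Character, TrieNode> children = new HashMap<>(4);
            private boolean terminal; // true if an inserted key ends at this node

            void insert(CharSequence key, int pos) {
                if (pos == key.length()) { terminal = true; return; }
                children.computeIfAbsent(key.charAt(pos), c -> new TrieNode())
                        .insert(key, pos + 1);
            }

            boolean contains(CharSequence key, int pos) {
                if (pos == key.length()) return terminal;
                TrieNode child = children.get(key.charAt(pos));
                return child != null && child.contains(key, pos + 1);
            }
        }

    The per-node map carries its own overhead, so this trades the fixed-array waste for hashing cost; the 4-bit-split idea above instead keeps arrays but makes the tree deeper.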

    Read the article

  • Hotspotting - tying Visualization into Other applications

    - by warren.baird
    AutoVue 20 included our first step towards providing a rich hotspotting capability that will allow visualization capabilities to be very tightly integrated into a wide range of applications. The idea is to have a close link between the visual representation of an object or place and the business objects associated with that object or place. We've been working with our partner Enigma to enable this capability in their parts catalogue - the screenshot above shows what it looks like. The image on the right is a trimmed-down version of AutoVue displaying a drawing of the various parts in an interactive way - when you click on item '6' in the AutoVue drawing, the appropriate item is highlighted in the parts catalogue, making it easy to select the parts you need and to ensure that the correct parts are selected. The integration works in both directions - when you select a part in the parts catalogue, the appropriate part is highlighted in the drawing as well. To get slightly technical for a moment, this is a simple JavaScript integration: the external application provides a JavaScript callback that AutoVue calls whenever an item is clicked on, and AutoVue provides a JavaScript function to call when an item is selected in the external application. There are also direct Java APIs available. This makes it easy to tie AutoVue into many types of applications - you can imagine, in an asset lifecycle management application, being able to click on the appropriate asset in a drawing to create a work order, instead of finding the right asset ID to enter. Or being able to click on a part or sub-assembly to trigger a change order in a product lifecycle management application. We're pretty excited about the possibilities that this capability opens up, and plan on expanding on it a lot in the future. Would this be useful in your enterprise applications? What kinds of integrations like this would be useful for you? Let us know in the comments below!
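    To make the callback shape concrete, here is a sketch in JavaScript; every name in it is hypothetical, invented purely for illustration, and is not AutoVue's actual API:

        // Hypothetical names throughout -- illustration only, not AutoVue's real API.
        // Drawing -> catalogue: the viewer invokes our callback on a click.
        viewer.setItemClickedCallback(function (itemId) {
            catalogue.highlightRow(itemId);
        });

        // Catalogue -> drawing: we call the viewer's highlight function.
        catalogue.onRowSelected(function (itemId) {
            viewer.highlightItem(itemId);
        });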

    Read the article

  • Graduated transition from Green - Yellow - Red

    - by GoldBishop
    I am having a mental block in designing an algorithm to transition from green to red, as smoothly as possible, over a potentially unknown length of time. For testing purposes I will be using 300 as my model timespan, but the algorithm design needs to be flexible enough to account for larger or even smaller timespans. I figured RGB would probably be the best space to transition in, but I am open to other color representations, assuming they are native to .NET (VB/C#). Currently I have:
    t = 300
    x = t/2
    z = 0
    low = Green (0, 255, 0)
    mid = Yellow (255, 255, 0)
    high = Red (255, 0, 0)
    Lastly, as a sort of optional piece, I want to account for the possibility of the low, mid, and high colors being flexible as well. I assume there would need to be a check to make sure that someone isn't putting in low = (255,0,0), mid = (254,0,0), and high = (253,0,0); I will handle that anomaly myself, based on the best approach to evaluate a color. Question: What would be the best approach to do the transition from low to mid and then from mid to high? What would be some potential pitfalls of implementing this type of design, if any?
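    A minimal sketch of one standard answer, offered as just that: linear interpolation per RGB channel in two legs (low to mid over the first half of the timespan, mid to high over the second), written here in C# against System.Drawing.Color:

        using System;
        using System.Drawing;

        static class ColorRamp
        {
            // Interpolate each RGB channel between two colors; f is in [0, 1].
            static Color Lerp(Color a, Color b, double f)
            {
                return Color.FromArgb(
                    (int)Math.Round(a.R + (b.R - a.R) * f),
                    (int)Math.Round(a.G + (b.G - a.G) * f),
                    (int)Math.Round(a.B + (b.B - a.B) * f));
            }

            // elapsed runs from 0 to t: low->mid for the first half, mid->high after.
            public static Color AtTime(double elapsed, double t,
                                       Color low, Color mid, Color high)
            {
                double half = t / 2.0;
                return elapsed <= half
                    ? Lerp(low, mid, elapsed / half)
                    : Lerp(mid, high, (elapsed - half) / half);
            }
        }

    With t = 300, AtTime(0, ...) gives pure green, AtTime(150, ...) pure yellow, and AtTime(300, ...) pure red, and the same holds for any timespan t. One known pitfall: straight RGB interpolation can pass through muddy intermediate shades, which is one reason to consider interpolating in HSV instead.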

    Read the article

  • Would adding award points or game features to workplace software be viewed poorly amongst the programming community?

    - by Eric P
    So one of my responsibilities at work is to build an internal tool that helps the workers enter in all their information. It's an enterprise application similar to a Windows Forms database tool, so it's not much different than developing a Word + Excel combo application, but the average person in this workgroup is a 20-40 year old woman or a random chatty male type. Plus I know all of these people are heavily involved with Facebook on a daily basis. How bad would it be if I styled my new interface to be similar to what Facebook does? People could get award points and such when they fill out different types of forms, and basically compete against each other as if it were a game. When someone completed a form, it would be posted on their wall and everyone could comment/like it just like in Facebook, so it would be like they are doing peer review for fun. The rewards would be outstanding, I would imagine. These people are so into Facebook and Facebook games that productivity would rise as they try to compete and earn points and achievements. Would this be taking advantage of the people by 'tricking them into working harder by giving them a game', or would it be viewed as something that would improve happiness at work?

    Read the article

  • UML Diagrams of Multi-Threaded Applications

    - by PersonalNexus
    For single-threaded applications I like to use class diagrams to get an overview of the architecture of that application. This type of diagram, however, hasn't been very helpful when trying to understand heavily multi-threaded/concurrent applications, for instance because different instances of a class "live" on different threads (meaning accessing an instance is safe only from the one thread it lives on). Consequently, associations between classes don't necessarily mean that I can call methods on those objects; instead I have to make that call on the target object's thread. Most literature I have dug up on the topic, such as Designing Concurrent, Distributed, and Real-Time Applications with UML by Hassan Gomaa, had some nice ideas, such as drawing thread boundaries into object diagrams, but overall seemed a bit too academic and wordy to be really useful. I don't want to use these diagrams as a high-level view of the problem domain, but rather as a detailed description of my classes/objects, their interactions, and the limitations due to the thread boundaries I mentioned above. I would therefore like to know:
    - What types of diagrams have you found to be most helpful in understanding multi-threaded applications?
    - Are there any extensions to classic UML that take into account the peculiarities of multi-threaded applications? For example, annotations illustrating that some objects might live in a certain thread while others have no thread affinity; that some fields of an object may be read from any thread but written to only from one; or that some methods are synchronous and return a result while others are asynchronous, queueing up requests and returning results, for instance, via a callback on a different thread.

    Read the article

  • developers-designers-testers interaction [closed]

    - by user29124
    Sorry for my bad English; you may not want to read this and waste your time, because it is just the lament of a layman developer... It seems no one wants to learn anything at my workplace. We have the Mantis bug tracker, but our testers use Google Docs for reports, and only developers and the team lead report bugs in Mantis. We have SVN for version control and use Smarty as our template system, but our designers give us pure HTML (sometimes it's ugly for programmers, but mostly it's OK) in archives, and changes to the design made by programmers go nowhere (I mean designers keep using their own obsolete HTML and CSS most of the time). We have a testing environment, but designers don't have access to it, even with restricted accounts. So we can only ask them where to look for the problem and then investigate it by ourselves (and make changes to the CSS by ourselves, which go nowhere most of the time...). I will not mention legacy code without documentation, tests, or any requirements; just the absence of real interaction in the programmers-designers-testers triangle. I'm not talking about using HAML, SASS, continuous integration, or anything else, just about the basic tools being used by all participants of the development process. Maybe the absence of communication is not a problem in short projects that finish up in 2 months' time, but it is in the types of projects that last for years. Any comments please...

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model
        {
            public function __construct( $data = null )
            {
                $this->name = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black-box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory() which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?

    Read the article

  • Connecting Clinical and Administrative Processes: Oracle SOA Suite for Healthcare Integration

    - by Mala Ramakrishnan
    One of the biggest IT challenges facing today’s health care industry is the difficulty finding reliable, secure, and cost-effective ways to exchange information. Payers and providers need versatile platforms for enterprise-wide information sharing. Clinicians require accurate information to provide quality care to patients while administrators need integrated information for all facets of the business operation. Both sides of the organization must be able to access information from research and development systems, practice management systems, claims systems, financial systems, and many others. Externally, these organizations must share claims data, patient records, pharmaceutical data, lab reports, and diagnostic information among third party entities—all while complying with emerging standards for formatting, processing, and storing electronic health records (EHR). Service-oriented architecture (SOA) enables developers to integrate many types of software applications, databases and computing platforms within a particular health network as well as with community, state, and national health information exchanges. The Oracle SOA Suite for healthcare integration is designed to provide healthcare organizations with comprehensive integration capabilities within a unified middleware platform, as well as with healthcare libraries and templates for streamlining healthcare IT projects. It reduces the need for specialized skills and enforces an enterprise-wide view of critical healthcare data.  Here is a new white paper that details more about this offering: Oracle SOA Suite for Healthcare Integration

    Read the article

  • MySQL – Introduction to CONCAT and CONCAT_WS functions

    - by Pinal Dave
    MySQL supports two types of concatenation functions: CONCAT and CONCAT_WS. The CONCAT function just concatenates all the argument values as they are:

        SELECT CONCAT('Television','Mobile','Furniture');

    The above code returns the following: TelevisionMobileFurniture. If you want to concatenate the values with a comma, you either need to specify the comma at the end of each value or pass the comma as an argument along with the values:

        SELECT CONCAT('Television,','Mobile,','Furniture');
        SELECT CONCAT('Television',',','Mobile',',','Furniture');

    Both of the above return the following: Television,Mobile,Furniture. However, you can omit the extra work by using the CONCAT_WS function. It stands for Concatenate With Separator. It is very similar to the CONCAT function, but accepts the separator as the first argument:

        SELECT CONCAT_WS(',','Television','Mobile','Furniture');

    The result is Television,Mobile,Furniture. If you want a pipe as the separator, you can use:

        SELECT CONCAT_WS('|','Television','Mobile','Furniture');

    The result is Television|Mobile|Furniture. So CONCAT_WS is very flexible in concatenating values along with a separator. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
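    One more difference worth noting, standard MySQL behavior though not covered above: CONCAT returns NULL as soon as any argument is NULL, while CONCAT_WS simply skips NULL values:

        SELECT CONCAT('Television', NULL, 'Furniture');         -- NULL
        SELECT CONCAT_WS(',', 'Television', NULL, 'Furniture'); -- Television,Furniture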

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySQL, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here except in that they give context. The application is powerful, handles a reasonable amount of data, and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project has_many Nodes
                has_many GlobalConditions
        Node    has_one  Parent
                has_many Nodes
                has_many WeightingFactors through NodeFactors
                has_many Tags through NodeTags
        GlobalCondition has_many Nodes (referenced by id, rather than replicating the tree)
        WeightingFactor has_many Nodes through NodeFactors
        Tag             has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .NET, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user's perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.
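    As one concrete baseline to benchmark, offered as a sketch assuming the model names above: ActiveRecord's includes eager-loads each named association in a fixed number of queries, which is often the cheap first fix for tree-shaped N+1 problems (project_id here is illustrative):

        # One query per association named, instead of one per node.
        project = Project.includes(nodes: [:weighting_factors, :tags]).find(project_id)

        # The associations are now in memory; this loop fires no further SQL.
        project.nodes.each do |node|
          node.tags.each { |tag| puts tag.name }
        end

    This does not solve recursive depth (children of children still need their own strategy), which is where the idea at the end of the question, loading everything by project id and wiring the tree up in memory, comes in.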

    Read the article

  • SQL – Download FREE Book – Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence

    - by Pinal Dave
    Recently I was preparing for Big Data and I ended up on a very interesting read for everybody. This book is published by Microsoft, and it is indeed a fantastic read in my opinion. It took me some time to read the entire book, but it was worth reading as it tries to answer two very interesting questions related to NoSQL. Here is the abstract from the book: Organizations seeking to use a NoSQL database are therefore faced with a twofold challenge:
    - Which NoSQL database(s) best meet(s) the needs of the organization?
    - How does an organization integrate a NoSQL database into its solutions?
    As I keep on reading the book, I find it very interesting and informative. I suggest that if you have time this weekend, you download the book and read it. This guide focuses on the most common types of NoSQL database currently available, describes the situations for which they are most suited, and shows examples of how you might incorporate them into a business application. The guide summarizes the experiences of a fictitious organization named Adventure Works, which implemented a solution that comprised an assortment of different databases. Download Data Access for Highly Scalable Solutions: Using SQL, NoSQL, and Polyglot Persistence. While we are talking about Big Data and NoSQL, do not forget to check out tomorrow's blog, as I am going to talk about the same subject and it will be very interesting. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, NoSQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • How do I initialize a Scala map with more than 4 initial elements in Java?

    - by GlenPeterson
    For 4 or fewer elements, something like this works (or at least compiles):

        import scala.collection.immutable.Map;

        Map<String,String> HAI_MAP = new Map4<>("Hello", "World", "Happy", "Birthday",
                                                "Merry", "XMas", "Bye", "For Now");

    For a 5th element I could do this:

        Map<String,String> b = HAI_MAP.$plus(new Tuple2<>("Later", "Aligator"));

    But I want to know how to initialize an immutable map with 5 or more elements, and I'm flailing in type-hell. Partial solution: I thought I'd figure this out quickly by compiling what I wanted in Scala, then decompiling the resultant class files. Here's the Scala:

        object JavaMapTest {
          def main(args: Array[String]) = {
            val HAI_MAP = Map(("Hello", "World"),
                              ("Happy", "Birthday"),
                              ("Merry", "XMas"),
                              ("Bye", "For Now"),
                              ("Later", "Aligator"))
            println("My map is: " + HAI_MAP)
          }
        }

    But the decompiler gave me something that has two periods in a row and thus won't compile (I don't think this is valid Java):

        scala.collection.immutable.Map HAI_MAP =
            (scala.collection.immutable.Map) scala.Predef..MODULE$.Map().apply(
                scala.Predef..MODULE$.wrapRefArray((Object[])new Tuple2[] {
                    new Tuple2("Hello", "World"),
                    new Tuple2("Happy", "Birthday"),
                    new Tuple2("Merry", "XMas"),
                    new Tuple2("Bye", "For Now"),
                    new Tuple2("Later", "Aligator") }));

    I'm really baffled by the two periods in this: scala.Predef..MODULE$. I asked about it on #java on Freenode and they said the .. looked like a decompiler bug. It doesn't seem to want to compile, so I think they are probably right. I'm running into it when I try to browse interfaces in IntelliJ and am just generally lost. Based on my experimentation, the following is valid:

        Tuple2[] x = new Tuple2[] {
            new Tuple2<String,String>("Hello", "World"),
            new Tuple2<String,String>("Happy", "Birthday"),
            new Tuple2<String,String>("Merry", "XMas"),
            new Tuple2<String,String>("Bye", "For Now"),
            new Tuple2<String,String>("Later", "Aligator")
        };
        scala.collection.mutable.WrappedArray<Tuple2> y = scala.Predef.wrapRefArray(x);

    There is even a WrappedArray.toMap() method, but the types in the signature are complicated and I'm running into the double-period problem there too when I try to research the interfaces from Java.
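    For what it's worth, a hedged sketch of the direct spelling from Java, assuming Scala 2.x: the decompiler's double period is its rendering of the module class Predef$ (the trailing $ gets swallowed), so the compilable form is scala.Predef$.MODULE$, and Map$.MODULE$.apply takes a Seq of Tuple2 of any length. The wrapper class name here is illustrative:

        import scala.Predef$;
        import scala.Tuple2;
        import scala.collection.immutable.Map;
        import scala.collection.immutable.Map$;

        public class ScalaMapFromJava {
            // Sketch: build an immutable Scala Map of arbitrary size from Java.
            // The raw-typed Tuple2 array forces the unchecked cast below.
            @SuppressWarnings("unchecked")
            static Map<String, String> haiMap() {
                return (Map<String, String>) Map$.MODULE$.apply(
                    Predef$.MODULE$.wrapRefArray(new Tuple2[] {
                        new Tuple2<String, String>("Hello", "World"),
                        new Tuple2<String, String>("Happy", "Birthday"),
                        new Tuple2<String, String>("Merry", "XMas"),
                        new Tuple2<String, String>("Bye", "For Now"),
                        new Tuple2<String, String>("Later", "Aligator")
                    }));
            }
        }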

    Read the article

  • CEO Taken Captive in His Own Factory?

    - by Stephen Slade
    Last Friday was no ordinary day for Chip Starnes, the 42-year-old factory owner of Specialty Medical Supplies in China. He had recently announced the movement of some of the production of the company's diabetes testing equipment from Beijing to Mumbai, India. Of the 110 employees at the facility, about 80 protested by blocking the doors and refusing to let Chip Starnes out of the facility. He has now been trapped in his office for several days. The employees thought the factory was closing, but Mr. Starnes said it was not. Misinformation? Poor communications? Work stoppage. This is a good example of supply chain disruption: parked cars blocking the entrance to the facility, front gates chained closed, the CEO a prisoner in his own factory. Chip Starnes was presented with documents to sign, in Chinese, indicating he would pay severance and other demands he did not understand, possibly bankrupting the company. If you depend on supply from China and other foreign suppliers, how reliable are your sources? For example, how are the shop-floor employee relations? Is it possible to predict these types of HR risks and plan around them? What are your contingencies? It's important to ask the right questions and hear good answers. Having tools in place to rapidly evaluate, assess and react to these disruptions is the key to survival. Hear how leading organizations are reinforcing their supply chains and mitigating risk through technology with Oracle's latest release of Oracle Supply Chain Management. Source: WSJ pg. B1, June 25, 2013

    Read the article

  • Discuss: PLs are characterised by which (iso)morphisms are implemented

    - by Yttrill
    I am interested to hear discussion of the proposition summarised in the title. As we know, programming language constructions admit a vast number of isomorphisms. In some languages, at some places in the translation process, some of these isomorphisms are implemented, whilst others require code to be written to implement them. For example, in my language Felix, the isomorphism between a type T and a tuple of one element of type T is implemented, meaning the two types are indistinguishable (identical). Similarly, a tuple of N values of the same type is not merely isomorphic to an array, it is an array: the isomorphism is implemented by the compiler. Many other isomorphisms are not implemented, for example the one expressed by the following client code:

        match v with | ((?x,?y),?z) => x,(y,z)   // Felix
        match v with | ((x,y),z)    -> x,(y,z)   (* Ocaml *)

    As another example, a type constructor C of int in Felix may be used directly as a function, whilst in Ocaml you must write a wrapper:

        let c x = C x

    Another isomorphism Felix implements is the elimination of unit values, including those in tuples. Felix can do this because (most) polymorphic values are monomorphised, which can be done because it is a whole-program analyser; Ocaml, for example, cannot do this easily because it supports separate compilation. For the same reason Felix performs type-class dispatch at compile time whilst Haskell passes around dictionaries. There are some quite surprising issues here. For example, an array is just a tuple, and tuples can be indexed at run time using a match, returning a value of a corresponding sum type. Indeed, to be correct, the index used is in fact a case of a unit sum with N summands, rather than an integer. Yet, in a real implementation, if the tuple is an array the index is replaced by an integer with a range check, and the result type is replaced by the common argument type of all the constructors: two isomorphisms are involved here, but they're implemented partly in the compiler translation and partly at run time.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a separate database per customer for their data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and, very important, what are the security requirements? From the answers to these types of questions, you then have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead: it needs a certain amount of memory for locking and so on. But it has a very clean boundary; everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier: I lay out the choices in rows on a spreadsheet, with my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix; perhaps this person could segment customers into larger regions or districts or products in a database, and within that database there might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Adding complexity by generalising: how far should you go?

    - by marcog
    Reference question: http://stackoverflow.com/questions/4303813/help-with-interview-question
    The above question asked to solve a problem for an NxN matrix. While there was an easy solution, I gave a more general solution to solve the more general problem for an NxM matrix. A handful of people commented that this generalisation was bad because it made the solution more complex. One such comment is voted +8. Putting aside the hard-to-explain voting effects on SO, there are two types of complexity to be considered here:
    - Runtime complexity, i.e. how fast the code runs
    - Code complexity, i.e. how difficult the code is to read and understand
    The question of runtime complexity is something that requires a better understanding of the input data today and what it might look like in the future, taking the various growth factors into account where necessary. The question of code complexity is the one I'm interested in here. By generalising the solution, we avoid having to rewrite it in the event that the constraints change. However, at the same time it can often result in complicating the code. In the reference question, the code for NxN is easy to understand for any competent programmer, but the NxM case (unless documented well) could easily confuse someone coming across the code for the first time. So, my question is this: where should you draw the line between generalising and keeping the code easy to understand?

    Read the article

  • Example of persisting an inheritance relationship using ORM

    - by Schemer
    I have some experience with OOP and RDBs, but very little exposure to web programming. I am trying to understand what non-trivial types of problems are solved by ORM. Of course, I am familiar with the need for data persistence, but I have never encountered a need for persisting relationships between objects, a situation which is indicated in many online articles about ORM. I am not asking about the process of persisting a POJO to a database and restoring it later. Nor am I asking why ORM frameworks are useful (or a pain in the butt) for doing so. I am particularly interested in how the need arises to persist and restore relationships between objects. In various documentation I have seen many examples of persisting POJOs to a database, but the examples have all been for very simple objects that are essentially nothing more than records anyway: a constructor, some private fields, and getter/setter methods. The motivation for persisting such an "object-record" seems obvious and trivial. This example, the Hibernate ORM tutorial, offers such an example, but goes on to discuss mismatch issues of granularity, inheritance, identity, associations, and navigation that are not motivated by the example. If someone could offer a toy example of an instance where, say, the need arises to persist an inheritance relationship, I would be grateful. This might be blindingly obvious for anyone who has already encountered the situation, but I have not, and a great deal of searching and reading has not turned up any examples.
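    One toy example of exactly this kind, offered as a hedged sketch in standard JPA annotations (not taken from the Hibernate tutorial mentioned above; the Payment hierarchy is invented for illustration): a class hierarchy that must survive the round trip to flat relational rows.

        import javax.persistence.*;

        // Single-table strategy: one table holds every subclass, and a
        // discriminator column records which subclass each row represents.
        @Entity
        @Inheritance(strategy = InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name = "PAYMENT_TYPE")
        public abstract class Payment {
            @Id @GeneratedValue
            private Long id;
            private long amountInCents;
        }

        @Entity
        @DiscriminatorValue("CC")
        class CreditCardPayment extends Payment {
            private String cardNumber; // only meaningful for credit-card rows
        }

        @Entity
        @DiscriminatorValue("CASH")
        class CashPayment extends Payment { }

    The relationship being persisted here is the is-a relationship itself: rows have no notion of subclassing, so the ORM must encode "this Payment is a CreditCardPayment" as data (the discriminator), and a query for Payment must hand back correctly typed CreditCardPayment and CashPayment instances.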

    Read the article

  • How to configure chrome to open magnet url's with deluge?

    - by michael_n
    After upgrading to Ubuntu 11.04 (Natty) from 10.10, I can no longer open magnet (torrent) links in Chromium and have deluge automatically open and accept the URL. (Edit: currently ".torrent" files are not a problem; magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only Transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set deluge as the default torrent client. (And there also doesn't seem to be a "default application" setting for the bittorrent client to replace Transmission with deluge.) Notes: I found some old threads on this issue, and only one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me? Since I'm not using Firefox, manually setting apps for MIME types or extensions doesn't work (that's not an option in Chrome/Chromium, AFAIK: you have to rely on the OS). I uninstalled Transmission, and then basically nothing happened when clicking on torrent/magnet links. Running from the shell also opens Transmission (not deluge):

        xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce"

    My current URL handlers are:

        $ gconftool -a /desktop/gnome/url-handlers/magnet
         command = deluge "%s"
         needs_terminal = false
         enabled = true

    The only workaround I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk:

        $ cat /usr/bin/transmission-gtk
        #!/bin/bash
        deluge "$@"

    Anyone else run into this, know of a bug, workaround, or...?

    Read the article

  • github team workflow - to fork or not?

    - by aporat
    We're a small team of web developers currently using Subversion, but soon we're making the switch to GitHub. I'm looking at different types of GitHub workflows, and we're not sure if the whole forking concept in GitHub for each developer is such a good idea for us. If we use forks, I understand each developer will have his own private remote and local repositories. I'm worried it will make pushing changesets hard and too complex. Also, my biggest concern is that it will force each developer to have two remotes: origin (which is the remote fork) and an upstream (which is used to "sync" changes from the main repository). Not sure if it's such an easy way to do things. This is similar to the workflow explained here: https://github.com/usm-data-analysis/usm-data-analysis.github.com/wiki/Git-workflow
    If we don't use forks, we can probably get by fine using a central repo, creating a branch for each task we're working on and merging the branches into the development branch on the same repository. It means we won't be able to restrict the merging of branches, and it might be a little messy to have many branches on the central repository. Any suggestions from teams who have tried both workflows?
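    For scale, the day-to-day mechanics of the two-remote fork model come down to a handful of git commands; a hedged sketch, with placeholder URLs and branch names:

        # One-time setup: clone your fork, then register the shared repo as "upstream".
        git clone git@github.com:you/project.git
        cd project
        git remote add upstream git@github.com:team/project.git

        # Routine sync: bring in the team's changes, merge your work against them,
        # then push to your fork and open a pull request for review.
        git fetch upstream
        git merge upstream/master
        git push origin my-feature-branch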

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have much fun with, both in terms of making it and using it. However, I see a need to do mocking of types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects on this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this? As an example, I have in my old code-base, which is private, the ability to initiate a WebDAV server with just this:

        var server = new WebDAVServer();

    This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, this object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.
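    For comparison, here is a hedged sketch of what the wrapper-class alternative from the last sentence could look like in C# (the interface and adapter names are invented for illustration, not taken from the actual project): a thin seam makes the ownership/dispose rule testable with any interface-based mocking library, at the cost of exactly the boilerplate being weighed against TypeMock.

        using System;
        using System.Net;

        // Illustrative seam: just enough of HttpListener's surface for the tests.
        public interface IHttpListener : IDisposable
        {
            void Start();
            void Stop();
        }

        // Production adapter over the real, sealed HttpListener.
        public sealed class HttpListenerAdapter : IHttpListener
        {
            private readonly HttpListener listener = new HttpListener();
            public void Start() { listener.Start(); }
            public void Stop() { listener.Stop(); }
            public void Dispose() { ((IDisposable)listener).Dispose(); }
        }

        public class WebDAVServer : IDisposable
        {
            private readonly IHttpListener listener;
            private readonly bool ownsListener;

            public WebDAVServer() : this(new HttpListenerAdapter())
            {
                ownsListener = true;
            }

            public WebDAVServer(IHttpListener listener)
            {
                this.listener = listener;
            }

            // The rule under test: dispose the listener only if this server created it.
            public void Dispose()
            {
                if (ownsListener) listener.Dispose();
            }
        }

    A test then passes a fake IHttpListener into the second constructor and asserts whether Dispose was forwarded, with no TypeMock required.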

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world status would need to be updated about once per minute. I am looking for an answer that persuades me either to perform the world update on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require:
    - Reading 5 private entity variables (fetched from the datastore)
    - Fetching as many as 20 static variables (from the datastore or persisted in server memory)
    - Writing 5 entity variables
    Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute. This would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore. This would be more bandwidth intensive for the datastore.
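    For the GAE-side option, the per-minute trigger itself is just a cron entry; a sketch, with an illustrative handler URL:

        # cron.yaml
        cron:
        - description: per-minute world update
          url: /tasks/world_update
          schedule: every 1 minutes

    The handler behind that URL would then do the read-compute-write pass; batching the datastore calls (fetching and writing entities in lists rather than one at a time) is the usual way to keep the per-run cost down against the quotas mentioned above.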

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of a self-referential many-to-many relationship, as described in the DataMapper associations documentation. Both cases consist of an Item class, where an item may require many other items. In both cases, I required the Ruby benchmark library and the source file, created two items, and benchmarked the require/unrequire functions as below:

        Benchmark.bmbm do |x|
          x.report("require:")   { item_1.require_item item_2, 10 }
          x.report("unrequire:") { item_1.unrequire_item item_2 }
        end

    To be clear, both functions are DataMapper add/modify calls like:

        componentMaps.create :component_id => item.id, :quantity => quantity
        componentMaps.all(:component_id => item.id).destroy!

    and

        links_to_components.create :component_id => item.id, :quantity => quantity
        links_to_components.all(:component_id => item.id).destroy!

    The results are variable, in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case by:

        (1..10000).each do |i|
          Item.create :name => "item_#{i}"
        end

        Benchmark.bmbm do |x|
          x.report("Get")   { item = Item.get 9712 }
          x.report("First") { item = Item.first :name => "item_9712" }
        end

    where the results were very different, like 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.

    Read the article
