Search Results

Search found 97400 results on 3896 pages for 'application data'.

Page 133 of 3896

  • Packaging Swing apps with integrated JavaFX content

    - by igor
    JavaFX provides a lot of interesting capabilities for developing rich client applications in Java, but what if you are working on an existing Swing application and you want to take advantage of these new features? Maybe you want to use one or two controls like the LineChart or a MediaView. Maybe you want to embed a large scene graph as an initial step in porting your application to FX. A hybrid Swing/FX application might just be the answer.

    Developing a hybrid Swing + JavaFX application is not terribly difficult, but until recently the deployment of hybrid applications has not been as simple as that of a "pure" JavaFX application. The existing tools focused on packaging either FX applications or Swing applications - they did not account for hybrid applications. With JavaFX 2.2, the tools include support for this hybrid application use case.

    Solution

    In JavaFX 2.2 we extended the packaging ant tasks to greatly simplify deploying hybrid applications. You now use the same deployment approach as you would for pure JavaFX applications: just bundle your main application jar with the fx:jar ant task and then generate the HTML/JNLP files using fx:deploy. The only difference is setting the toolkit attribute on the fx:application tag as shown below:

        <fx:application id="swingFXApp" mainClass="${main.class}" toolkit="swing"/>

    The value of ${main.class} in the example above is your application class, which has a main method. It does not need to extend the JavaFX Application class.

    The resulting package supports the same set of execution modes as a package for a JavaFX application, although the packages which are created are not identical to those created for a pure FX application. You will see two JNLP files generated in the case of a hybrid application - one for use from a Swing applet and another for the Web Start launch.

    Note that these improvements do not alter the set of features available to Swing applications. The packaging tools just make it easier to use the advanced features of JavaFX in your Swing application. The same limits still apply; for example, a Swing application cannot use JavaFX preloaders, and code changes are necessary to support HTML splash screens.

    Why should I use the JavaFX ant tasks for packaging my Swing application?

    While using the FX packaging tools for a Swing application may seem like a mismatch at face value, there are some really good reasons to use this approach. The primary justification for our packaging tools is to simplify the creation of your application artifacts and to reduce manual errors. Plus, no one should have to write JNLP by hand. Some specific benefits include:

    - Your application jar will include a launcher program. This improves your standalone launch by checking for the JavaFX runtime, guiding the user through any necessary installations, and setting the system proxy for Java.
    - The ant tasks will generate JNLP and HTML files for your Swing app. This avoids learning unnecessary details about JNLP and eliminates the error-prone hand editing of JNLP files; it simplifies using advanced features like embedding JNLP and signing jars as BLOBs to improve launch performance (you can also embed the signing certificate details to improve the user's experience); and it allows the use of web page templates to inject the generated code directly into your actual web page instead of forcing you to copy/paste the generated code snippets.

    What about native packaging?

    Absolutely! The very same ant task can generate a native bundle for a Swing application with JavaFX content.
    Try running one of these sample native bundles for the "SwingInterop" FX example: exe and dmg. I also used another feature on these examples: a click-through license agreement for .exe installers and OS X DMG drag installers.

    Small Caveat

    This packaging procedure is optimized around using the JavaFX packaging tools for your entire Swing application. If you are trying to embed JavaFX content into an existing project (with an existing build/packaging process) then you may need to experiment in order to find the best way to integrate the JavaFX packaging steps into your existing build procedure. As long as you can use ant in your build process this should be a workable approach. In some cases the solution could be less than ideal. For example, you need to use fx:jar to package your main jar file in order to produce a double-clickable jar or a native bundle. The jar will be created from scratch, but you may already be creating the main jar file with a custom manifest. This may lead to some redundant steps in your build process. Hopefully the benefits will outweigh the problems.

    This is an area of ongoing development for the team, and we will continue to refine and improve both the tools and the process. Please share your experiences and suggestions with us. You can comment here on the blog or file issues in JIRA.

    Sample code

    Here is the full ant code used to package SwingInterop. You can grab the latest JavaFX samples and try it yourself:

        <target name="-post-jar">
            <taskdef resource="com/sun/javafx/tools/ant/antlib.xml"
                     uri="javafx:com.sun.javafx.tools.ant"
                     classpath="${javafx.tools.ant.jar}"/>

            <!-- Mark application as Swing-based -->
            <fx:application id="swingFXApp"
                            mainClass="${main.class}"
                            toolkit="swing"/>

            <!-- Create double-clickable jar file with embedded launcher -->
            <fx:jar destfile="${dist.jar}">
                <fileset dir="${build.classes.dir}"/>
                <fx:application refid="swingFXApp" name="SwingInterop"/>
                <manifest>
                    <attribute name="Implementation-Vendor" value="${application.vendor}"/>
                    <attribute name="Implementation-Title" value="${application.title}"/>
                    <attribute name="Implementation-Version" value="1.0"/>
                </manifest>
            </fx:jar>

            <!-- Sign application jar. Use new self-signed certificate -->
            <delete file="${build.dir}/test.keystore"/>
            <genkey alias="TestAlias" storepass="xyz123"
                    keystore="${build.dir}/test.keystore"
                    dname="CN=Samples, OU=JavaFX Dev, O=Oracle, C=US"/>
            <fx:signjar keystore="${build.dir}/test.keystore" alias="TestAlias" storepass="xyz123">
                <fileset file="${dist.jar}"/>
            </fx:signjar>

            <!-- Generate JNLPs, HTML and native bundles -->
            <fx:deploy width="960" height="720" includeDT="true"
                       nativeBundles="all"
                       outdir="${basedir}/${dist.dir}" embedJNLP="true"
                       outfile="${application.title}">
                <fx:application refId="swingFXApp"/>
                <fx:resources>
                    <fx:fileset dir="${basedir}/${dist.dir}" includes="SwingInterop.jar"/>
                </fx:resources>
                <fx:permissions/>
                <info title="Sample app: ${application.title}" vendor="${application.vendor}"/>
            </fx:deploy>
        </target>
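    For context on the code side, the "hybrid" part is simply a Swing frame hosting a JFXPanel. The sketch below is illustrative only (the class name and label content are mine, not from the article) and shows the kind of plain main-method application the packaging above targets:

        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import javafx.application.Platform;
        import javafx.embed.swing.JFXPanel;
        import javafx.scene.Scene;
        import javafx.scene.control.Label;
        import javafx.scene.layout.StackPane;

        // Plain main-method entry point: it does not extend javafx.application.Application.
        public class HybridSwingFxApp {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() { createAndShowGui(); }
                });
            }

            private static void createAndShowGui() {
                JFrame frame = new JFrame("Hybrid Swing + JavaFX");
                final JFXPanel fxPanel = new JFXPanel();   // also bootstraps the JavaFX runtime
                frame.add(fxPanel);
                frame.setSize(400, 300);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.setVisible(true);

                // Scene graph work must happen on the JavaFX application thread.
                Platform.runLater(new Runnable() {
                    public void run() {
                        StackPane root = new StackPane();
                        root.getChildren().add(new Label("JavaFX content inside Swing"));
                        fxPanel.setScene(new Scene(root));
                    }
                });
            }
        }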

    Read the article

  • AJAX Return Problem from data sent via jQuery.ajax

    - by Anthony Garand
    I am trying to receive a json object back from php after sending data to the php file from the js file. All I get is undefined. Here are the contents of the php and js file.

    data.php

        <?php
        $action = $_GET['user'];
        $data = array(
            "first_name" => "Anthony",
            "last_name"  => "Garand",
            "email"      => "[email protected]",
            "password"   => "changeme");

        switch ($action) {
            case '[email protected]':
                echo $_GET['callback'] . '(' . json_encode($data) . ');';
                break;
        }
        ?>

    core.js

        $(document).ready(function(){
            $.ajax({
                url: "data.php",
                data: {"user":"[email protected]"},
                context: document.body,
                data: "jsonp",
                success: function(data){ renderData(data); }
            });
        });

        function renderData(data) {
            document.write(data.first_name);
        }

    Read the article

  • Entity Framework 4 POCO entities in separate assembly, Dynamic Data Website?

    - by steve.macdonald
    Basically I want to use a dynamic data website to maintain data in an EF4 model where the entities are in their own assembly. Model and context are in another assembly. I tried this http://stackoverflow.com/questions/2282916/entity-framework-4-self-tracking-entities-asp-net-dynamic-data-error but get an "ambiguous match" error from reflection:

        System.Reflection.AmbiguousMatchException was unhandled by user code
          Message=Ambiguous match found.
          Source=mscorlib
          StackTrace:
               at System.RuntimeType.GetPropertyImpl(String name, BindingFlags bindingAttr, Binder binder, Type returnType, Type[] types, ParameterModifier[] modifiers)
               at System.Type.GetProperty(String name)
               at System.Web.DynamicData.ModelProviders.EFTableProvider..ctor(EFDataModelProvider dataModel, EntitySet entitySet, EntityType entityType, Type entityClrType, Type parentEntityClrType, Type rootEntityClrType, String name)
               at System.Web.DynamicData.ModelProviders.EFDataModelProvider.CreateTableProvider(EntitySet entitySet, EntityType entityType)
               at System.Web.DynamicData.ModelProviders.EFDataModelProvider..ctor(Object contextInstance, Func`1 contextFactory)
               at System.Web.DynamicData.ModelProviders.SchemaCreator.CreateDataModel(Object contextInstance, Func`1 contextFactory)
               at System.Web.DynamicData.MetaModel.RegisterContext(Func`1 contextFactory, ContextConfiguration configuration)
               at WebApplication1.Global.RegisterRoutes(RouteCollection routes) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 42
               at WebApplication1.Global.Application_Start(Object sender, EventArgs e) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 78
          InnerException:

    Read the article

  • Properties in partial class not appearing in Data Sources window!

    - by Tim Murphy
    Entity Framework has created the required partial classes. I can add these partial classes to the Data Sources window and the properties display as expected. However, if I extend any of the classes in a separate source file, these properties do not appear in the Data Sources window even after a build and refresh. All properties in partial classes across source files work as expected in the Data Sources window except when the partial class has been created with EF. EDIT: After removing the offending table from the EDM designer and adding it back in, it all works as expected. Hardly a long-term solution. Anyone else come across a similar problem?

    Read the article

  • Data validation: fail fast, fail early vs. complete validation

    - by Vivin Paliath
    Regarding data validation, I've heard that the options are to "fail fast, fail early" or "complete validation". The first approach fails on the very first validation error, whereas the second one builds up a list of failures and presents it. I'm wondering about this in the context of both server-side and client-side data validation. Which method is appropriate in which context, and why? My personal preference for data validation on the client side is the second method, which informs the user of all failing constraints. I'm not informed enough to have an opinion about the server side, although I would imagine it depends on the business logic involved.
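    A minimal sketch of the second, collect-everything style in Java (the class and message strings are purely illustrative):

        import java.util.ArrayList;
        import java.util.List;

        // "Complete validation": gather every failing constraint instead of stopping at the first one.
        public class SignupValidator {
            public List<String> validate(String email, String password) {
                List<String> errors = new ArrayList<>();
                if (email == null || !email.contains("@")) {
                    errors.add("Email address is not valid.");
                }
                if (password == null || password.length() < 8) {
                    errors.add("Password must be at least 8 characters long.");
                }
                // An empty list means the input passed; a fail-fast validator would
                // instead throw (or return) on the first failed check above.
                return errors;
            }
        }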

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However, from the client's point of view, there is only 1 database. By this I mean when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides very nice functionality for client-specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?
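    Setting WCF Data Services aside for a moment, the fan-out the question describes - run the same query against every database and merge the rows - can be sketched in plain JDBC like this (the connection URLs, the query, and the single-column result are all placeholder assumptions, not anything from the question):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.ArrayList;
        import java.util.List;

        // Illustrative only: query several identically-shaped databases and combine the rows.
        public class FanOutQuery {
            public static List<String> queryAll(List<String> jdbcUrls, String sql) throws Exception {
                List<String> combined = new ArrayList<>();
                for (String url : jdbcUrls) {                       // one database per URL
                    try (Connection con = DriverManager.getConnection(url);
                         Statement st = con.createStatement();
                         ResultSet rs = st.executeQuery(sql)) {
                        while (rs.next()) {
                            combined.add(rs.getString(1));          // merge into a single result set
                        }
                    }
                }
                return combined;                                    // the client sees "one database"
            }
        }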

    Read the article

  • How can I use System.Web.Caching.Cache in a Console application?

    - by Ron Klein
    Context: .NET 3.5, C#

    I'd like to have a caching mechanism in my console application. Instead of re-inventing the wheel, I'd like to use System.Web.Caching.Cache (and that's a final decision, I can't use another caching framework, don't ask why). However, it looks like System.Web.Caching.Cache is supposed to run only in a valid HTTP context. My very simple snippet looks like this:

        using System;
        using System.Web.Caching;
        using System.Web;

        Cache c = new Cache();
        try
        {
            c.Insert("a", 123);
        }
        catch (Exception ex)
        {
            Console.WriteLine("cannot insert to cache, exception:");
            Console.WriteLine(ex);
        }

    and the result is:

        cannot insert to cache, exception:
        System.NullReferenceException: Object reference not set to an instance of an object.
           at System.Web.Caching.Cache.Insert(String key, Object value)
           at MyClass.RunSnippet()

    So obviously, I'm doing something wrong here. Any ideas?

    Update: +1 to most answers, getting the cache via static methods is the correct usage, namely HttpRuntime.Cache and HttpContext.Current.Cache. Thank you all!

    Read the article

  • How to unit test internals (organization) of a data structure?

    - by Herms
    I've started working on a little ruby project that will have sample implementations of a number of different data structures and algorithms. Right now it's just for me to refresh on stuff I haven't done for a while, but I'm hoping to have it set up kind of like Ruby Koans, with a bunch of unit tests written for the data structures but the implementations empty (with full implementations in another branch). It could then be used as a nice learning tool or code kata. However, I'm having trouble coming up with a good way to write the tests. I can't just test the public behavior as that won't necessarily tell me about the implementation, and that's kind of important here. For example, the public interfaces of a normal BST and a Red-Black tree would be the same, but the RB Tree has very specific data organization requirements. How would I test that?
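    One option, hedged and illustrative only (the question's project is in Ruby; this Java version just shows the shape of the check): test the internal organization by asserting the red-black invariants directly over the tree's internal node type after each mutation, rather than through the public interface alone.

        // A sketch of the structural check such a test suite needs: given the tree's
        // internal Node type (colour + children), verify the red-black rules directly.
        final class Node {
            boolean red;          // false = black
            Node left, right;
        }

        final class RedBlackRules {
            /** Returns the black-height if the subtree satisfies the red-black invariants, -1 otherwise. */
            static int blackHeight(Node n) {
                if (n == null) return 1;                          // nil leaves count as black
                if (n.red && ((n.left != null && n.left.red) || (n.right != null && n.right.red))) {
                    return -1;                                    // red node with a red child
                }
                int lh = blackHeight(n.left);
                int rh = blackHeight(n.right);
                if (lh == -1 || rh == -1 || lh != rh) return -1;  // unequal black heights
                return lh + (n.red ? 0 : 1);
            }

            static boolean isValidRedBlackTree(Node root) {
                return (root == null || !root.red) && blackHeight(root) != -1;
            }
        }

    A unit test can then drive insert/delete through the public API (exposing the root via a package-private accessor or similar hook) and assert isValidRedBlackTree(root) after every operation - something a plain BST with the same public interface would quickly fail.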

    Read the article

  • R: How can I reorder the rows of a matrix, data.frame or vector according to another one.

    - by John
    test1 <- as.matrix(c(1, 2, 3, 4, 5))
    row.names(test1) <- c("a", "b", "c", "d", "e")
    test2 <- as.matrix(c(6, 7, 8, 9, 10))
    row.names(test2) <- c("e", "d", "c", "b", "a")

        test1
          [,1]
        a    1
        b    2
        c    3
        d    4
        e    5

        test2
          [,1]
        e    6
        d    7
        c    8
        b    9
        a   10

    How can I reorder test2 so that the rows are in the same order as test1? e.g.:

        test2
          [,1]
        a   10
        b    9
        c    8
        d    7
        e    6

    I tried to use the reorder function with: reorder(test1, test2) but I could not figure out the correct syntax. I see that reorder takes a vector, and I'm here using a matrix. My real data has one character vector and another as a data.frame. I figured that the data structure would not matter too much for this example above, I just need help with the syntax and can adapt it to my real problem.

    Read the article

  • How to save objects using Multi-Threading in Core Data?

    - by Konstantin
    I'm getting some data from a web service and saving it in Core Data. The workflow looks like this:

    1. get the XML feed
    2. go over every item in that feed, creating a new ManagedObject for every feed item
    3. download some big binary data for every item and save it into the ManagedObject
    4. call [managedObjectContext save:]

    Now, the problem is of course the performance - everything runs on the main thread. I'd like to re-factor as much as possible onto another thread, but I'm not sure where I should start. Is it OK to put everything (1-4) on a separate thread?

    Read the article

  • multiple move operations and data processes in work thread

    - by younevertell
    main thread -- start work thread -- StartStage (get list of positions for data processing) -- move to one position -- data sampling -- data collection -- data analysis -- ... -- data sampling

    Basically, the work thread runs the data sampling -- data collection -- data analysis -- ... -- data sampling loop for one position until stop is pressed or the target is obtained.

    My question: after the work thread finishes the loop for one position, it ends itself. How do I make the work thread move on to the next position and run the data processing loop there after it finishes one position's work, so that it does not end itself until the data processing for all positions is done? Thanks in advance!
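    For what it's worth, a hedged Java sketch of that structure (Position and the moveTo/sample/collect/analyze calls are placeholders for the real code): the work thread owns the outer loop over all positions and only ends when every position is processed or a stop is requested.

        import java.util.List;
        import java.util.concurrent.atomic.AtomicBoolean;

        // Illustrative worker: outer loop over positions, inner loop per position.
        public class DataProcessWorker extends Thread {
            private final List<Position> positions;
            private final AtomicBoolean stopRequested = new AtomicBoolean(false);

            public DataProcessWorker(List<Position> positions) {
                this.positions = positions;
            }

            public void requestStop() {            // called from the main/UI thread
                stopRequested.set(true);
            }

            @Override
            public void run() {
                for (Position p : positions) {     // the thread ends only after ALL positions
                    if (stopRequested.get()) break;
                    moveTo(p);
                    boolean targetObtained = false;
                    while (!targetObtained && !stopRequested.get()) {
                        sample(p);                    // data sampling
                        collect(p);                   // data collection
                        targetObtained = analyze(p);  // data analysis decides when this position is done
                    }
                }
            }

            // Placeholders for the real hardware/data calls.
            private void moveTo(Position p)     { /* ... */ }
            private void sample(Position p)     { /* ... */ }
            private void collect(Position p)    { /* ... */ }
            private boolean analyze(Position p) { return true; }

            static class Position { /* coordinates etc. */ }
        }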

    Read the article

  • R: What are the best functions to deal with concatenating and averaging values in a data.frame?

    - by John
    I have a data.frame from this code:

        my_df = data.frame("read_time" = c("2010-02-15", "2010-02-15",
                                           "2010-02-16", "2010-02-16",
                                           "2010-02-16", "2010-02-17"),
                           "OD" = c(0.1, 0.2, 0.1, 0.2, 0.4, 0.5))

    which produces this:

        > my_df
           read_time  OD
        1 2010-02-15 0.1
        2 2010-02-15 0.2
        3 2010-02-16 0.1
        4 2010-02-16 0.2
        5 2010-02-16 0.4
        6 2010-02-17 0.5

    I want to average the OD column over each distinct read_time (notice some are replicated, others are not) and I also would like to calculate the standard deviation, producing a table like this:

        > my_df
           read_time   OD stdev
        1 2010-02-15 0.15  0.05
        5 2010-02-16 0.3   0.1
        6 2010-02-17 0.5   0

    Which are the best functions to deal with concatenating such values in a data.frame?

    Read the article

  • What's the reason why core data takes care of the life-cycle of modeled properties?

    - by mystify
    The docs say that I should not release any modeled property in -dealloc. For me, that feels like violating the big memory management rules. I see a big retain in the header and no -release, because Core Data seems to do it at some other time. Is it because Core Data may drop the value of a property dynamically, at any time when needed? And what is Core Data doing when dropping a managed object? If there's no -dealloc, then how and when are the properties getting freed up?

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story:

    One of my friends recently called up and asked me if I had spare time to look at his database and give him performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. Since his database is in a very early stage and is small at the moment, he told me that he would like me to have the complete database. My response to him was, "Sure! In that case, take a backup of the database and send it to me. I will restore it on my computer and play with it."

    He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. Now, he left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I can get in touch with him again.

    Technical Talk:

    If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulling the plug on machines, machine crashes or any other reasons), it is possible (there's no guarantee) to attach the .mdf file alone to the server. Please note that there can be many more reasons for a database not getting attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this; I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a disaster recovery expert.

        USE [master]
        GO

        -- Method 1: I use this method
        EXEC sp_attach_single_file_db @dbname='TestDb',
            @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
        GO

        -- Method 2:
        CREATE DATABASE TestDb ON
            (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
            FOR ATTACH_REBUILD_LOG
        GO

    Method 2: If one or more log files are missing, they are recreated again.

    There is one more method, which I am demonstrating here but have not used myself before. According to Books Online, it will work only if there is just one log file missing. If more than one log file is involved, all of them are required to undergo the same procedure.

        -- Method 3:
        CREATE DATABASE TestDb ON
            (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
            FOR ATTACH
        GO

    Please read Books Online in depth and consult DR experts before working on the production server. In my case, the above syntax just worked fine as the database was clean when it was detached. Feel free to write your opinions and experiences, for it will help the IT community learn more from your suggestions and skills.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1. You can learn more about it and download it here.

    EF 4.1 includes the new "EF Code First" option that I've blogged about several times in the past. EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file. Below are links to some tutorials I've written in the past about it:

    - Code First Development with Entity Framework 4.x
    - EF Code First: Custom Database Schema Mapping
    - Using EF Code First with an Existing Database

    The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different) - but the concepts and scenarios outlined in them are the same as with the RC.

    Go Live License

    Last week's EF 4.1 RC ships with a "go live" license that enables you to use it in production environments. The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release.

    Improvements with the RC

    The RC includes several improvements and enhancements. The EF team has a good blog post summarizing the RC changes. Scott Hanselman also has a nice video interview with the data team that talks more about the release.

    One of my favorite improvements introduced with last week's RC is its support for medium trust security. This enables you to use EF 4.1 (and code-first) within low-cost ASP.NET shared hosting web environments - without requiring a hoster to install anything to use it.

    EF 4.1 also now supports validation with not only code-first scenarios, but also model-first and database-first workflows.

    Upgrading from previous releases

    The RC does include a few API tweaks and changes from the prior CTP builds. Read the release notes that come with the release to get a more detailed listing of the changes.

    John Papa also has an excellent Upgrading to EF 4.1 RC blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release. The work to upgrade is pretty straightforward and easy - use his write-up as a guide on how to quickly update projects of your own.

    NuGet Package Rename

    One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package name from "EFCodeFirst" to "EntityFramework". They decided to make this change since the EF 4.1 release now includes several additions above and beyond just code first.

    If you already have installed the "EFCodeFirst" NuGet package, you'll want to uninstall it and then install the new "EntityFramework" NuGet package. John Papa's blog post details the exact steps on how to do this (it only takes ~20 seconds to do this).

    More EF Tutorials

    Julie Lerman has created some nice whitepapers and tutorials for MSDN that show using the new EF4 and EF 4.1 feature set. Click here to find links to read and watch them.

    Summary

    I'm really excited about the EF 4.1 release that will be shipping next month. It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET. You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services. You can use NuGet to easily install it within all of them.

    Hope this helps,

    Scott

    P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record, or a badly formatted/empty row, in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out the reason why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for the fellow developers.

    OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database, in most cases really). Oftentimes such an input file is produced by some sort of out-of-control, third-party solution and comes in with some garbage characters and/or even malformed/mis-formatted rows. One such example could be this imaginary file: as you can see, several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that will only capture the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source.

    Let's outline our course of action to start off:

    1. Use SSIS 2005 to create a DFT;
    2. The DFT will use a Flat File Source for our input [bad] flat file;
    3. We will use a Conditional Split to process the bad input file; and finally
    4. Dump the resulting data to a new [clean] file.

    Well, only four steps; let's see if it is too much work.

    Step 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT).

    Steps 2 and 3: I had added a data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. What really is key in this approach is to properly set up the Conditional Split Transformation, and specifically its SSIS expression:

        LEN([After CS Column 0]) > 1

    The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the Output Name to "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part will be crucial - consuming the output of our Conditional Split.

    Last step, #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings.

    As the last step, we will run our package and examine the produced output file. F5: and... it looks great!

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank was saying about the "Future of Work", and how the business should prepare in the face of the new generation, not only of devices and the "internet of things" but also of their users ("the Millennials") who are moving from "consumers" to "prosumers": we are at a turning point today which is bringing us to the next IT architecture wave. This is no longer just about putting Big Data, social networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be gotten rid of, and which new processes need to be put in place. It is about managing both the hierarchical, structured enterprise and its social connections and influencers inside and outside of the enterprise.

    And this applies everywhere, up to utilities and smart grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave and is able to answer business questions, putting those capabilities in real time right in the hands of the decision makers... This is the turning curve, where IT is really moving from the past decade's "cost center" to "value for the business", as corporate stakeholders will be able to touch the value directly at the tips of their fingers.

    It is all about making data-driven strategic decisions, encompassed and enriched by ALL the data, and connected to customer/prosumer influencers. This gives stakeholders the ability to make informed decisions on questions like: "Which Olympic gold winner would be best to represent my automotive brand?"... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my enterprise data. A true example was demonstrated by Larry Ellison in real time during yesterday's keynote, where "Hardware and Software Engineered to Work Together" is not only about extreme performance but also about solutions that the business can touch thanks to well-integrated Customer eXperience Management and social networking: bringing IT the capabilities to move to the next IT architecture wave.

    This was also illustrated today in two other sessions that I had the opportunity to attend. The first session brought the "Internet of Things" in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets and Exalytics to provide the informed-decision interface up to the end user. The second session showed the Real Time Decision engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet.

    I have to close my post here, as I have to go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud recently announced last Sunday by Larry and developed through many examples this morning by John Fowler.

    John also announced Solaris 11.1 today, with a range of network innovation and virtualization at the OS level, as well as many optimizations for applications, such as Oracle RAC, with the introduction of the lock manager inside the Solaris kernel. Last but not least, he introduced the Xsigo Datacenter Fabric for highly simplified network and storage virtualization for your cloud infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • When to use SOAP over REST

    So, how do REST-based services differ from SOAP-based services, and when should you use SOAP?

    Representational State Transfer (REST) implements standard HTTP/HTTPS as an interface, allowing clients to obtain access to resources based on requested URIs. An example URI may look like this: http://mydomain.com/service/method?parameter=var1&parameter=var2. It is important to note that REST-based services are stateless because HTTP/HTTPS is natively stateless. One of the many benefits of implementing HTTP/HTTPS as the interface can be found in caching. Caching can be done on a web service much like caching is done on requested web pages; it allows for reduced web server processing and increased response times because content is already processed and stored for immediate access. Typical actions performed by REST-based services include generic CRUD (Create, Read, Update, and Delete) operations and operations that do not require state.

    Simple Object Access Protocol (SOAP), on the other hand, uses a generic interface in order to transport messages. Unlike REST, SOAP can use HTTP/HTTPS, SMTP, JMS, or any other standard transport protocol. Furthermore, SOAP utilizes XML in the following ways:

    - to define a message
    - to define how a message is to be processed
    - to define the encoding of a message
    - to lay out procedure calls and responses

    Whereas REST aligns more with a resource view, SOAP aligns more with a method view, in that business logic is typically exposed as methods through a SOAP web service because it can retain state. In addition, SOAP requests are not cached, so every request will be processed by the server. As stated before, SOAP does retain state, and this gives it a special advantage over REST for services that need to perform transactions where multiple calls to a service are needed in order to complete a task. Additionally, SOAP is more suitable for enterprise-level services that implement standard exchange formats in the form of contracts, because REST does not currently support this.

    A real-world example of where SOAP is preferred over REST can be seen in the banking industry, where money is transferred from one account to another. SOAP would allow a bank to perform a transaction on an account and, if the transaction failed, SOAP would automatically retry the transaction, ensuring that the request was completed. Unfortunately, with REST, failed service calls must be handled manually by the requesting application.

    References:

    Francia, S. (2010). SOAP vs. REST. Retrieved 11 20, 2011, from spf13: http://spf13.com/post/soap-vs-rest
    Rozlog, M. (2010). REST and SOAP: When Should I Use Each (or Both)? Retrieved 11 20, 2011, from Infoq.com: http://www.infoq.com/articles/rest-soap-when-to-use-each
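    To make the REST half concrete, a stateless GET against a URI like the one above is just a plain HTTP request. A minimal Java sketch using only the standard library (the endpoint is the article's hypothetical example, not a real service):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;

        // Minimal REST client sketch: one stateless GET, nothing but HTTP.
        public class RestGetExample {
            public static void main(String[] args) throws Exception {
                URL url = new URL("http://mydomain.com/service/method?parameter=var1&parameter=var2");
                HttpURLConnection con = (HttpURLConnection) url.openConnection();
                con.setRequestMethod("GET");

                System.out.println("HTTP status: " + con.getResponseCode());
                try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);   // the representation of the requested resource
                    }
                }
                con.disconnect();
            }
        }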

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been done to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface to review code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick

    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw the things I was interested in were called:

    - TableType
    - DbTableName
    - FriendlyName
    - QueryDefinition

    Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously, making the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

        using System;
        using System.Data;
        using System.IO;
        using Microsoft.AnalysisServices;

        ... class code removed for clarity

        // connect to the OLAP server
        Server olapServer = new Server();
        olapServer.Connect(config.olapServerName);
        if (olapServer != null)
        {
            // connected to server ok, so obtain reference to the OLAP database
            Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
            if (olapDatabase != null)
            {
                Console.WriteLine(string.Format("Succesfully connected to '{0}' on '{1}'",
                    config.olapDatabaseName, config.olapServerName));

                // export SQL from each data source view (usually only one, but can be many!)
                foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
                {
                    Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                    // for each table in the DSV, export the SQL in a file
                    foreach (DataTable dt in dsv.Schema.Tables)
                    {
                        Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                        // get name of the table in the DSV
                        // use the FriendlyName as the user inputs this and therefore has control of it
                        string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                        string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                        // delete the sql file if it exists...
                        ... file deletion code removed for clarity

                        // write out the SQL to a file
                        if (dt.ExtendedProperties["TableType"].ToString() == "View")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                        }
                        if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                        }
                    }
                }
                Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
            }
        }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility will be of limited practical value, unless of course you are inheriting a project and want to check if someone did the implementation correctly.

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle Partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems' externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

    Sean Mattson, Hitachi Data Systems

    Q: Did you run into any issues in the deployment of the platform?
    A: There were some challenges; we were one of the first enterprise 'on premise' installations for Lingotek, and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot and in the end managed to resolve all issues and roll out a very compelling solution for HDS.

    Q: What has been the biggest benefit your end users have seen?
    A: Being able to manage and govern the content lifecycle globally and centrally, and at the same time enabling the field to update, review and publish the incremental content changes without a lot of touchpoints, has helped us streamline and simplify the entire publishing process.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated and self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.

    Rob Vandenberg, Lingotek

    Q: Are there any limitations regarding supported languages, such as support for French Canadian and Indian languages?
    A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese and Korean.

    Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites?
    A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites.

    Q: Can translation memories help to improve the accuracy of machine translation?
    A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to have higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations.

    Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use?
    A: Yes, Lingotek is standards compliant. We support TM import in both the TMX and XLIFF formats.

    Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories to give us a means of just translating future adds and changes ourselves?
    A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software as well as the professional translation services. All the content and translation memories are yours.

    Q: Can you give us an example of where community translation has proved to be successful?
    A: The key word here is community. If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in Product User Groups content, Support Communities, and other types of user-generated content, like wikis and blogs.

    If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

    Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Current State EA: Focus on the Integration!!!

    - by Eric A. Stephens
    A recent project has me at the front end of a large implementation effort covering multiple software components. In addition to the challenges of integrating 15-20 separate, new software components, there is the challenge of integrating the portfolio into an existing environment. As with other clients I've worked with and other environments I've worked in over the years, this is typical: the applications are undocumented and under-patched, leading to a mystery for any architect leading change.

    We can boil down most architecture development methodologies (ADM) into first understanding the current/baseline state and then envisioning one or more future states. Many pundits emphasize the need to focus on the future/target states. I agree, since enterprise architecture (EA) is about where you are going and not so much where you have been. But to be effective in the future, I contend some focused time needs to be spent on the current state, and specifically on the integration.

    Integration is always the difficult part of a project (I might put it more coarsely at a cocktail party). While I don't have a case study, my anecdotal experience suggests poorly integrated application portfolios tend to cost more to operate and create entropy when trying to respond to new changes and opportunities. In the aforementioned project, I was able to get one of our EAs assigned to focus on just integration almost immediately. While we're still early in the process, this EA is uncovering all sorts of information that will greatly assist our future-state planning for this solution. This information is driving early decision making that we anticipate will accelerate our efforts moving forward.

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem

    Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea

    The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic and can pull their data directly from models or other sources, but most of the fragments are static scaffolding.

    If a subtree of fragments is completely static, then I imagine that they could unfold into pure XML that would then be cached as the text representation of their parent element. This process would ideally continue until we are left with a root element that contains all of the static XML and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed.

    In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time which propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes or sometimes even longer is sufficient.

    Other fragments would be dynamic and uncached. Typically too many articles are viewed for them to be cached - the cache would overflow. Some individual articles may be cached if they are extremely popular.

    Functional notes

    The folding mechanism could be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page.

    If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of parent fragments, then they could trigger a refolding process that would then include the updated data.

    A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast.

    The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to then either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The Question

    My question has two parts:

    1. Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References?
    2. Is there an existing alternative that I should consider? A cool templating engine maybe?
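    A minimal Java sketch of the fragment abstraction described above (all class and method names are invented purely to illustrate static, dynamic-cached, and dynamic-uncached fragments):

        import java.time.Instant;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Supplier;

        // Illustrative fragment tree: static scaffolding folds to plain markup,
        // cached dynamic fragments re-render only after their expiry has passed.
        abstract class Fragment {
            final List<Fragment> children = new ArrayList<>();
            abstract String render();                       // produce the (X)HTML for this node
        }

        final class StaticFragment extends Fragment {
            private final String template;                  // e.g. "<div>%s</div>"
            StaticFragment(String template) { this.template = template; }
            String render() {
                StringBuilder inner = new StringBuilder();
                for (Fragment child : children) inner.append(child.render());
                return String.format(template, inner);      // fully foldable if all children are static
            }
        }

        final class CachedDynamicFragment extends Fragment {
            private final Supplier<String> source;          // pulls from a model, feed, etc.
            private final long ttlSeconds;
            private String cached;
            private Instant expiry = Instant.MIN;
            CachedDynamicFragment(Supplier<String> source, long ttlSeconds) {
                this.source = source;
                this.ttlSeconds = ttlSeconds;
            }
            String render() {
                if (Instant.now().isAfter(expiry)) {        // expiry propagation is left out here
                    cached = source.get();
                    expiry = Instant.now().plusSeconds(ttlSeconds);
                }
                return cached;
            }
        }

    An uncached dynamic fragment is just the cached variant with a zero TTL, and "folding" amounts to replacing an all-static subtree with its pre-rendered string.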

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it.

    In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    Update 2-6-11:

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving.

    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes.

    Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    Final edit:

    I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing and eventually the Spotlight icon would say 6 hours remaining, with all mdworkers gone and mds at 90% or so of CPU.

    But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2 - things like that), I will edit again to let you all know. Thanks for any attempts to help, M

    Read the article

  • PHP running as a FastCGI application (php-cgi) - how to issue concurrent requests?

    - by Sbm007
    Some background information: I'm writing my own webserver in Java, and a couple of days ago I asked on SO how exactly Apache interfaces with PHP, so I can implement PHP support. I learnt that FastCGI is the best approach (since mod_php is not an option). So I have looked at the FastCGI protocol specification and have managed to write a working FastCGI wrapper for my server. I have tested phpinfo() and it works; in fact all PHP functions seem to work just fine (posting data, sessions, date/time, etc.).

    My webserver is able to serve requests concurrently (i.e. user1 can retrieve file1.html at the same time as user2 is requesting some_large_binary_file.zip). It does this by spawning a new Java thread for each user request (terminating when completed or when the connection with the client is cancelled). However, it cannot deal with 2 (or more) FastCGI requests at the same time. What it does is queue them up, so when request 1 is completed, immediately thereafter it starts processing request 2. I tested this with 2 PHP pages, one containing sleep(10) and the other phpinfo().

    How would I go about dealing with multiple requests, as I know it can be done (PHP under IIS runs as FastCGI and it can deal with multiple requests just fine)?

    Some more info: I am coding under Windows and my batch file used to execute php-cgi.exe contains:

        set PHP_FCGI_CHILDREN=8
        set PHP_FCGI_MAX_REQUESTS=500
        php-cgi.exe -b 9000

    But it does not spawn 8 children; the service simply terminates after 500 requests. I have done research and, from Wikipedia:

        Processing of multiple requests simultaneously is achieved either by using a single connection with internal multiplexing (ie. multiple requests over a single connection) and/or by using multiple connections

    Now clearly the multiple connections isn't working for me, as every time a client requests something that involves FastCGI it creates a new socket to the FastCGI application, but it does not work concurrently (it queues them up instead). I know that internal multiplexing of FastCGI requests under the same connection is accomplished by issuing each unique FastCGI request with a different request ID (also see the last 3 paragraphs of 'The Communication Protocol' heading in this article).

    I have not tested this, but how would I go about implementing that? I take it I need some kind of FastCGI Java thread which contains a Map of some sort and a static function which I can use to add requests to. Then in the thread's run() function it would have a while loop, and for every cycle it would check whether the Map contains new requests; if so it would assign them a request ID and write them to the FastCGI stream, and then wait for input, etc. As you can see this becomes too complicated.

    Does anyone know the correct way of doing this? Or any thoughts at all? Thanks very much. Note, if required I can supply the code for my FastCGI wrapper.
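    For what it's worth, the dispatcher the question sketches might be structured roughly like this. This is only a structural outline under stated assumptions: the actual FastCGI record framing (FCGI_BEGIN_REQUEST, FCGI_PARAMS, FCGI_STDIN/STDOUT and so on) is hidden behind placeholder methods, all names are made up for illustration, and a real loop would block on socket input rather than spin.

        import java.io.IOException;
        import java.net.Socket;
        import java.util.Map;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicInteger;

        // Sketch of multiplexing many requests over ONE FastCGI connection by request ID.
        public class FcgiMultiplexer extends Thread {
            private final Socket connection;                              // single connection to php-cgi
            private final BlockingQueue<PendingRequest> outgoing = new ArrayBlockingQueue<>(64);
            private final Map<Integer, PendingRequest> inFlight = new ConcurrentHashMap<>();
            private final AtomicInteger nextId = new AtomicInteger(1);

            public FcgiMultiplexer(Socket connection) { this.connection = connection; }

            /** Called by the per-client worker threads; returns a future for the response body. */
            public CompletableFuture<byte[]> submit(PendingRequest req) throws InterruptedException {
                req.id = nextId.getAndIncrement();                        // unique request ID = the multiplexing key
                outgoing.put(req);
                return req.response;
            }

            @Override
            public void run() {
                try {
                    while (!isInterrupted()) {
                        // 1. push any newly submitted requests onto the shared connection
                        PendingRequest req;
                        while ((req = outgoing.poll()) != null) {
                            inFlight.put(req.id, req);
                            writeBeginRequestAndStdin(connection, req);   // placeholder for real FCGI records
                        }
                        // 2. read whatever record arrives next and route it by its request ID
                        int id = readNextRecordId(connection);            // placeholder
                        PendingRequest owner = inFlight.get(id);
                        if (owner != null && readIsEndRequest(connection)) {
                            inFlight.remove(id);
                            owner.response.complete(readBufferedStdout(connection, id)); // placeholder
                        }
                    }
                } catch (IOException e) {
                    inFlight.values().forEach(p -> p.response.completeExceptionally(e));
                }
            }

            public static class PendingRequest {
                int id;
                final CompletableFuture<byte[]> response = new CompletableFuture<>();
                // plus CGI params, POST body, etc.
            }

            // The real work lives here; these stand in for FastCGI record encoding/decoding.
            private void writeBeginRequestAndStdin(Socket s, PendingRequest r) throws IOException { /* ... */ }
            private int readNextRecordId(Socket s) throws IOException { return 0; }
            private boolean readIsEndRequest(Socket s) throws IOException { return true; }
            private byte[] readBufferedStdout(Socket s, int id) throws IOException { return new byte[0]; }
        }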

    Read the article

  • android communication between two applications

    - by androidTesting
    Hello, I need some help on how to start developing two Android applications (on one phone) which communicate with each other:

    1. Application A sends a string to application B.
    2. Application B receives the string, for example "startClassOne"; app B uses a method to start classOne and gets the result. The result is sent back (again as a string!) to application A.
    3. Application A writes the received string from B to the console.
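    One common way to wire this up (a hedged sketch; the package, class, and extra names below are all invented for illustration, and each class lives in its own app/file) is an explicit Intent from app A plus startActivityForResult, with app B handing its string back via setResult:

        // In application A (hypothetical package com.example.appa): send a string to app B
        // and log whatever string comes back.
        public class SenderActivity extends android.app.Activity {
            private static final int REQUEST_CODE = 42;

            private void callAppB() {
                android.content.Intent intent = new android.content.Intent();
                // Explicit component of app B's receiving activity (names are illustrative).
                intent.setClassName("com.example.appb", "com.example.appb.ReceiverActivity");
                intent.putExtra("command", "startClassOne");
                startActivityForResult(intent, REQUEST_CODE);
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, android.content.Intent data) {
                super.onActivityResult(requestCode, resultCode, data);
                if (requestCode == REQUEST_CODE && resultCode == RESULT_OK && data != null) {
                    android.util.Log.d("AppA", "Result from B: " + data.getStringExtra("result"));
                }
            }
        }

        // In application B (hypothetical package com.example.appb): read the command,
        // run the work, and hand the result string back to the caller.
        public class ReceiverActivity extends android.app.Activity {
            @Override
            protected void onCreate(android.os.Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                String command = getIntent().getStringExtra("command");
                String result = "startClassOne".equals(command) ? runClassOne() : "unknown command";
                android.content.Intent reply = new android.content.Intent();
                reply.putExtra("result", result);
                setResult(RESULT_OK, reply);
                finish();                      // control returns to app A with the result string
            }

            private String runClassOne() { return "result of ClassOne"; }
        }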

    Read the article
