Search Results

Search found 58779 results on 2352 pages for 'cleaned data'.


  • AJAX Return Problem from data sent via jQuery.ajax

    - by Anthony Garand
    I am trying to receive a json object back from php after sending data to the php file from the js file. All I get is undefined. Here are the contents of the php and js file.

    data.php:

        <?php
        $action = $_GET['user'];
        $data = array(
            "first_name" => "Anthony",
            "last_name"  => "Garand",
            "email"      => "[email protected]",
            "password"   => "changeme");
        switch ($action) {
            case '[email protected]':
                echo $_GET['callback'] . '(' . json_encode($data) . ');';
                break;
        }
        ?>

    core.js:

        $(document).ready(function(){
            $.ajax({
                url: "data.php",
                data: {"user":"[email protected]"},
                context: document.body,
                data: "jsonp",
                success: function(data){ renderData(data); }
            });
        });

        function renderData(data) {
            document.write(data.first_name);
        }
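
    One thing a reader can note in the snippet above: the $.ajax options object passes data twice, so the second value ("jsonp") overwrites the request parameters; jQuery's option for the expected response format is dataType. Below is a hedged sketch of the call with that option spelled out (illustrative only, not necessarily the asker's eventual fix):

        $(document).ready(function () {
            $.ajax({
                url: "data.php",
                data: { "user": "[email protected]" }, // request parameters sent to data.php
                dataType: "jsonp",                    // expected response format; jQuery adds the callback parameter
                context: document.body,
                success: function (data) {
                    renderData(data);                 // data arrives as an already-parsed object
                }
            });
        });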

    Read the article

  • Entity Framework 4 POCO entities in separate assembly, Dynamic Data Website?

    - by steve.macdonald
    Basically I want to use a dynamic data website to maintain data in an EF4 model where the entities are in their own assembly. Model and context are in another assembly. I tried this http://stackoverflow.com/questions/2282916/entity-framework-4-self-tracking-entities-asp-net-dynamic-data-error but get an "ambiguous match" error from reflection:

        System.Reflection.AmbiguousMatchException was unhandled by user code
          Message=Ambiguous match found.
          Source=mscorlib
          StackTrace:
            at System.RuntimeType.GetPropertyImpl(String name, BindingFlags bindingAttr, Binder binder, Type returnType, Type[] types, ParameterModifier[] modifiers)
            at System.Type.GetProperty(String name)
            at System.Web.DynamicData.ModelProviders.EFTableProvider..ctor(EFDataModelProvider dataModel, EntitySet entitySet, EntityType entityType, Type entityClrType, Type parentEntityClrType, Type rootEntityClrType, String name)
            at System.Web.DynamicData.ModelProviders.EFDataModelProvider.CreateTableProvider(EntitySet entitySet, EntityType entityType)
            at System.Web.DynamicData.ModelProviders.EFDataModelProvider..ctor(Object contextInstance, Func`1 contextFactory)
            at System.Web.DynamicData.ModelProviders.SchemaCreator.CreateDataModel(Object contextInstance, Func`1 contextFactory)
            at System.Web.DynamicData.MetaModel.RegisterContext(Func`1 contextFactory, ContextConfiguration configuration)
            at WebApplication1.Global.RegisterRoutes(RouteCollection routes) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 42
            at WebApplication1.Global.Application_Start(Object sender, EventArgs e) in C:\dev\Puffin\Puffin.Prototype.Web\Global.asax.cs:line 78
          InnerException:

    Read the article

  • Properties in partial class not appearing in Data Sources window!

    - by Tim Murphy
    Entity Framework has created the required partial classes. I can add these partial classes to the Data Sources window and the properties display as expected. However, if I extend any of the classes in a separate source file, these properties do not appear in the Data Sources window, even after a build and refresh. All properties in partial classes across source files work as expected in the Data Sources window, except when the partial class has been created with EF. EDIT: After removing the offending table from the EDM designer and adding it back in, it all works as expected. Hardly a long-term solution. Has anyone else come across a similar problem?
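
    For readers trying to picture the setup, "extending a class in a separate source file" means a second half of the partial class along these lines (a minimal C# sketch; the Customer class and its properties are illustrative, not from the original model). One C# detail worth remembering in this scenario is that the two halves only merge when the namespace and class name match the generated code exactly.

        // CustomerExtensions.cs - hypothetical hand-written half of an EF-generated partial class
        namespace MyModel // must match the namespace of the generated Customer class exactly
        {
            public partial class Customer
            {
                // An extra computed property defined outside the designer-generated file
                public string DisplayName
                {
                    get { return FirstName + " " + LastName; }
                }
            }
        }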

    Read the article

  • Data validation: fail fast, fail early vs. complete validation

    - by Vivin Paliath
    Regarding data validation, I've heard that the options are to "fail fast, fail early" or "complete validation". The first approach fails on the very first validation error, whereas the second one builds up a list of failures and presents it. I'm wondering about this in the context of both server-side and client-side data validation. Which method is appropriate in what context, and why? My personal preference for data-validation on the client-side is the second method which informs the user of all failing constraints. I'm not informed enough to have an opinion about the server-side, although I would imagine it depends on the business logic involved.
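
    To make the two strategies concrete, here is a small server-side sketch in Java (the question names no particular stack, so the language and the SignupValidator example are purely illustrative): the fail-fast variant stops at the first violated constraint, while the complete-validation variant collects every violation so they can all be presented at once.

        import java.util.ArrayList;
        import java.util.List;

        public class SignupValidator {

            // Fail fast: stop at the very first violated constraint.
            public static void validateFailFast(String email, String password) {
                if (email == null || !email.contains("@"))
                    throw new IllegalArgumentException("Invalid email address");
                if (password == null || password.length() < 8)
                    throw new IllegalArgumentException("Password must be at least 8 characters");
            }

            // Complete validation: collect every violation and report them together.
            public static List<String> validateAll(String email, String password) {
                List<String> errors = new ArrayList<>();
                if (email == null || !email.contains("@"))
                    errors.add("Invalid email address");
                if (password == null || password.length() < 8)
                    errors.add("Password must be at least 8 characters");
                return errors; // an empty list means the input passed all constraints
            }
        }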

    Read the article

  • Is it Possible to Query Multiple Databases with WCF Data Services?

    - by Mas
    I have data being inserted into multiple databases with the same schema. The multiple databases exist for performance reasons. I need to create a WCF service that a client can use to query the databases. However from the client's point of view, there is only 1 database. By this I mean when a client performs a query, it should query all databases and return the combined results. I also need to provide the flexibility for the client to define its own queries. Therefore I am looking into WCF Data Services, which provides the very nice functionality for client specified queries. So far, it seems that a DataService can only make a query to a single database. I found no override that would allow me to dispatch queries to multiple databases. Does anyone know if it is possible for a WCF Data Service to query against multiple databases with the same schema?

    Read the article

  • How to unit test internals (organization) of a data structure?

    - by Herms
    I've started working on a little ruby project that will have sample implementations of a number of different data structures and algorithms. Right now it's just for me to refresh on stuff I haven't done for a while, but I'm hoping to have it set up kind of like Ruby Koans, with a bunch of unit tests written for the data structures but the implementations empty (with full implementations in another branch). It could then be used as a nice learning tool or code kata. However, I'm having trouble coming up with a good way to write the tests. I can't just test the public behavior as that won't necessarily tell me about the implementation, and that's kind of important here. For example, the public interfaces of a normal BST and a Red-Black tree would be the same, but the RB Tree has very specific data organization requirements. How would I test that?

    Read the article

  • R: How can I reorder the rows of a matrix, data.frame or vector according to another one.

    - by John
        test1 <- as.matrix(c(1, 2, 3, 4, 5))
        row.names(test1) <- c("a", "b", "c", "d", "e")
        test2 <- as.matrix(c(6, 7, 8, 9, 10))
        row.names(test2) <- c("e", "d", "c", "b", "a")

        test1
          [,1]
        a    1
        d    2
        c    3
        b    4
        e    5

        test2
          [,1]
        e    6
        d    7
        c    8
        b    9
        a   10

    How can I reorder test2 so that the rows are in the same order as test1? e.g.:

        test2
          [,1]
        a   10
        d    7
        c    8
        b    9
        e    6

    I tried to use the reorder function with:

        reorder(test1, test2)

    but I could not figure out the correct syntax. I see that reorder takes a vector, and I'm here using a matrix. My real data has one character vector and another as a data.frame. I figured that the data structure would not matter too much for this example above; I just need help with the syntax and can adapt it to my real problem.
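
    As a hedged aside for readers (not necessarily the solution the asker settled on), base R can do this kind of reordering by indexing on row names, without reorder():

        # Reorder test2 so its rows follow the row order of test1
        test2_reordered <- test2[row.names(test1), , drop = FALSE]

        # Equivalently, match() gives the permutation explicitly, which also works for the
        # data.frame case mentioned at the end of the question (my_order is an illustrative name)
        my_order <- match(row.names(test1), row.names(test2))
        test2_reordered <- test2[my_order, , drop = FALSE]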

    Read the article

  • How to save objects using Multi-Threading in Core Data?

    - by Konstantin
    I'm getting some data from a web service and saving it in Core Data. The workflow looks like this:

    1. get the XML feed
    2. go over every item in that feed, creating a new ManagedObject for every feed item
    3. download some big binary data for every item and save it into the ManagedObject
    4. call [managedObjectContext save:]

    Now, the problem is of course performance - everything runs on the main thread. I'd like to refactor as much as possible onto another thread, but I'm not sure where I should start. Is it OK to put everything (1-4) on a separate thread?
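
    For what it's worth, steps 1-4 can run off the main thread, with the caveat that a managed object context must not be shared across threads. Below is a rough Objective-C sketch of the commonly documented pattern of that era: a background context on the same persistent store coordinator, with its save merged into the main context. Property names such as persistentStoreCoordinator and mainContext are illustrative, not from the asker's project.

        // Kick off the import with: [self performSelectorInBackground:@selector(importFeed) withObject:nil];
        - (void)importFeed {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            NSManagedObjectContext *backgroundContext = [[NSManagedObjectContext alloc] init];
            [backgroundContext setPersistentStoreCoordinator:self.persistentStoreCoordinator];

            // steps 1-3: fetch the XML feed, create managed objects, attach the binary data ...

            NSError *error = nil;
            if (![backgroundContext save:&error]) {   // step 4, performed on the background thread
                NSLog(@"Background save failed: %@", error);
            }
            [backgroundContext release];
            [pool release];
        }

        // Observer for NSManagedObjectContextDidSaveNotification (registration not shown):
        // merge the background changes into the main context on the main thread.
        - (void)contextDidSave:(NSNotification *)notification {
            [self.mainContext performSelectorOnMainThread:@selector(mergeChangesFromContextDidSaveNotification:)
                                               withObject:notification
                                            waitUntilDone:YES];
        }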

    Read the article

  • multiple move operations and data processes in work thread

    - by younevertell
    main thread -- start work thread -- StartStage (get the list of positions for data processing) -- move to one position -- data sampling -- data collection -- data analysis -- ... -- data sampling

    Basically, the work thread runs the data sampling -- data collection -- data analysis -- data sampling loop for one position until stop is pressed or the target is obtained.

    My question: after the work thread finishes the loop for one position, it ends itself. How do I make the work thread move on to the next position and run the data-processing loop there, so that it does not end itself until the data processing for all positions is done? Thanks in advance!

    Read the article

  • R: What are the best functions to deal with concatenating and averaging values in a data.frame?

    - by John
    I have a data.frame from this code:

        my_df = data.frame("read_time" = c("2010-02-15", "2010-02-15",
                                           "2010-02-16", "2010-02-16",
                                           "2010-02-16", "2010-02-17"),
                           "OD" = c(0.1, 0.2, 0.1, 0.2, 0.4, 0.5))

    which produces this:

        > my_df
           read_time  OD
        1 2010-02-15 0.1
        2 2010-02-15 0.2
        3 2010-02-16 0.1
        4 2010-02-16 0.2
        5 2010-02-16 0.4
        6 2010-02-17 0.5

    I want to average the OD column over each distinct read_time (notice some are replicated, others are not) and I would also like to calculate the standard deviation, producing a table like this:

        > my_df
           read_time   OD stdev
        1 2010-02-15 0.15  0.05
        5 2010-02-16  0.3   0.1
        6 2010-02-17  0.5     0

    Which are the best functions to deal with concatenating such values in a data.frame?
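
    As a hedged illustration of the kind of base R functions usually reached for here (not necessarily the answer the asker received), aggregate() can compute the per-read_time statistics and merge() can combine them:

        # Mean and standard deviation of OD for each distinct read_time (base R)
        means <- aggregate(OD ~ read_time, data = my_df, FUN = mean)
        sds   <- aggregate(OD ~ read_time, data = my_df, FUN = sd)
        names(sds)[2] <- "stdev"

        summary_df <- merge(means, sds, by = "read_time")
        summary_df
        # Note: sd() of a single observation is NA in R, so the 2010-02-17 row
        # gets NA rather than the 0 shown in the desired output above.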

    Read the article

  • What's the reason why core data takes care of the life-cycle of modeled properties?

    - by mystify
    The docs say that I should not release any modeled property in -dealloc. To me, that feels like it violates the big memory management rules. I see a retain in the header and no -release, because Core Data seems to do it at some other time. Is it because Core Data may drop the value of a property dynamically, at any time when needed? And what is Core Data doing when it drops a managed object? If there's no -dealloc, then how and when are the properties getting freed up?

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story: One of my friends recently called and asked me if I had spare time to look at his database and give him some performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. He said that since his database is at a very early stage and still small at the moment, he would like me to have the complete database. My response to him was "Sure! In that case, take a backup of the database and send it to me. I will restore it onto my computer and play with it." He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. Now, he had left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I could get in touch with him again.

    Technical Talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulling the plug on the machine, machine crashes or any other reason), it is possible (though there is no guarantee) to attach only the .mdf file to the server. Please note that there can be many more reasons for a database not getting attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this. I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a Disaster Recovery expert.

        USE [master]
        GO

        -- Method 1: I use this method
        EXEC sp_attach_single_file_db @dbname='TestDb',
        @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
        GO

        -- Method 2:
        CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH_REBUILD_LOG
        GO

    Method 2: If one or more log files are missing, they are recreated again. There is one more method, which I am demonstrating here but have not used myself before. According to Books Online, it will work only if there is one log file missing. If more than one log file is involved, all of them are required to undergo the same procedure.

        -- Method 3:
        CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH
        GO

    Please read Books Online in depth and consult DR experts before working on a production server. In my case, the above syntax just worked fine as the database was clean when it was detached. Feel free to write about your opinions and experiences, as it will help the IT community learn more from your suggestions and skills. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1.  You can learn more about it and download it here. EF 4.1 includes the new “EF Code First” option that I’ve blogged about several times in the past.  EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file.  Below are links to some tutorials I’ve written in the past about it: Code First Development with Entity Framework 4.x EF Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different) – but the concepts and scenarios outlined in them are the same as with the RC. Go Live License Last week’s EF 4.1 RC ships with a “go live” license that enables you to use it in production environments.  The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release. Improvements with the RC The RC includes several improvements and enhancements.  The EF team has a good blog post summarizing the RC changes.  Scott Hanselman also has a nice video interview with the data team that talks more about the release. One of my favorite improvements introduced with last week’s RC is its support for medium trust security.  This enables you to use EF 4.1 (and code-first) within low-cost ASP.NET shared hosting web environments – without requiring a hoster to install anything to use it. EF 4.1 also now supports validation with not only code-first scenarios, but also model-first and database-first workflows.  Upgrading from previous releases The RC does include a few API tweaks and changes from the prior CTP builds.  Read the release notes that come with the release to get a more detailed listing of the changes. John Papa also has an excellent Upgrading to EF 4.1 RC blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release.  The work to upgrade is pretty straight forward and easy – use his write-up as a guide on how to quickly update projects of your own. NuGet Package Rename One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package name from “EFCodeFirst” to “EntityFramework”. They decided to make this change since the EF 4.1 release now includes several additions above and beyond just code first. If you already have installed the “EFCodeFirst” NuGet package, you’ll want to uninstall it and then install the new “EntityFramework” NuGet package.  John Papa’s blog post details the exact steps on how to do this (it only takes ~20 seconds to do this). More EF Tutorials Julie Lerman has created some nice whitepapers and tutorials for MSDN that show using the new EF4 and EF 4.1 feature set. Click here to find links to read and watch them. Summary I’m really excited about the EF 4.1 release that will be shipping next month.  It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET.  You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services.  You can use NuGet to easily install it within all of them. Hope this helps, Scott P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient, I mean the shortest time to solution for fellow developers. OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database, in most cases). Oftentimes, such an input file is produced by some sort of out-of-control, third-party solution and comes in with garbage characters and/or malformed rows. One such example could be this imaginary file: as you can see, several rows have no data and there is an occasional garbage character (1, in this example on row #7). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let's outline our course of action to start off:

    1. Use SSIS 2005 to create a DFT;
    2. The DFT will use a Flat File Source for our input [bad] flat file;
    3. We will use a Conditional Split to process the bad input file; and finally
    4. Dump the resulting data to a new [clean] file.

    Well, only four steps; let's see if it is too much work. Step 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). Steps 2 and 3: I added the data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. What really is key in this approach is to properly set up the Conditional Split Transformation, specifically its SSIS Expression: LEN([After CS Column 0]) > 1. The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the Output Name "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part is crucial - consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and choose the green arrow to drop onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package is complete. As the last step, we run the package to examine the produced output file. F5: and... it looks great!

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank was saying about the "Future of Work", and how the business should prepare in the face of the new generation - not only of devices and the "internet of things", but also of their users ("The Millennials"), moving from "consumers" to "prosumers" - we are at a turning point today which is bringing us to the next IT architecture wave. This is no longer just about putting Big Data, social networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve by identifying which processes need to be improved, but also, and more importantly, which processes are obsolete and need to be gotten rid of, and which new processes need to be put in place. It is about managing both the hierarchical, structured enterprise and its social connections and influencers inside and outside of the enterprise. And this applies everywhere, right up to utilities and smart grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with these new technologies, but about understanding what needs to be looked at, in real time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, answer business questions, and put those capabilities, in real time, right into the hands of the decision makers... This is the turning curve, where IT really moves from the past decade's "cost center" to "value for the business", as corporate stakeholders become able to touch the value directly at the tips of their fingers. It is all about making data-driven strategic decisions, encompassed and enriched by ALL the data, and connected to customer/prosumer influencers. This gives stakeholders the ability to make informed decisions on questions like: "What would be the best Olympic gold winner to represent my automotive brand?"... in a few clicks and in real time, based on social media analysis (Twitter, Facebook, Google+...) and connections linked to my enterprise data. A true example was demonstrated by Larry Ellison in real time during his keynote yesterday, where "Hardware and Software Engineered to Work Together" is not only about extreme performance but also about solutions the business can touch, thanks to well-integrated Customer eXperience Management and social networking: giving IT the capabilities to move to the next IT architecture wave. This was also illustrated today in two other sessions that I had the opportunity to attend. The first session brought the "Internet of Things" in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with a ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets and Exalytics to provide the informed-decision interface up to the end user. The second session showed the Real Time Decision engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet. I have to close my post here, as I have to go run our practical hands-on lab, cooked up with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud announced last Sunday by Larry and developed through many examples this morning by John Fowler.
John also announced today Solaris 11.1 with a range of network innovation and virtualization at the OS level, as well as many optimizations for applications, like for Oracle RAC, with the introduction of the lock manager inside Solaris Kernel. Last but not least, he introduced Xsigo Datacenter Fabric for highly simplified networks and storage virtualization for your Cloud Infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick

    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtain the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw that the things I was interested in were called:

    TableType
    DbTableName
    FriendlyName
    QueryDefinition

    Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

        using System;
        using System.Data;
        using System.IO;
        using Microsoft.AnalysisServices;

        ... class code removed for clarity

        // connect to the OLAP server
        Server olapServer = new Server();
        olapServer.Connect(config.olapServerName);
        if (olapServer != null)
        {
            // connected to server ok, so obtain reference to the OLAP database
            Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
            if (olapDatabase != null)
            {
                Console.WriteLine(string.Format("Succesfully connected to '{0}' on '{1}'",
                    config.olapDatabaseName,
                    config.olapServerName));

                // export SQL from each data source view (usually only one, but can be many!)
                foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
                {
                    Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                    // for each table in the DSV, export the SQL in a file
                    foreach (DataTable dt in dsv.Schema.Tables)
                    {
                        Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                        // get name of the table in the DSV
                        // use the FriendlyName as the user inputs this and therefore has control of it
                        string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                        string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                        // delete the sql file if it exists...
                        ... file deletion code removed for clarity

                        // write out the SQL to a file
                        if (dt.ExtendedProperties["TableType"].ToString() == "View")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                        }
                        if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                        {
                            File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                        }
                    }
                }
                Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
            }
        }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility will be of limited practical value unless, of course, you are inheriting a project and want to check whether someone did the implementation correctly.

    Read the article

  • Simple GET operation with JSON data in ADF Mobile

    - by PadmajaBhat
    Usecase: This sample uses a RESTful service which contains a GET method that fetches employee details for an employee with given employee ID along with other methods. The data is fetched in JSON format. This RESTful service is then invoked via ADF Mobile and the JSON data thus obtained is parsed and rendered in mobile in a table. Prerequisite: Download JDev build JDEVADF_11.1.2.4.0_GENERIC_130421.1600.6436.1 or higher with mobile support.  Steps: Run EmployeeService.java in JSONService.zip. This is a simple service with a method, getEmpById(id) that takes employee ID as parameter and produces employee details in JSON format. Copy the target URL generated on running this service. The target URL will be as shown below: http://127.0.0.1:7101/JSONService-Project1-context-root/jersey/project1 Now, let us invoke this service in our mobile application. For this, create an ADF Mobile application.  Name the application JSON_SearchByEmpID and finish the wizard. Now, let us create a connection to our service. To do this, we create a URL Connection. Invoke new gallery wizard on ApplicationController project.  Select URL Connection option. In the Create URL Connection window, enter connection name as ‘conn’. For URL endpoint, supply the URL you copied earlier on running the service. Remember to use your system IP instead of localhost. Test the connection and click OK. At this point, a connection to the REST service has been created. Since JSON data is not supported directly in WSDC wizard, we need to invoke the operation through Java code using RestServiceAdapter. For this, in the ApplicationController project, create a Java class called ‘EmployeeDC’. We will be creating DC from this class. Add the following code to the newly created class to invoke the getEmpById method. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 public Employee fetchEmpDetails(){ RestServiceAdapter restServiceAdapter = Model.createRestServiceAdapter(); restServiceAdapter.clearRequestProperties(); restServiceAdapter.setConnectionName("conn"); //URL connection created with this name restServiceAdapter.setRequestType(RestServiceAdapter.REQUEST_TYPE_GET); restServiceAdapter.addRequestProperty("Content-Type", "application/json"); restServiceAdapter.addRequestProperty("Accept", "application/json; charset=UTF-8"); restServiceAdapter.setRetryLimit(0); restServiceAdapter.setRequestURI("/getById/"+inputEmpID); String response = ""; JSONBeanSerializationHelper jsonHelper = new JSONBeanSerializationHelper(); try { response = restServiceAdapter.send(""); //Invoke the GET operation System.out.println("Response received!"); Employee responseObject = (Employee) jsonHelper.fromJSON(Employee.class, response); return responseObject; } catch (Exception e) { } return null; } Here, in lines 2 to 9, we create the RestServiceAdapter and set various properties required to invoke the web service. At line 4, we are pointing to the connection ‘conn’ created previously. Since we want to invoke getEmpById method of the service, which is defined by the URL http://IP:7101/REST_Sanity_JSON-Project1-context-root/resources/project1/getById/{id} we are updating the request URI to point to this URI at line 9. inputEmpID is a variable that will hold the value input by the user for employee ID. This we will be creating in a while. As the method we are invoking is a GET operation and consumes json data, these properties are being set in lines 5 through 7. Finally, we are sending the request in line 13. 
In line 15, we use jsonHelper.fromJSON to convert received JSON data to a Java object. The required Java objects' structure is defined in class Employee.java whose structure is provided later. Since the response from our service is a simple response consisting of attributes like employee Id, name, design etc, we will just return this parsed response (line 16) and use it to create DC. As mentioned previously, we would like the user to input the employee ID for which he/she wants to perform search. So, in the same class, define a variable inputEmpID which will hold the value input by the user. Generate accessors for this variable. Lastly, we need to create Employee class. Employee class will define how we want to structure the JSON object received from the service. To design the Employee class, run the services’ method in the browser or via analyzer using path parameter as 1. This will give you the output JSON structure. Ours is a simple service that returns a JSONObject with a set of data. Hence, Employee class will just contain this set of data defined with the proper data types. Create Employee.java in the same project as EmployeeDC.java and write the below code: package application; import oracle.adfmf.java.beans.PropertyChangeListener; import oracle.adfmf.java.beans.PropertyChangeSupport; public class Employee { private String dept; private String desig; private int id; private String name; private int salary; private PropertyChangeSupport propertyChangeSupport = new PropertyChangeSupport(this); public void setDept(String dept) {         String oldDept = this.dept; this.dept = dept; propertyChangeSupport.firePropertyChange("dept", oldDept, dept); } public String getDept() { return dept; } public void setDesig(String desig) { String oldDesig = this.desig; this.desig = desig; propertyChangeSupport.firePropertyChange("desig", oldDesig, desig); } public String getDesig() { return desig; } public void setId(int id) { int oldId = this.id; this.id = id; propertyChangeSupport.firePropertyChange("id", oldId, id); } public int getId() { return id; } public void setName(String name) { String oldName = this.name; this.name = name; propertyChangeSupport.firePropertyChange("name", oldName, name); } public String getName() { return name; } public void setSalary(int salary) { int oldSalary = this.salary; this.salary = salary; propertyChangeSupport.firePropertyChange("salary", oldSalary, salary); } public int getSalary() { return salary; } public void addPropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.addPropertyChangeListener(l); } public void removePropertyChangeListener(PropertyChangeListener l) { propertyChangeSupport.removePropertyChangeListener(l);     } } Now, let us create a DC out of EmployeeDC.java.  DC as shown below is created. Now, you can design the mobile page as usual and invoke the operation of the service. To design the page, go to ViewController project and locate adfmf-feature.xml. Create a new feature called ‘SearchFeature’ by clicking the plus icon. Go the content tab and add an amx page. Call it SearchPage.amx. Call it SearchPage.amx. Remove primary and secondary buttons as we don’t need them and rename the header. Drag and drop inputEmpID from the DC palette onto Panel Page in the structure pane as input text with label. Next, drop fetchEmpDetails method as an ADF button. For a change, let us display the output in a table component instead of the usual form. 
However, you will notice that if you drag and drop Employee onto the structure pane, there is no option for ADF Mobile Table. Hence, we will need to create the table on our own. To do this, let us first drop Employee as an ADF Read -Only form. This step is needed to get the required bindings. We will be deleting this form in a while. Now, from the Component palette, search for ‘Table Layout’. Drag and drop this below the command button.  Within the tablelayout, insert ‘Row Layout’ and ‘Cell Format’ components. Final table structure should be as shown below. Here, we have also defined some inline styling to render the UI in a nice manner. <amx:tableLayout id="tl1" borderWidth="2" halign="center" inlineStyle="vertical-align:middle;" width="100%" cellPadding="10"> <amx:rowLayout id="rl1" > <amx:cellFormat id="cf1" width="30%"> <amx:outputText value="#{bindings.dept.hints.label}" id="ot7" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf2"> <amx:outputText value="#{bindings.dept.inputValue}" id="ot8" /> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl2"> <amx:cellFormat id="cf3" width="30%"> <amx:outputText value="#{bindings.desig.hints.label}" id="ot9" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf4" > <amx:outputText value="#{bindings.desig.inputValue}" id="ot10"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl3"> <amx:cellFormat id="cf5" width="30%"> <amx:outputText value="#{bindings.id.hints.label}" id="ot11" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf6" > <amx:outputText value="#{bindings.id.inputValue}" id="ot12"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl4"> <amx:cellFormat id="cf7" width="30%"> <amx:outputText value="#{bindings.name.hints.label}" id="ot13" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf8"> <amx:outputText value="#{bindings.name.inputValue}" id="ot14"/> </amx:cellFormat> </amx:rowLayout> <amx:rowLayout id="rl5"> <amx:cellFormat id="cf9" width="30%"> <amx:outputText value="#{bindings.salary.hints.label}" id="ot15" inlineStyle="color:rgb(0,148,231);"/> </amx:cellFormat> <amx:cellFormat id="cf10"> <amx:outputText value="#{bindings.salary.inputValue}" id="ot16"/> </amx:cellFormat> </amx:rowLayout>     </amx:tableLayout> The values used in the output text of the table come from the bindings obtained from the ADF Form created earlier. As we have used the bindings and don’t need the form anymore, let us delete the form.  One last thing before we deploy. When user changes employee ID, we want to clear the table contents. For this we associate a value change listener with the input text box. Click New in the resulting dialog to create a managed bean. Next, we create a method within the managed bean. For this, click on the New button associated with method. Call the method ‘empIDChange’. Open myClass.java and write the below code in empIDChange(). public void empIDChange(ValueChangeEvent valueChangeEvent) { // Add event code here... 
//Resetting the values to blank values when employee id changes AdfELContext adfELContext = AdfmfJavaUtilities.getAdfELContext(); ValueExpression ve = AdfmfJavaUtilities.getValueExpression("#{bindings.dept.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.desig.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.id.inputValue}", int.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.name.inputValue}", String.class); ve.setValue(adfELContext, ""); ve = AdfmfJavaUtilities.getValueExpression("#{bindings.salary.inputValue}", int.class); ve.setValue(adfELContext, ""); } That’s it. Deploy the application to android emulator or device. Some snippets from the app.

    Read the article

  • Webcast Q&A: Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the third webcast in our WebCenter in Action webcast series, "Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter", where customer Sean Mattson from HDS and Rob Vandenberg from Oracle Partner Lingotek shared how Oracle WebCenter is powering Hitachi Data Systems' externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

    Sean Mattson, Hitachi Data Systems

    Q: Did you run into any issues in the deployment of the platform?
    A: There were some challenges; we were one of the first enterprise "on premise" installations for Lingotek and our WebCenter platform also has a lot of custom features. There were a lot of iterations and back and forth working with Lingotek at first. We both helped each other, learned a lot and in the end managed to resolve all issues and roll out a very compelling solution for HDS.

    Q: What has been the biggest benefit your end users have seen?
    A: Being able to manage and govern the content lifecycle globally and centrally, and at the same time enabling the field to update, review and publish the incremental content changes without a lot of touchpoints, has helped us streamline and simplify the entire publishing process.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: I wouldn't say resistance as much as skepticism that we could actually deploy an automated and self-publishing solution. Even if a solution is great, adoption of a new process can be a challenge, and we are still pursuing our adoption targets. One of the most important aspects is to include lots of training and support materials and offer as much helpdesk-type support as needed to get the field self-sufficient and confident in the capabilities of the system.

    Rob Vandenberg, Lingotek

    Q: Are there any limitations regarding supported languages, such as support for French Canadian and Indian languages?
    A: Lingotek supports all language pairs, including right-to-left languages and double-byte languages such as Chinese, Japanese and Korean.

    Q: Is the Lingotek solution integrated with the new 11g release of WebCenter Sites?
    A: Yes! In fact, Lingotek is the first OVI partner for Oracle WebCenter Sites.

    Q: Can translation memories help to improve the accuracy of machine translation?
    A: One of the greatest long-term strategic benefits of using Lingotek is the accumulation of translation memories, or past human translations. These TMs can be used to "train" statistical machine translation engines to have higher and higher quality. This virtuous cycle is ongoing and will consistently improve both machine and human translations.

    Q: We have existing translation memories from previous work with our translation service provider. Can they be easily imported into the Lingotek solution for re-use?
    A: Yes, Lingotek is standards compliant. We support TM import in both the TMX and XLIFF formats.

    Q: If we use Lingotek as a service to do our professional translation and also use the Lingotek software solution, do we get the translation memories to give us a means of just translating future adds and changes ourselves?
    A: Yes, all the data is yours, always. Lingotek can provide both the integrated translation software as well as the professional translation services. All the content and translation memories are yours.

    Q: Can you give us an example of where community translation has proved to be successful?
    A: The key word here is community.
If you have a community that cares about you, your content, and the rest of the community, then community translation can work for you. We've seen effective use cases in Product User Groups content, Support Communities, and other types of User Generated content, like wikis and blogs.   If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!   Hitachi Data Systems Improves Global Web Experiences with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? ---update 2-6-11 Since I have not received any responses except the one below which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID where PID is the mdworker process ID to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: After indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving. 
I then proceeded to execute the same command on mds to see what it was doing and here's what I get, repeatedly: 501 57 mds 21 / 501 57 mds 21 /Volumes/Sno Leppard 501 57 mds 21 /Volumes/Tiger 501 57 mds 21 /Volumes/Leppard 501 57 mds 21 /Volumes/Disk Warrior 501 57 mds 21 /Volumes/ONM Data These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from SPotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID" where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing and eventually the spotlight icon would say 6 hours remaining and all mdworkers were gone, mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for wdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for ay attempts to help, M

    Read the article

  • SQL – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer

    - by Pinal Dave
    I recently wrote a four-part series on how I started to learn about and begin my journey with NuoDB. Big Data is indeed a big world and the learning of the Big Data is like spaghetti – no one knows in reality where to start, so I decided to learn it with the help of NuoDB. You can download NuoDB and continue your journey with me as well. Part 1 – Install NuoDB in 90 Seconds Part 2 – Manage NuoDB Installation Part 3 – Explore NuoDB Database Part 4 – Migrate from SQL Server to NuoDB …and in this blog post we will try to answer the most asked question about NuoDB. “I like the NuoDB Explorer but can I connect to NuoDB from my preferred Graphical User Interface?” Honestly, I did not expect this question to be asked of me so many times but from the question it is clear that we developers absolutely want to learn new things and along with that we do want to continue to use our most efficient developer tools. Now here is the answer to the question: “Absolutely, you can continue to use any of the following most popular SQL clients.” NuoDB supports the three most popular 3rd-party SQL clients. In all the leading development environments there are always more than one database installed and managing each of them with a different tool is often a very difficult task. Developers like to use one tool, which can control most of the databases. Once developers are familiar with one database tool it is very difficult for them to switch to another tool. This is particularly difficult when we developers find that tool to be the key reason for our efficiency. Let us see how to install each of the NuoDB supported 3rd party tools along with a quick tutorial on how to go about using them. SQuirreL SQL Client First download SQuirreL Universal SQL client. On the Windows platform you can double-click on the file and it will install the SQuirrel client. Once it is installed, open the application and it will bring up the following screen. Now go to the Drivers tab on the left side and scroll it down. You will find NuoDB mentioned there. Now right click over it and click on Modify Driver. Now here is where you need to make sure that you make proper entries or your client will not work with the database. Enter following values: Name: NuoDB Example URL: jdbc:com:nuodb://localhost:48004/test Website URL: http://www.nuodb.com Now click on the Extra Class Path tab and Add the location of the nuodbjdbc.jar file. If you are following my blog posts and have installed NuoDB in the default location, you will find the default path as C:\Program Files\NuoDB\jar\nuodbjdbc.jar. The class name of the driver is automatically populated. Once you click OK you will see that there is a small icon displayed to the left of NuoDB, which shows that you have successfully configured and installed the NuoDB driver. Now click on the tab of Alias tab and you can notice that it is empty. Now click on the big Plus icon and it will open screen of adding an alias. “Alias” means nothing more than adding a database to your system. The database name of the original installation can be anything and, if you wish, you can register the database with any other alternative name. Here are the details you should fill into the Alias screen below. 
    Name: Test (or your preferred alias)
    Driver: NuoDB
    URL: jdbc:com:nuodb://localhost:48004/test (this is for the test database)
    User Name: dba (the username I entered for the test database)
    Password: goalie (the password I entered for the test database)

    Check Auto Logon and Connect at Startup and click on OK. That's it! You are done. On the right side you will see a table name and on the left side you will see various tabs with all the relevant details for the respective table. You can see various metadata, schemas, data types and other information for the table. In addition, you can also generate scripts and do various important tasks related to the database. You can see how easy it is to configure NuoDB with the SQuirreL Client and get going with it immediately.

    SQL Workbench/J

    This is another wonderful client tool, which works very well with NuoDB. The best part is that in the Driver dropdown you will see NuoDB mentioned there. Click here to download the SQL Workbench/J Universal SQL client. The download and installation process is straightforward for SQL Workbench/J. As soon as you open the client, you will notice the NuoDB driver on the following screen when selecting a New Connection Profile. Select NuoDB from the drop down and click on OK. In the driver information, enter the following details:

    Driver: NuoDB (com.nuodb.jdbc.Driver)
    URL: jdbc:com.nuodb://localhost/test
    Username: dba
    Password: goalie

    When clicking on OK, it will bring up the following pop-up. Click Yes to edit the driver information. Click on OK and it will bring you to the following screen. This is the screen where you can perform various tasks. You can write any SQL query you want and it will instantly show you the results. Now click on the database icon, which you see right on the left side of the word User=dba. Once you click on Database Explorer, you can perform various database-related tasks. As a developer, one of my favorite tasks is to look at the source of a table, as it gives me a proper view of the structure of the database. I find SQL Workbench/J very efficient in doing the same.

    DbVisualizer

    DbVisualizer is another great tool, which helps you to connect to NuoDB and retrieve database information in your desired format. A developer who is familiar with DbVisualizer will find this client very easy to work with. The installation of DbVisualizer is pretty straightforward. When we open the client, it will bring us to the following screen. As a first step we need to set up the driver. Go to Tools >> Driver Manager. It will bring up the following screen where we set up the driver. Click on Create Driver and it will open up the driver settings on the right side. In the area where it displays Driver Settings, please enter the following values:

    Name: NuoDB
    URL Format: jdbc:com.nuodb://localhost:48004/test

    Now under the driver path, click on the folder icon and it will ask for the location of the jar file. Provide the path as C:\Program Files\NuoDB\jar\nuodbjdbc.jar and click OK. You will notice there is a green button displayed at the bottom right corner. This means the driver is configured properly. Once the driver is configured properly, we can go to Create Database Connection and create a database connection. If the pop-up for the Wizard shows up, click on No Wizard and continue to enter the settings manually. Here is the Database Connection screen. This screen can be a bit tricky. Here are the settings you need to remember to enter:

    Name: NuoDB
    Database Type: Generic
    Driver: NuoDB
    Database URL: jdbc:com.nuodb://localhost:48004/test
    Database Userid: dba
    Database Password: goalie

    Once you enter the values, click on Connect. Once Connect is pressed, it will change the button value to Reconnect if the connection is successfully established, and it will show the connection details on the left side. When we explore NuoDB further, we can see the various tables created in our test application. We can click further on the right side of the screen and see various details of the table. If you click on the Data tab, it will display the entire data of the table. The Tools menu also has some very interesting and cool features like Driver Manager, Data Monitor and SQL History.

    Summary

    Well, this was a relatively long post, but I find it extremely essential to cover all three important clients, which we developers use in our daily database development. Here is my question to you: which one of the following is your favorite NuoDB 3rd-party database client? (Pick one)

    SQuirreL SQL Client
    SQL Workbench/J
    DbVisualizer

    I will be very eager to read about your experience with NuoDB. You can download NuoDB from here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
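
    As an aside to the excerpt above: all three GUI clients are ultimately wrappers around the same NuoDB JDBC driver, so the settings quoted in the post map directly onto a plain JDBC connection. Here is a minimal Java sketch using those same values (the URL and the dba/goalie credentials are the post's test settings; adjust them for a real install):

        // Minimal JDBC connection sketch using the settings the GUI clients above are configured with.
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class NuoDbJdbcSketch {
            public static void main(String[] args) throws Exception {
                Class.forName("com.nuodb.jdbc.Driver"); // driver class named in the post

                try (Connection con = DriverManager.getConnection(
                        "jdbc:com.nuodb://localhost:48004/test", "dba", "goalie")) {
                    // Confirm the connection without assuming anything about the test schema
                    System.out.println("Connected to: " + con.getMetaData().getDatabaseProductName());
                }
            }
        }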

    Read the article

  • Recover Data Like a Forensics Expert Using an Ubuntu Live CD

    - by Trevor Bekolay
There are lots of utilities to recover deleted files, but what if you can't boot up your computer, or the whole drive has been formatted? We'll show you some tools that will dig deep and recover the most elusive deleted files, or even whole hard drive partitions. We've shown you simple ways to recover accidentally deleted files, even a simple method that can be done from an Ubuntu Live CD, but for hard disks that have been heavily corrupted, those methods aren't going to cut it. In this article, we'll examine four tools that can recover data from the most messed up hard drives, regardless of whether they were formatted for a Windows, Linux, or Mac computer, or even if the partition table is wiped out entirely. Note: These tools cannot recover data that has been overwritten on a hard disk. Whether a deleted file has been overwritten depends on many factors – the quicker you realize that you want to recover a file, the more likely you will be able to do so.
Our setup
To show these tools, we've set up a small 1 GB hard drive, with half of the space partitioned as ext2, a file system used in Linux, and half the space partitioned as FAT32, a file system used in older Windows systems. We stored ten random pictures in each partition. We then wiped the partition table from the hard drive by deleting the partitions in GParted. Is our data lost forever?
Installing the tools
All of the tools we're going to use are in Ubuntu's universe repository. To enable the repository, open Synaptic Package Manager by clicking on System in the top-left, then Administration > Synaptic Package Manager. Click on Settings > Repositories and add a check in the box labelled "Community-maintained Open Source software (universe)". Click Close, and then in the main Synaptic Package Manager window, click the Reload button. Once the package list has reloaded and the search index has been rebuilt, search for and mark for installation one or all of the following packages: testdisk, foremost, and scalpel. Testdisk includes TestDisk, which can recover lost partitions and repair boot sectors, and PhotoRec, which can recover many different types of files from tons of different file systems. Foremost, originally developed by the US Air Force Office of Special Investigations, recovers files based on their headers and other internal structures. Foremost operates on hard drives or drive image files generated by various tools. Finally, scalpel performs the same functions as foremost, but is focused on enhanced performance and lower memory usage. Scalpel may run better if you have an older machine with less RAM.
Recover hard drive partitions
If you can't mount your hard drive, then its partition table might be corrupted. Before you start trying to recover your important files, it may be possible to recover one or more partitions on your drive, recovering all of your files in one step. Testdisk is the tool for the job. Start it by opening a terminal (Applications > Accessories > Terminal) and typing in: sudo testdisk
If you'd like, you can create a log file, though it won't affect how much data you recover. Once you make your choice, you're greeted with a list of the storage media on your machine. You should be able to identify the hard drive you want to recover partitions from by its size and label. TestDisk asks you to select the type of partition table to search for. In most cases (ext2/3, NTFS, FAT32, etc.) you should select Intel and press Enter. Highlight Analyse and press Enter.
In our case, our small hard drive had previously been formatted as NTFS. Amazingly, TestDisk finds this partition, though it is unable to recover it. It also finds the two partitions we just deleted. We are able to change their attributes, or add more partitions, but we'll just recover them by pressing Enter. If TestDisk hasn't found all of your partitions, you can try doing a deeper search by selecting that option with the left and right arrow keys. We only had these two partitions, so we'll recover them by selecting Write and pressing Enter. Testdisk informs us that we will have to reboot. Note: If your Ubuntu Live CD is not persistent, then when you reboot you will have to reinstall any tools that you installed earlier. After restarting, both of our partitions are back to their original states, pictures and all.
Recover files of certain types
For the following examples, we deleted the 10 pictures from both partitions and then reformatted them.
PhotoRec
Of the three tools we'll show, PhotoRec is the most user-friendly, despite being a console-based utility. To start recovering files, open a terminal (Applications > Accessories > Terminal) and type in: sudo photorec
To begin, you are asked to select a storage device to search. You should be able to identify the right device by its size and label. Select the right device, and then hit Enter. PhotoRec asks you to select the type of partition to search. In most cases (ext2/3, NTFS, FAT, etc.) you should select Intel and press Enter. You are given a list of the partitions on your selected hard drive. If you want to recover all of the files on a partition, then select Search and hit Enter. However, this process can be very slow, and in our case we only want to search for picture files, so instead we use the right arrow key to select File Opt and press Enter. PhotoRec can recover many different types of files, and deselecting each one would take a long time. Instead, we press "s" to clear all of the selections, and then find the appropriate file types – jpg, gif, and png – and select them by pressing the right arrow key. Once we've selected these three, we press "b" to save these selections. Press Enter to return to the list of hard drive partitions. We want to search both of our partitions, so we highlight "No partition" and "Search" and then press Enter. PhotoRec prompts for a location to store the recovered files. If you have a different healthy hard drive, then we recommend storing the recovered files there. Since we're not recovering very much, we'll store it on the Ubuntu Live CD's desktop. Note: Do not recover files to the hard drive you're recovering from. PhotoRec is able to recover the 20 pictures from the partitions on our hard drive! A quick look in the recup_dir.1 directory that it creates confirms that PhotoRec has recovered all of our pictures, save for the file names.
Foremost
Foremost is a command-line program with no interactive interface like PhotoRec, but it offers a number of command-line options to get as much data out of your hard drive as possible. For a full list of options that can be tweaked via the command line, open up a terminal (Applications > Accessories > Terminal) and type in: foremost -h
In our case, the command-line options that we are going to use are:
-t, a comma-separated list of types of files to search for. In our case, this is "jpeg,png,gif".
-v, enabling verbose mode, giving us more information about what foremost is doing.
-o, the output folder to store recovered files in.
In our case, we created a directory called "foremost" on the desktop.
-i, the input that will be searched for files. This can be a disk image in several different formats; however, we will use a hard disk, /dev/sda.
Our foremost invocation is: sudo foremost -t jpeg,png,gif -o foremost -v -i /dev/sda
Your invocation will differ depending on what you're searching for and where you're searching for it. Foremost is able to recover 17 of the 20 files stored on the hard drive. Looking at the files, we can confirm that these files were recovered relatively well, though we can see some errors in the thumbnail for 00622449.jpg. Part of this may be due to the ext2 filesystem. Foremost recommends using the -d command-line option for Linux file systems like ext2. We'll run foremost again, adding the -d command-line option to our foremost invocation: sudo foremost -t jpeg,png,gif -d -o foremost -v -i /dev/sda
This time, foremost is able to recover all 20 images! A final look at the pictures reveals that the pictures were recovered with no problems.
Scalpel
Scalpel is another powerful program that, like Foremost, is heavily configurable. Unlike Foremost, Scalpel requires you to edit a configuration file before attempting any data recovery. Any text editor will do, but we'll use gedit to change the configuration file. In a terminal window (Applications > Accessories > Terminal), type in: sudo gedit /etc/scalpel/scalpel.conf
scalpel.conf contains information about a number of different file types. Scroll through this file and uncomment lines that start with a file type that you want to recover (i.e. remove the "#" character at the start of those lines). Save the file and close it. Return to the terminal window. Scalpel also has a ton of command-line options that can help you search quickly and effectively; however, we'll just define the input device (/dev/sda) and the output folder (a folder called "scalpel" that we created on the desktop). Our invocation is: sudo scalpel /dev/sda -o scalpel
Scalpel is able to recover 18 of our 20 files. A quick look at the files scalpel recovered reveals that most of our files were recovered successfully, though there were some problems (e.g. 00000012.jpg).
Conclusion
In our quick toy example, TestDisk was able to recover two deleted partitions, and PhotoRec and Foremost were able to recover all 20 deleted images. Scalpel recovered most of the files, but it's very likely that playing with the command-line options for scalpel would have enabled us to recover all 20 images. These tools are lifesavers when something goes wrong with your hard drive. If your data is on the hard drive somewhere, then one of these tools will track it down!

    Read the article

  • Filtering List Data with a jQuery-searchFilter Plugin

    - by Rick Strahl
When dealing with list-based data on HTML forms, filtering that data down based on a search text expression is an extremely useful feature. We're used to search boxes on just about anything these days, and HTML forms should be no different. In this post I'll describe how you can easily filter a list down to just the elements that match text typed into a search box. It's a pretty simple task and it's super easy to do, but I get a surprising number of comments from developers I work with who are surprised how easy it is to hook up this sort of behavior, so I thought it was worth a blog post.
But Angular does that out of the box, right?
These days it seems everybody is raving about Angular and the rich SPA features it provides. One of the cool features of Angular is the ability to do drop-dead simple filters where you can specify a filter expression as part of a looping construct and automatically have that filter applied so that only items that match the filter show. I think Angular has single-handedly elevated search filters to first-rate, front-row status because it's so easy. I love using Angular myself, but Angular is not a generic solution to problems like this. For one thing, using Angular requires you to render the list data with Angular – if you have data that is server rendered or static, then Angular doesn't work. Not all applications are client-side rendered SPAs – not by a long shot, and nor do all applications need to become SPAs. Long story short, it's pretty easy to achieve text filtering effects using jQuery (or plain JavaScript for that matter) with just a little bit of work. Let's take a look at an example.
Why Filter?
Client-side filtering is a very useful tool that can make it drastically easier to sift through data displayed in client-side lists. In my applications I like to display scrollable lists that contain a reasonably large amount of data, rather than the classic paging-style displays which tend to be painful to use. So I often display 50 or so items per 'page' and it's extremely useful to be able to filter this list down. Here's an example in my Time Trakker application where I can quickly glance at various common views of my time entries. I can see Recent Entries, Unbilled Entries, Open Entries etc. and filter those down by individual customers and so forth. Each of these list results tends to be a few pages' worth of scrollable content. The following screenshot shows a filtered view of Recent Entries that match the search keyword of CellPage: As you can see in this animated GIF, the filter is applied as you type, displaying only entries that match the text anywhere inside of the text of each of the list items. This is an immediately useful feature for just about any list display and adds significant value.
A few lines of jQuery
The good news is that this is trivially simple using jQuery. To get an idea what this looks like, here's the relevant page layout showing only the search box and the list layout:
<div id="divItemWrapper">
  <div class="time-entry">
    <div class="time-entry-right">
      May 11, 2014 - 7:20pm<br />
      <span style='color:steelblue'>0h:40min</span><br />
      <a id="btnDeleteButton" href="#" class="hoverbutton" data-id="16825">
        <img src="images/remove.gif" />
      </a>
    </div>
    <div class="punchedoutimg"></div>
    <b><a href='/TimeTrakkerWeb/punchout/16825'>Project Housekeeping</a></b><br />
    <small><i>Sawgrass</i></small>
  </div>
  ...
  more items here
</div>
So we have a searchbox txtSearchPage and a bunch of DIV elements with a .time-entry CSS class attached that make up the list of items displayed. Hooking up the search filter with jQuery is merely a matter of a few lines of jQuery code attached to the .keyup() event handler:
<script type="text/javascript">
  $("#txtSearchPage").keyup(function() {
    var search = $(this).val();
    $(".time-entry").show();
    if (search)
      $(".time-entry").not(":contains(" + search + ")").hide();
  });
</script>
The idea here is pretty simple: you capture the keystroke in the search box and capture the search text. Using that search text you first make all items visible and then hide all the items that don't match. Since DOM changes are applied after a method finishes execution in JavaScript, the show and hide operations are effectively batched up, so the view changes only to the final list rather than flashing the whole list and then removing items on a slow machine. You get the desired effect of the list showing the items in question.
Case Insensitive Filtering
But there is one problem with the solution above: the jQuery :contains filter is case sensitive, so your search text has to match expressions exactly, which is a bit cumbersome when typing. In the screen capture above I actually cheated – I used a custom filter that provides case-insensitive contains behavior. jQuery makes it really easy to create custom query filters, and so I created one called containsNoCase. Here's the implementation of this custom filter:
$.expr[":"].containsNoCase = function(el, i, m) {
  var search = m[3];
  if (!search) return false;
  return new RegExp(search, "i").test($(el).text());
};
This filter can be added anywhere page-level JavaScript runs – in page script or a separately loaded .js file. The filter basically extends jQuery with a : expression. Filters get passed a tokenized array that contains the expression. In this case m[3] contains the search text from inside of the brackets. A filter basically looks at the active element that is passed in and then can return true or false to determine whether the item should be matched. Here I check a regular expression that looks for the search text in the element's text.
So the code for the filter now changes to:
$(".time-entry").not(":containsNoCase(" + search + ")").hide();
And voila – you now have a case-insensitive search. You can play around with another, simpler example using this Plunker: http://plnkr.co/edit/hDprZ3IlC6uzwFJtgHJh?p=preview
Wrapping it up in a jQuery Plug-in
To make this even easier to use, and so that you can more easily remember how to use this search-type filter, we can wrap this logic into a small jQuery plug-in:
(function($, undefined) {
  $.expr[":"].containsNoCase = function(el, i, m) {
    var search = m[3];
    if (!search) return false;
    return new RegExp(search, "i").test($(el).text());
  };
  $.fn.searchFilter = function(options) {
    var opt = $.extend({
      // target selector
      targetSelector: "",
      // number of characters before search is applied
      charCount: 1
    }, options);
    return this.each(function() {
      var $el = $(this);
      $el.keyup(function() {
        var search = $(this).val();
        var $target = $(opt.targetSelector);
        $target.show();
        if (search && search.length >= opt.charCount)
          $target.not(":containsNoCase(" + search + ")").hide();
      });
    });
  };
})(jQuery);
Using this plug-in now becomes a one-liner:
$("#txtSearchPagePlugin").searchFilter({ targetSelector: ".time-entry", charCount: 2 })
You attach the .searchFilter() plug-in to the text box you are searching and specify a targetSelector that is to be filtered. Optionally you can specify a character count at which the filter kicks in, since it's typically kind of useless to filter at a single character.
Summary
This is a very easy solution to a cool user interface feature your users will thank you for. Search filtering is a simple but highly effective user interface feature, and as you've seen in this post it's very simple to create this behavior with just a few lines of jQuery code. While all the cool kids are doing Angular these days, jQuery is still useful in many applications that don't embrace the 'everything generated in JavaScript' paradigm. I hope this jQuery plug-in or just the raw jQuery will be useful to some of you…
Resources
Example on Plunker
© Rick Strahl, West Wind Technologies, 2005-2014. Posted in jQuery, HTML5, JavaScript

    Read the article

  • Data Source Security Part 3

    - by Steve Felts
In part one, I introduced the security features and talked about the default behavior. In part two, I defined the two major approaches to security credentials: directly using database credentials and mapping WLS user credentials to database credentials. Now it's time to get down to a couple of the security options (each of which can use database credentials or WLS credentials).
Set Client Identifier on Connection
When "Set Client Identifier" is enabled on the data source, a client property is associated with the connection. The underlying SQL user remains unchanged for the life of the connection, but the client value can change. This information can be used for accounting, auditing, or debugging. The client property is based on either the WebLogic user mapped to a database user using the credential map, or the database user parameter passed directly to the getConnection() method, based on the "use database credentials" setting described earlier. To enable this feature, select "Set Client ID On Connection" in the Console. See "Enable Set Client ID On Connection for a JDBC data source" http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableCredentialMapping.html in Oracle WebLogic Server Administration Console Help. The Set Client Identifier feature is only available for use with the Oracle thin driver and the IBM DB2 driver, based on the following interfaces. For pre-Oracle 12c, oracle.jdbc.OracleConnection.setClientIdentifier(client) is used. See http://docs.oracle.com/cd/B28359_01/network.111/b28531/authentication.htm#i1009003 for more information about how to use this for auditing and debugging. You can get the value using getClientIdentifier() from the driver. To get back the value from the database as part of a SQL query, use a statement like the following: "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL". Starting in Oracle 12c, java.sql.Connection.setClientInfo("OCSID.CLIENTID", client) is used. This is a JDBC standard API, although the property values are proprietary. A problem with setClientIdentifier usage is that there are pieces of the Oracle technology stack that set and depend on this value. If application code also sets this value, it can cause problems. This has been addressed with setClientInfo by making use of this method a privileged operation. A well-managed container can restrict the Java security policy grants to specific namespaces and code bases, and protect the container from out-of-control user code.
When running with the Java security manager, permission must be granted in the Java security policy file: permission "oracle.jdbc.OracleSQLPermission" "clientInfo.OCSID.CLIENTID"; Using the name "OCSID.CLIENTID" allows for upward-compatible use of "select sys_context('USERENV','CLIENT_IDENTIFIER') from DUAL", or you can use the JDBC standard API java.sql.Connection.getClientInfo("OCSID.CLIENTID") to retrieve the value. This value in the Oracle USERENV context can be used to drive the Oracle Virtual Private Database (VPD) feature to create security policies to control database access at the row and column level. Essentially, Oracle Virtual Private Database adds a dynamic WHERE clause to a SQL statement that is issued against the table, view, or synonym to which an Oracle Virtual Private Database security policy was applied. See Using Oracle Virtual Private Database to Control Data Access http://docs.oracle.com/cd/B28359_01/network.111/b28531/vpd.htm for more information about VPD. Using this data source feature means that no programming is needed on the WLS side to set this context; it is set and cleared by the WLS data source code. For the IBM DB2 driver, com.ibm.db2.jcc.DB2Connection.setDB2ClientUser(client) is used for older releases (prior to version 9.5). This specifies the current client user name for the connection. Note that the current client user name can change during a connection (unlike the user). This value is also available in the CURRENT CLIENT_USERID special register. You can select it using a statement like "select CURRENT CLIENT_USERID from SYSIBM.SYSTABLES". When running the IBM DB2 driver with JDBC 4.0 (starting with version 9.5), java.sql.Connection.setClientInfo("ClientUser", client) is used. You can retrieve the value using java.sql.Connection.getClientInfo("ClientUser") instead of the DB2 proprietary API (even if it was set using setDB2ClientUser()).
Oracle Proxy Session
Oracle proxy authentication allows one JDBC connection to act as a proxy for multiple (serial) light-weight user connections to an Oracle database with the thin driver. You can configure a WebLogic data source to allow a client to connect to a database through an application server as a proxy user. The client authenticates with the application server and the application server authenticates with the Oracle database. This allows the client's user name to be maintained on the connection with the database. Use the following steps to configure proxy authentication on a connection to an Oracle database.
1. If you have not yet done so, create the necessary database users.
2. On the Oracle database, provide CONNECT THROUGH privileges. For example: SQL> ALTER USER connectionuser GRANT CONNECT THROUGH dbuser; where "connectionuser" is the name of the application user to be authenticated and "dbuser" is an Oracle database user.
3. Create a generic or GridLink data source and set the user to the value of dbuser.
4a. To use WLS credentials, create an entry in the credential map that maps the value of wlsuser to the value of dbuser, as described earlier.
4b. To use database credentials, enable "Use Database Credentials", as described earlier.
5. Enable Oracle Proxy Authentication; see "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help.
6. Log on to a WebLogic Server instance using the value of wlsuser or dbuser.
7. Get a connection using getConnection(username, password).
The credentials are based on either the WebLogic user that is mapped to a database user or the database user directly, based on the "use database credentials" setting. You can see the current user and proxy user by executing: "select user, sys_context('USERENV','PROXY_USER') from DUAL". Note: getConnection fails if "Use Database Credentials" is not enabled and the value of the user/password is not valid for a WebLogic Server user. Conversely, it fails if "Use Database Credentials" is enabled and the value of the user/password is not valid for a database user. A proxy session is opened on the connection based on the user each time a connection request is made on the pool. The proxy session is closed when the connection is returned to the pool. Opening or closing a proxy session has the following impact on JDBC objects.
- Closes any existing statements (including result sets) from the original connection.
- Clears the WebLogic Server statement cache.
- Clears the client identifier, if set.
- The WebLogic Server test statement for a connection is recreated for every proxy session.
These behaviors may impact applications that share a connection across instances and expect some state to be associated with the connection. Oracle proxy session is also implicitly enabled when use-database-credentials is enabled and getConnection(user, password) is called, starting in WLS Release 10.3.6. Remember that this only works when using the Oracle thin driver. To summarize, the definition of oracle-proxy-session is as follows.
- If proxy authentication is enabled and identity-based pooling is also enabled, it is an error.
- If a user is specified on getConnection() and identity-based-connection-pooling-enabled is false, then oracle-proxy-session is treated as true implicitly (it can also be explicitly true).
- If a user is specified on getConnection() and identity-based-connection-pooling-enabled is true, then oracle-proxy-session is treated as false.
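To make the proxy-session flow concrete, here is a small, hedged Java sketch of what application code running on the WebLogic side might look like. The JNDI name jdbc/myDataSource and the password literal are hypothetical; the sketch assumes the data source has been configured with either "Use Database Credentials" or a credential map, plus Oracle proxy authentication, exactly as in the steps above, and it simply echoes the sys_context values quoted in this post.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ProxySessionExample {
    public static void printProxyInfo() throws Exception {
        // Look up the WebLogic data source; the JNDI name is a placeholder.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/myDataSource");

        // With "Use Database Credentials" enabled these are database credentials;
        // otherwise they are WLS credentials mapped through the credential map.
        try (Connection conn = ds.getConnection("connectionuser", "connectionpassword");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "select user, sys_context('USERENV','PROXY_USER') from DUAL")) {
            if (rs.next()) {
                System.out.println("Current user: " + rs.getString(1));
                System.out.println("Proxy user:   " + rs.getString(2));
            }
        } // Returning the connection to the pool closes the proxy session.
    }
}

The point of the sketch is the shape of the call: the user passed to getConnection() selects the proxy session, and the pool closes that session again when the connection is returned.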

    Read the article

  • Can't remove GPT data from MBR

    - by user2373121
I am having difficulty getting the Ubuntu installer (and gparted) to recognize the partitions on my MBR type disk. Other operating systems and disk tools read the disk structure and the files on it fine. I have used fixparts to write a new MBR but the issue persists. I assume the issue stems from the Protective MBR data still present on the disk, but I am at a loss as to how to remove it while preserving my NTFS data partition.
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
c:\Users\mike\Desktop\fixparts>fixparts 3:
FixParts 0.8.8
Loading MBR data from 3:
Warning: 0xEE partition doesn't start on sector 1. This can cause problems in some OSes.
MBR command (? for help):
Running gdisk shows
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
c:\Users\mike\Desktop\fixparts>gdisk 3:
GPT fdisk (gdisk) version 0.8.7
Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present
***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by typing 'q' if you don't want to convert your MBR partitions to GPT format!
***************************************************************
************************************************************************
Most versions of Windows cannot boot from a GPT disk, and most varieties prior to Vista cannot read GPT disks. Therefore, you should exit now unless you understand the implications of converting MBR to GPT or creating a new GPT disk layout!
************************************************************************
Are you SURE you want to continue? (Y/N): y
Command (? for help): p
Disk 3:: 2930277168 sectors, 1.4 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): BFE92CE8-F93D-4141-82B8-816AD06FB36E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 2930277134
Partitions will be aligned on 2048-sector boundaries
Total free space is 163846893 sectors (78.1 GiB)
Number  Start (sector)  End (sector)  Size     Code  Name
   1         163842048     2930272255  1.3 TiB  0700  Microsoft basic data
Command (? for help): r
Recovery/transformation command (? for help): o
Disk size is 2930277168 sectors (1.4 TiB)
MBR disk identifier: 0x00000000
MBR partitions:
Number  Boot  Start Sector  End Sector   Status   Code
   1                     1    2930277167  primary  0xEE
Recovery/transformation command (? for help): q

    Read the article

  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I've demonstrated how this can be parallelized using the Task Parallel Library and imperative programming using imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases special care must still be given for relatively common scenarios. C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ. Before we delve into PLINQ, let's begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When parallelizing a routine, we are typically dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself. LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement. For example, let's revisit our minimum aggregation routine we wrote in Part 4:
double min = double.MaxValue;
foreach(var item in collection)
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
}
Here, we're doing a very simple computation, but writing this in an imperative style. This can be loosely translated to English as:
Create a very large number, and save it in min
Loop through each item in the collection. For every item:
Perform some computation, and save the result
If the computation is less than min, set min to the computation
Although this is fairly easy to follow, it's quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ:
double min = collection.Min(item => item.PerformComputation());
Here, we're after the same information. However, this is written using a declarative programming style.
When we see this code, we'd naturally translate this to English as: Save the Min value of collection, determined via calling item.PerformComputation(). That's it – instead of multiple logical steps, we have one single, declarative request. This makes the developer's intentions very clear, and very easy to follow. The system is free to implement this using whatever method is required. Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations. This is a perfect fit in many cases when you have a problem that can be decomposed by data. To show this, let's again refer to our minimum aggregation routine from Part 4, but this time, let's review our final, parallelized version:
// Safe, and fast!
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();
Parallel.ForEach(
    collection,
    // First, we provide a local state initialization delegate.
    () => double.MaxValue,
    // Next, we supply the body, which takes the original item, loop state,
    // and local state, and returns a new local state
    (item, loopState, localState) =>
    {
        double value = item.PerformComputation();
        return System.Math.Min(localState, value);
    },
    // Finally, we provide an Action<TLocal>, to "merge" results together
    localState =>
    {
        // This requires locking, but it's only once per used thread
        lock(syncObject)
            min = System.Math.Min(min, localState);
    }
);
Here, we're doing the same computation as above, but fully parallelized. Describing this in English becomes quite a feat:
Create a very large number, and save it in min
Create a temporary object we can use for locking
Call Parallel.ForEach, specifying three delegates
For the first delegate: Initialize a local variable to hold the local state to a very large number
For the second delegate: For each item in the collection, perform some computation and save the result. If the result is less than our local state, save the result in local state
For the final delegate: Take a lock on our temporary object to protect our min variable. Save the min of our min and local state variables
Although this solves our problem, and does it in a very efficient way, we've created a set of code that is quite a bit more difficult to understand and maintain. PLINQ provides us with a very nice alternative. In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That's all we need to learn in order to use PLINQ: one single method. We can write our minimum aggregation in PLINQ very simply:
double min = collection.AsParallel().Min(item => item.PerformComputation());
By simply adding ".AsParallel()" to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel! This can be loosely translated into English easily, as well: Process the collection in parallel; get the minimum value, determined by calling PerformComputation on each item. Here, our intention is very clear and easy to understand. We just want to perform the same operation we did in serial, but run it "as parallel". PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available. By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel. This is simple, safe, and incredibly useful.

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >