Search Results

Search found 59142 results on 2366 pages for 'data annotation'.

Page 78 of 2366

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open-platform and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems. Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed, using JDeveloper, Oracle BPM Composer and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site. Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and the appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the named PDF fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset. The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven, while Java coding is limited to simply configuring values and parameters, along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS can be referenced for other normal delivery aspects, such as user management and application deployment.

    Read the article

  • What are the benefits vs costs of comment annotation in PHP?

    - by Patrick
    I have just started working with Symfony2 and have run across comment annotations. Although comment annotation is not an inherent part of PHP, Symfony2 adds support for this feature. My understanding of commenting is that it should make the code more intelligible to the human; the computer shouldn't care what is in comments. What benefits come from doing this type of annotation versus just putting a command in the normal PHP code? e.g.:

    /**
     * @Route("/{id}")
     * @Method("GET")
     * @ParamConverter("post", class="SensioBlogBundle:Post")
     * @Template("SensioBlogBundle:Annot:post.html.twig", vars={"post"})
     * @Cache(smaxage="15")
     */
    public function showAction(Post $post)
    {
    }
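    (A note on what makes these different from plain comments: frameworks read them back as machine-readable metadata, typically via reflection; in PHP's case the docblock text itself is fetched and parsed. As a rough illustration of the same idea in Java, where annotations are a native language feature, here is a minimal, self-contained sketch; the @Route annotation below is hypothetical, not Symfony's:)

    import java.lang.annotation.*;
    import java.lang.reflect.Method;

    // Hypothetical routing annotation, retained at runtime so a framework can read it.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Route {
        String value();
    }

    public class AnnotationDemo {
        @Route("/{id}")
        public void showAction() { }

        public static void main(String[] args) {
            for (Method m : AnnotationDemo.class.getDeclaredMethods()) {
                Route r = m.getAnnotation(Route.class);
                if (r != null) {
                    // The framework sees structured metadata without parsing comment text.
                    System.out.println(m.getName() + " is routed at " + r.value());
                }
            }
        }
    }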

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Video: Analyzing Big Data using Oracle R Enterprise

    - by Sherry LaMonica
    Learn how Oracle R Enterprise is used to generate new insight and new value for the business, answering not only what happened, but why it happened. View this YouTube Oracle Channel video overview describing how analyzing big data using Oracle R Enterprise differs from other analytics tools at Oracle. Oracle R Enterprise (ORE), a component of the Oracle Advanced Analytics Option, couples the wealth of analytics packages in R with the performance, scalability, and security of Oracle Database. ORE executes base R functions transparently on database data without having to pull the data out of Oracle Database. As an embedded component of the database, Oracle R Enterprise can run your R scripts and open source packages via embedded R execution, where the database manages the data served to the R engine and provides user-controlled data parallelism. The result is faster and more secure access to data. ORE also works with the full suite of in-database analytics, providing integrated results to the analyst.

    Read the article

  • Displaying a large amount of data to the client through pagination

    - by ebram tharwat
    I have a web application in which I need to show a large number of records to clients. I'll use pagination, but I was wondering which I should do:

    Load all the data at once: pagination, sorting and searching are then easy, but it takes a long time (up to 9 seconds even against a local DB).

    Or, each time I show a new page (from the pagination), make a new request to the server, and then a new request to the DB, to get the next records. But then what if the client clicks the Prev button? I would be making a new request for data I had already fetched. Should I cache previously loaded data, and if that's a good technique, how?

    So: load all the data once, or make a new request every time I need data that may have been loaded before? I'm using ASP.NET MVC SPA with durandaljs and knockoutjs.
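    (For the second approach, a small server-side page cache covers the "Prev button" case. A minimal sketch in Java; the PageFetcher callback and all names are hypothetical stand-ins for the real query, e.g. SQL with OFFSET/FETCH:)

    import java.util.*;

    // Sketch: server-side paging with a small LRU page cache, so revisiting
    // a recent page ("Prev") does not hit the database again.
    public class PagedRepository<T> {
        public interface PageFetcher<T> {
            List<T> fetch(int pageIndex, int pageSize); // runs the real query
        }

        private final PageFetcher<T> fetcher;
        private final int pageSize;
        private final Map<Integer, List<T>> cache;

        public PagedRepository(PageFetcher<T> fetcher, int pageSize, final int maxCachedPages) {
            this.fetcher = fetcher;
            this.pageSize = pageSize;
            // Access-ordered LinkedHashMap gives us LRU eviction for free.
            this.cache = new LinkedHashMap<Integer, List<T>>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Integer, List<T>> eldest) {
                    return size() > maxCachedPages;
                }
            };
        }

        public List<T> getPage(int pageIndex) {
            // Cached pages return immediately; new pages cost one DB round trip.
            return cache.computeIfAbsent(pageIndex, i -> fetcher.fetch(i, pageSize));
        }
    }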

    Read the article

  • 11g New Feature: Active Data Guard

    - by JaneZhang
    Before Oracle 11g, while a physical standby database was applying redo it could not be opened for queries; it had to remain in the mount state. Starting with 11g, redo apply can run while the standby is open in read-only mode. This capability is called Active Data Guard. With Active Data Guard, the physical standby can serve read-only reporting and query workloads while continuously applying redo shipped from the primary, offloading work from the primary database without weakening data protection. Oracle Active Data Guard is a separately licensed option for Oracle Database Enterprise Edition.

    To use Active Data Guard, open the standby database in read-only mode and then restart redo apply with the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement. There is one prerequisite: the COMPATIBLE initialization parameter must be set to at least 11.0.0. To confirm that Active Data Guard is enabled, query V$DATABASE; the open mode is shown as "READ ONLY WITH APPLY":

      SQL> SELECT open_mode FROM V$DATABASE;

      OPEN_MODE
      --------------------
      READ ONLY WITH APPLY

    For queries on the standby to return up-to-date results, enable real-time apply:

      SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;

    The following operations are allowed on a read-only standby:
    • Issue SELECT statements, including queries that require multiple sorts that leverage TEMP segments
    • Use ALTER SESSION and ALTER SYSTEM statements
    • Use SET ROLE
    • Call stored procedures
    • Use database links (dblinks) to write to remote databases
    • Use stored procedures to call remote procedures via dblinks
    • Use SET TRANSACTION READ ONLY for transaction level read consistency
    • Issue complex queries (such as grouping SET queries and WITH CLAUSE queries)

    The following operations are not allowed on a read-only standby:
    • Any DMLs (excluding simple SELECT statements) or DDLs
    • Queries accessing local sequences
    • DMLs to local temporary tables

    Active Data Guard supports the following configurations:
    • A primary database with a single-instance physical standby
    • An Oracle Real Application Clusters (Oracle RAC) primary with a single-instance standby
    • A RAC primary with a RAC standby

    For the corresponding Oracle Data Guard setup documentation, see:
    * Creating a physical standby database:
      http://docs.oracle.com/cd/B28359_01/server.111/b28294/create_ps.htm
    * Oracle RAC primary with a single-instance standby:
      http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimarysingleinstance-131970.pdf
    * RAC primary with a RAC standby:
      http://www.oracle.com/technetwork/database/features/availability/maa-wp-10g-racprimaryracphysicalsta-131940.pdf

    For more detail on Active Data Guard, see the white paper:
    http://www.oracle.com/technetwork/database/features/availability/maa-wp-11gr1-activedataguard-1-128199.pdf

    For more on Oracle Maximum Availability Architecture best practices, visit:
    http://www.oracle.com/goto/maa

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes, and now I cannot see my hard disk, nor can I start the operating system on my laptop. All my passwords and important files are on the HDD, without any backup. I followed this course of action:

    Changed my hard disk partitions to dynamic just to get a 5th partition. (1st mistake)
    Decreased the partitions to 4 again.
    Backed up the operating system from the 4th to the 3rd partition with Norton Ghost.
    Booted from a live CD for Windows XP.
    Formatted the 4th partition and moved all my important data from the 1st and 2nd partitions to the 4th partition.
    Deleted the 1st and 2nd partitions and created one partition from half of the empty space. So I had just 3 partitions and empty space between the 1st and 2nd partitions.
    Tried to install Windows 8 to the first partition, but it did not allow this because the partition is dynamic. It did not allow installing to the other partitions either.
    Tried to install Windows XP to the 1st partition, but it said that if I continued I could not use the other drives. Therefore I did not install it.
    Booted from the Windows XP live CD, then increased the 1st partition by a little less than 400 MB of empty space. I thought it would therefore be adjacent, but it was shown as 2 partitions. In My Computer I see just 3 drives.
    Using Norton Ghost, I recovered my OS to the 1st partition. (2nd mistake: it was originally on the 4th partition)
    Booted from a Windows XP live CD. I tried to install bcdedit to the Windows XP live CD, but it did not work. Then I tried to install EaseUS Partition Master Home Edition. It installed with errors; then I started it and it showed an error saying there is no hard disk. I looked at my PC and my drives were not there.
    Booted from the Norton Ghost CD, and it did not show me my drives either, although before I was able to see them. I checked the partition numbers shown by the Norton Ghost utility and they still have the same numbers, so I should be able to see my drives, but I cannot see them now.

    My hard disk is now shown as an external dynamic disk, so I cannot see any drive on my PC in the live Windows XP. There are two options: the first is to import the external disk, and the second is to convert the disk to basic. Will they delete my data? I am afraid of booting from CDs like the Windows XP live CD, the Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB of their data onto my data. These recovery tools already exist in the Windows XP live CD from The Ultimate Boot CD for Windows. Can any of them help me? CompuAppa SwissKnife V3, DBXtract, Disk Investigator, Fab's AutoBackup 2.0, FileRecovery, Floppy Repair, Free Undelete, Handy Recovery, Recovery Manager, Restoration, Restoration Help File by UBCD4Win, UnChk, Unstoppable Copier. Finally: How can I make my drives visible again without losing my data? How can I convert my dynamic partitions to basic without losing my data?

    Read the article

  • Pin animatesDrop mapView-iOS

    - by user1724168
    I have implemented the code seen below. I would like to add a pin-drop animation. However, when I type pinView.animatesDrop, it is not recognized. I have not been able to figure out what I am doing wrong.

    - (MKAnnotationView *)mapView:(MKMapView *)mV viewForAnnotation:(id <MKAnnotation>)annotation
    {
        MKAnnotationView *pinView = nil;
        if (![annotation isKindOfClass:[Annotation class]]) // Don't mess with the user location
            return nil;
        static NSString *defaultPinID = @"StandardIdentifier";
        pinView = (MKAnnotationView *)[self.mapView dequeueReusableAnnotationViewWithIdentifier:defaultPinID];
        if (pinView == nil) {
            pinView = [[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:defaultPinID];
        }
        // Build our annotation
        if ([annotation isKindOfClass:[Annotation class]]) {
            Annotation *a = (Annotation *)annotation;
            pinView.image = [ZSPinAnnotation pinAnnotationWithColor:a.color]; // ZSPinAnnotation being used
            pinView.annotation = a;
            pinView.enabled = YES;
            pinView.centerOffset = CGPointMake(6.5, -16);
            pinView.calloutOffset = CGPointMake(-11, 0);
            //pinView.animatesDrop = YES;
        }
        pinView.canShowCallout = YES;
        UIButton *rightButton = [UIButton buttonWithType:UIButtonTypeDetailDisclosure];
        [rightButton setTitle:annotation.title forState:UIControlStateNormal];
        [pinView setRightCalloutAccessoryView:rightButton];
        pinView.leftCalloutAccessoryView = [[UIView alloc] init];
        pinView.leftCalloutAccessoryView = nil;
        /*UIButton *leftButton = [UIButton buttonWithType:UIButtonTypeInfoLight];
        [leftButton setTitle:annotation.title forState:UIControlStateNormal];
        [pinView setLeftCalloutAccessoryView:leftButton];*/
        return pinView;
    }

    Read the article

  • Is there an easier way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following:

    InventoryContext {
        IEnumerable<Computer> GetComputers()
        IEnumerable<Printer> GetPrinters()
    }

    Computer {
        public string ComputerName { get; set; }
        public string Location { get; set; }
    }

    Printer {
        public string PrinterName { get; set; }
        public string Location { get; set; }
    }

    The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that thus far is creating my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ... which is great, but seems like a pretty involved undertaking. The code for the provider would be 4 times longer than my whole data model, just to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source provider from scratch? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?

    Read the article

  • Conceptual data modeling: Is RDF the right tool? Other solutions?

    - by paprika
    I'm planning a system that combines various data sources and lets users do simple queries on these. A part of the system needs to act as an abstraction layer that knows all connected data sources: the user shouldn't [need to] know about the underlying data "providers". A data provider could be anything: a relational DBMS, a bug tracking system, ..., a weather station. They are hooked up to the query system through a common API that defines how to "offer" data. The type of queries a certain data provider understands is given by its "offer" (e.g. I know these entities, I can give you aggregates of type X for relationship Y, ...). My concern right now is the unification of the data: the various data providers need to agree on a common vocabulary (e.g. the name of the entity "customer" could vary across different systems). Thus, defining a high level representation of the entities and their relationships is required. So far I have the following requirements: I need to be able to define objects and their properties/attributes. Further, arbitrary relations between these objects need to be represented: a verb that defines the nature of the relation (e.g. "knows"), the multiplicity (e.g. 1:n) and the direction/navigability of the relation. It occurs to me that RDF is a viable option, but is it "the right tool" for this job? What other solutions/frameworks do exist for semantic data modeling that have a machine readable representation and why are they better suited for this task? I'm grateful for every opinion and pointer to helpful resources.
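    (RDF does fit the requirements as stated: objects with properties, and relations that are themselves named, first-class things. As a rough sketch of what that looks like in practice, here is a minimal example using Apache Jena, one Java RDF toolkit; all URIs below are made up for illustration:)

    import org.apache.jena.rdf.model.*;

    // Sketch: a "customer placed order" relation as RDF triples.
    // The verb of the relation is itself a named, first-class property.
    public class RdfSketch {
        public static void main(String[] args) {
            String vocab = "http://example.org/vocab#";
            Model model = ModelFactory.createDefaultModel();

            Resource customer = model.createResource("http://example.org/crm/customer/42")
                    .addProperty(model.createProperty(vocab, "name"), "Alice");
            Resource order = model.createResource("http://example.org/shop/order/7");

            customer.addProperty(model.createProperty(vocab, "placed"), order);

            // Serialize the triples in Turtle form.
            model.write(System.out, "TURTLE");
        }
    }

    (Multiplicity and navigability would live in the vocabulary definition, e.g. RDFS/OWL, rather than in the instance data itself.)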

    Read the article

  • Bogus InvalidOperationException (in a DataServiceRequestException)

    - by Andrei Rinea
    I am having a hard time with ADO.NET Data Services (formerly code-named Astoria): it gives me a bogus exception when I try to insert a new entity from the Silverlight client, while trying the same code in a clean project doesn't. In both cases, however, the data is correctly inserted into the database. Using Fiddler (an HTTP debugger) I could see that there is no problem in the HTTP communication, as I will show later in this question. The code:

    var ctx = new MyProject123Entities(new Uri("http://andreiri/MyProject.Data/Data.svc"));
    var i = new Zone() { Data = DateTime.Now, IdElement = 1 };
    ctx.AddToZone(i);
    i.StareZone = new StareZone() { IdStareZone = 1 };
    ctx.AttachTo("StareZone", i.StareZone);
    ctx.SetLink(i, "StareZone", i.StareZone);
    i.TipZone = new TipZone() { IdTipZone = 1 };
    ctx.AttachTo("TipZone", i.TipZone);
    ctx.SetLink(i, "TipZone", i.TipZone);
    i.User = new User() { IdUser = 2 };
    ctx.AttachTo("User", i.User);
    ctx.SetLink(i, "User", i.User);
    ctx.BeginSaveChanges(r => ctx.EndSaveChanges(r), null);

    When run, the last line (ctx.EndSaveChanges(r)) throws the following exception:

    System.Data.Services.Client.DataServiceRequestException was unhandled by user code
    Message="An error occurred while processing this request."
    StackTrace:
       at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleBatchResponse()
       at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.EndRequest()
       at System.Data.Services.Client.DataServiceContext.EndSaveChanges(IAsyncResult asyncResult)
       at MyProject.MainPage.<>c__DisplayClassd6.<>c__DisplayClassd8.<dashboard_PostZoneCurent>b__d5(IAsyncResult r)
       at System.Data.Services.Client.BaseAsyncResult.HandleCompleted()
       at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.HandleCompleted(PerRequest pereq)
       at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndRead(IAsyncResult asyncResult)
       at System.IO.Stream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
       at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.AsyncEndGetResponse(IAsyncResult asyncResult)
    InnerException: System.InvalidOperationException
       Message="The context is already tracking a different entity with the same resource Uri."
       StackTrace:
          at System.Data.Services.Client.DataServiceContext.AttachTo(Uri identity, Uri editLink, String etag, Object entity, Boolean fail)
          at System.Data.Services.Client.MaterializeAtom.MoveNext()
          at System.Data.Services.Client.DataServiceContext.HandleResponsePost(ResourceBox entry, MaterializeAtom materializer, Uri editLink, String etag)
          at System.Data.Services.Client.DataServiceContext.SaveAsyncResult.<HandleBatchResponse>d__1d.MoveNext()
       InnerException: (there is no further information regarding the exception, although the ADO.NET Data Service is configured to return detailed information)

    However, the row is inserted correctly and completely into the database. Using Fiddler, I can see that the request:

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
      <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" term="MyProject123Model.Zone" />
      <title />
      <updated>2009-09-11T13:36:46.917157Z</updated>
      <author>
        <name />
      </author>
      <id />
      <link href="http://andreiri/MyProject.Data/Data.svc/StareZone(1)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" />
      <link href="http://andreiri/MyProject.Data/Data.svc/TipZone(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" />
      <link href="http://andreiri/MyProject.Data/Data.svc/User(4)" rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" />
      <content type="application/xml">
        <m:properties>
          <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data>
          <d:Detalii>aslkdfjasldkfj</d:Detalii>
          <d:IdElement m:type="Edm.Int32">1</d:IdElement>
          <d:IdZone m:type="Edm.Int32">0</d:IdZone>
          <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post>
          <d:X_Repost m:type="Edm.Decimal" m:null="true" />
          <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post>
          <d:Y_Repost m:type="Edm.Decimal" m:null="true" />
        </m:properties>
      </content>
    </entry>

    is well accepted, and a successful response is returned:

    HTTP/1.1 201 Created
    Date: Fri, 11 Sep 2009 13:36:47 GMT
    Server: Microsoft-IIS/6.0
    X-Powered-By: ASP.NET
    X-AspNet-Version: 2.0.50727
    DataServiceVersion: 1.0;
    Location: http://andreiri/MyProject.Data/Data.svc/Zone(75)
    Cache-Control: no-cache
    Content-Type: application/atom+xml;charset=utf-8
    Content-Length: 2213

    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <entry xml:base="http://andreiri/MyProject.Data/Data.svc/" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom">
      <id>http://andreiri/MyProject.Data/Data.svc/Zone(75)</id>
      <title type="text"></title>
      <updated>2009-09-11T13:36:47Z</updated>
      <author>
        <name />
      </author>
      <link rel="edit" title="Zone" href="Zone(75)" />
      <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/CenterZone" type="application/atom+xml;type=feed" title="CenterZone" href="Zone(75)/CenterZone" />
      <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/ZoneMobil" type="application/atom+xml;type=feed" title="ZoneMobil" href="Zone(75)/ZoneMobil" />
      <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/StareZone" type="application/atom+xml;type=entry" title="StareZone" href="Zone(75)/StareZone" />
      <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/TipZone" type="application/atom+xml;type=entry" title="TipZone" href="Zone(75)/TipZone" />
      <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/User" type="application/atom+xml;type=entry" title="User" href="Zone(75)/User" />
      <category term="MyProject123Model.Zone" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" />
      <content type="application/xml">
        <m:properties>
          <d:IdZone m:type="Edm.Int32">75</d:IdZone>
          <d:X_Post m:type="Edm.Decimal">587647.4705</d:X_Post>
          <d:Y_Post m:type="Edm.Decimal">325783.077599999</d:Y_Post>
          <d:X_Repost m:type="Edm.Decimal" m:null="true" />
          <d:Y_Repost m:type="Edm.Decimal" m:null="true" />
          <d:Data m:type="Edm.DateTime">2009-09-11T16:36:40.588951+03:00</d:Data>
          <d:Detalii>aslkdfjasldkfj</d:Detalii>
          <d:IdElement m:type="Edm.Int32">1</d:IdElement>
        </m:properties>
      </content>
    </entry>

    Why do I get an exception? And why does the same code in a clean project not throw the exception?

    Read the article

  • JavaFX TableView: get selected data from an ObservableList

    - by user3717821
    I am working on a JavaFX project and I need your help. When I try to get the selected data from the table, I can get the value of a normal cell, but I cannot get the value from the ObservableList-backed (ChoiceBox) column inside the TableView. Code for my database:

    -- phpMyAdmin SQL Dump
    -- version 4.0.4
    -- http://www.phpmyadmin.net
    --
    -- Host: localhost
    -- Generation Time: Jun 10, 2014 at 06:20 AM
    -- Server version: 5.1.33-community
    -- PHP Version: 5.4.12

    SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
    SET time_zone = "+00:00";

    /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
    /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
    /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
    /*!40101 SET NAMES utf8 */;

    --
    -- Database: `test`
    --

    -- --------------------------------------------------------

    --
    -- Table structure for table `customer`
    --

    CREATE TABLE IF NOT EXISTS `customer` (
      `col0` int(11) NOT NULL,
      `col1` varchar(255) DEFAULT NULL,
      `col2` int(11) DEFAULT NULL,
      PRIMARY KEY (`col0`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    --
    -- Dumping data for table `customer`
    --

    INSERT INTO `customer` (`col0`, `col1`, `col2`) VALUES
    (12, 'adasdasd', 231),
    (22, 'adasdasd', 231),
    (212, 'adasdasd', 231);

    /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
    /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
    /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;

    My JavaFX code:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Map;
    import javafx.application.Application;
    import javafx.beans.property.SimpleStringProperty;
    import javafx.beans.value.ChangeListener;
    import javafx.beans.value.ObservableValue;
    import javafx.collections.FXCollections;
    import javafx.collections.ObservableList;
    import javafx.event.ActionEvent;
    import javafx.event.EventHandler;
    import javafx.scene.Scene;
    import javafx.scene.control.Button;
    import javafx.scene.control.TableCell;
    import javafx.scene.control.TableColumn;
    import javafx.scene.control.TableColumn.CellDataFeatures;
    import javafx.scene.control.TablePosition;
    import javafx.scene.control.TableView;
    import javafx.scene.control.TableView.TableViewSelectionModel;
    import javafx.scene.control.cell.ChoiceBoxTableCell;
    import javafx.scene.control.cell.TextFieldTableCell;
    import javafx.scene.layout.BorderPane;
    import javafx.stage.Stage;
    import javafx.util.Callback;
    import javafx.util.StringConverter;

    class DBConnector {
        private static Connection conn;
        private static String url = "jdbc:mysql://localhost/test";
        private static String user = "root";
        private static String pass = "root";

        public static Connection connect() throws SQLException {
            try {
                Class.forName("com.mysql.jdbc.Driver").newInstance();
            } catch (ClassNotFoundException cnfe) {
                System.err.println("Error: " + cnfe.getMessage());
            } catch (InstantiationException ie) {
                System.err.println("Error: " + ie.getMessage());
            } catch (IllegalAccessException iae) {
                System.err.println("Error: " + iae.getMessage());
            }
            conn = DriverManager.getConnection(url, user, pass);
            return conn;
        }

        public static Connection getConnection() throws SQLException, ClassNotFoundException {
            if (conn != null && !conn.isClosed())
                return conn;
            connect();
            return conn;
        }
    }

    public class DynamicTable extends Application {
        Object newValue;

        // TABLE VIEW AND DATA
        private ObservableList<ObservableList> data;
        private TableView<ObservableList> tableview;

        // MAIN EXECUTOR
        public static void main(String[] args) {
            launch(args);
        }

        // CONNECTION DATABASE
        public void buildData() {
            tableview.setEditable(true);
            Callback<TableColumn<Map, String>, TableCell<Map, String>> cellFactoryForMap =
                    new Callback<TableColumn<Map, String>, TableCell<Map, String>>() {
                @Override
                public TableCell call(TableColumn p) {
                    return new TextFieldTableCell(new StringConverter() {
                        @Override
                        public String toString(Object t) {
                            return t.toString();
                        }

                        @Override
                        public Object fromString(String string) {
                            return string;
                        }
                    });
                }
            };
            Connection c;
            data = FXCollections.observableArrayList();
            try {
                c = DBConnector.connect();
                // SQL FOR SELECTING ALL OF CUSTOMER
                String SQL = "SELECT * from CUSTOMer";
                // ResultSet
                ResultSet rs = c.createStatement().executeQuery(SQL);

                /**********************************
                 * TABLE COLUMN ADDED DYNAMICALLY *
                 **********************************/
                for (int i = 0; i < rs.getMetaData().getColumnCount(); i++) {
                    // We are using non property style for making dynamic table
                    final int j = i;
                    TableColumn col = new TableColumn(rs.getMetaData().getColumnName(i + 1));
                    if (j == 1) {
                        final ObservableList<String> logLevelList = FXCollections.observableArrayList(
                                "FATAL", "ERROR", "WARN", "INFO", "INOUT", "DEBUG");
                        col.setCellFactory(ChoiceBoxTableCell.forTableColumn(logLevelList));
                        tableview.getColumns().addAll(col);
                    } else {
                        col.setCellValueFactory(new Callback<CellDataFeatures<ObservableList, String>, ObservableValue<String>>() {
                            public ObservableValue<String> call(CellDataFeatures<ObservableList, String> param) {
                                return new SimpleStringProperty(param.getValue().get(j).toString());
                            }
                        });
                        tableview.getColumns().addAll(col);
                    }
                    if (j != 1)
                        col.setCellFactory(cellFactoryForMap);
                    System.out.println("Column [" + i + "] ");
                }

                /********************************
                 * Data added to ObservableList *
                 ********************************/
                while (rs.next()) {
                    // Iterate Row
                    ObservableList<String> row = FXCollections.observableArrayList();
                    for (int i = 1; i <= rs.getMetaData().getColumnCount(); i++) {
                        // Iterate Column
                        row.add(rs.getString(i));
                    }
                    System.out.println("Row [1] added " + row);
                    data.add(row);
                }
                // FINALLY ADDED TO TableView
                tableview.setItems(data);
            } catch (Exception e) {
                e.printStackTrace();
                System.out.println("Error on Building Data");
            }
        }

        @Override
        public void start(Stage stage) throws Exception {
            // TableView
            Button showDataButton = new Button("Add");
            showDataButton.setOnAction(new EventHandler<ActionEvent>() {
                public void handle(ActionEvent event) {
                    ObservableList<String> row = FXCollections.observableArrayList();
                    for (int i = 1; i <= 3; i++) {
                        // Iterate Column
                        row.add("asdasd");
                    }
                    data.add(row);
                    // FINALLY ADDED TO TableView
                    tableview.setItems(data);
                }
            });

            tableview = new TableView();
            buildData();

            // Main Scene
            BorderPane root = new BorderPane();
            root.setCenter(tableview);
            root.setBottom(showDataButton);
            Scene scene = new Scene(root, 500, 500);
            stage.setScene(scene);
            stage.show();

            tableview.getSelectionModel().selectedItemProperty().addListener(new ChangeListener() {
                @Override
                public void changed(ObservableValue observableValue, Object oldValue, Object newValue) {
                    // Check whether an item is selected and set the value of the selected item to the Label
                    if (tableview.getSelectionModel().getSelectedItem() != null) {
                        TableViewSelectionModel selectionModel = tableview.getSelectionModel();
                        ObservableList selectedCells = selectionModel.getSelectedCells();
                        TablePosition tablePosition = (TablePosition) selectedCells.get(0);
                        Object val = tablePosition.getTableColumn().getCellData(newValue);
                        System.out.println("Selected Value " + val);
                        System.out.println("Selected row " + newValue);
                    }
                }
            });
        }
    }

    Please help me.
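    (One thing worth noting about the listener above: since each row in this design is itself an ObservableList, the selected row's values can also be read directly by column index, rather than going through the TablePosition. A minimal sketch, assuming the same tableview field as in the question:)

    // Each row is an ObservableList, so the selected row's values can be read by index.
    ObservableList row = tableview.getSelectionModel().getSelectedItem();
    if (row != null) {
        Object choiceBoxValue = row.get(1); // column 1 is the ChoiceBox-backed column
        System.out.println("ChoiceBox column value: " + choiceBoxValue);
    }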

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

    alert.Alert
    alert.Alert_Cleared
    alert.Alert_Comment
    alert.Alert_Severity
    alert.Alert_Type

    In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it.

    SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
    FROM alert.Alert
    ORDER BY AlertId DESC;

    AlertId | AlertType | TargetObject | Read | SubType
    65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
    65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
    65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    …

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now let’s look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let’s have a look at what’s in this table:

    SELECT AlertType, Event, Monitoring, Name, Description
    FROM alert.Alert_Type
    ORDER BY AlertType;

    AlertType | Event | Monitoring | Name | Description
    1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration
    2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
    3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
    4 | 1 | 0 | Deadlock | SQL deadlock occurs.
    5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration
    6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
    7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
    8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
    9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
    10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
    11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
    12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
    14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
    15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
    16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
    17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
    18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for the dbi_dbccLastKnownGood field), or the last check is older than a specified time.
    19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
    24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
    25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
    27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
    28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
    29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
    30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
    31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
    33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
    34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
    35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server’s trace flag cannot be enabled.
    36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
    37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
    38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
    39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
    40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
    41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertType column is the primary key, and is referenced by AlertType in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

    SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
    FROM alert.Alert a
    JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    ORDER BY AlertId DESC;

    AlertId | Name | TargetObject | Read | SubType
    65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
    65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
    65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

    Cluster: Name = "granger"
    SqlServer: Name = "" (an empty string, denoting the default instance)
    Database: Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:

    Cluster: "7:Cluster,1,4:Name,s7:granger,"
    SqlServer: "9:SqlServer,1,4:Name,s0:,"
    Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

    "7:Cluster," – This identifies the level in the hierarchy.
    "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
    "4:Name,s7:granger," – This represents the Name property and its corresponding value, granger. It’s split up like this:
      "4:Name," – Indicates the name of the key property.
      "s" – Indicates the type of the key property; in this case, it’s a string.
      "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

    "7" – This is the length of the string "Cluster".
    ":" – This is a delimiter between the length of the string and the actual string’s contents.
    "Cluster" – This is the string itself. 7 characters.
    "," – This is a final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

    "I" – Denotes a bigint value. For example, "I65432,".
    "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
    "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in:

    SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
    FROM alert.Alert a
    JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    WHERE AlertId = 65769;

    AlertId | AlertType | Name | TargetObject | Read | SubType
    65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
    I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

    SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
    FROM alert.Alert a
    JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
    WHERE AlertId = 65769;

    AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
    65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2

    The TargetObject value in this case breaks down like this:

    "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
    "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
    "8:Database,1,4:Name,s6:master," – Database named "master".
    "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
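    (Aside: since the encoding is completely described above, it is straightforward to mechanise. Here is a minimal illustrative parser in Java, written purely from the description in this post rather than taken from SQL Monitor’s own code, so treat it as a sketch:)

    // Sketch of a parser for the TargetObject serialization format described above.
    // It walks the string with a cursor, reading length-prefixed tokens.
    public class TargetObjectParser {
        private final String s;
        private int pos = 0;

        public TargetObjectParser(String s) { this.s = s; }

        // Reads one "<length>:<chars>," token and returns the chars.
        private String readString() {
            int colon = s.indexOf(':', pos);
            int len = Integer.parseInt(s.substring(pos, colon));
            String value = s.substring(colon + 1, colon + 1 + len);
            pos = colon + 1 + len + 1; // skip the trailing comma
            return value;
        }

        // Reads a token terminated by ',' with no length prefix (counts, typed values).
        private String readToken() {
            int comma = s.indexOf(',', pos);
            String value = s.substring(pos, comma);
            pos = comma + 1;
            return value;
        }

        public void parse() {
            while (pos < s.length()) {
                String level = readString();                  // e.g. "Cluster"
                int keyCount = Integer.parseInt(readToken()); // e.g. "1"
                StringBuilder keys = new StringBuilder();
                for (int i = 0; i < keyCount; i++) {
                    String keyName = readString();            // e.g. "Name"
                    char type = s.charAt(pos++);              // 's', 'I', 'g' or 'd'
                    String value = (type == 's') ? readString() : readToken();
                    keys.append(keyName).append('=').append(value).append(' ');
                }
                System.out.println(level + ": " + keys.toString().trim());
            }
        }

        public static void main(String[] args) {
            new TargetObjectParser(
                "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,"
                + "8:Database,1,4:Name,s14:SqlMonitorData,").parse();
        }
    }

    (Run against the Backup overdue example above, it prints one line per hierarchy level, e.g. "Cluster: Name=granger".)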

    Read the article

  • Why are "Algorithms" and "Data Structures" treated as separate disciplines?

    - by Pavel Shved
    This question was the last straw, and I've been wondering about it for a long time: why do people think of "Algorithms" and "Data Structures" as things that can be separated from each other? I see a lot of evidence that they're separated in programmers' minds:

    they request "Data Structures & Algorithms" books
    they refer to "Data Structures" and "Algorithms" as separate university courses
    they "know Algorithms", but are "weak in Data Structures" (can't find the link, sorry)
    etc.

    In my opinion, "Data Structures" are algorithms, since the concept of a "Data Structure" is about the algorithms that operate on the data going into and out of the structure. But this opinion does not seem to be mainstream. What am I missing?

    Read the article

  • Writing annotation schemas for Callisto

    - by Ken Bloom
    Does anybody know where I can find documentation on how to write annotation schemas for Callisto? I'm looking to write something a little more complicated than I can generate from a DTD -- that only gives me the ability to tag different kinds of text mentions. I'm looking to create a schema that represents a single type of relationship between five or six different kinds of textual mentions (and some of these types of mentions have attributes that I need to assign values to), and possibly having a second type of relationship between the first two instances of the first type of relationship. (Alternatively, does anybody know of any software that would be better for this kind of schema? I've been looking at WordFreak, but it's a little clumsy, and it doesn't support attributes on its textual mentions.)

    Read the article

  • What are the lesser-known but cool data structures?

    - by f3lix
    There are some data structures around that are really cool but are unknown to most programmers. Which are they? Everybody knows linked lists, binary trees, and hashes, but what about skip lists or Bloom filters, for example? I would like to know about more data structures that are not so common but are worth knowing, because they rely on great ideas and enrich a programmer's tool box. PS: I am also interested in techniques like dancing links, which make interesting use of the properties of a common data structure. EDIT: Please try to include links to pages describing the data structures in more detail. Also, try to add a couple of words on why a data structure is cool (as Jonas Kölker already pointed out). Also, try to provide one data structure per answer. This will allow the better data structures to float to the top based on their votes alone.
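    (As a taster of the kind of structure being asked about, here is a minimal Bloom filter sketch in Java: a bit array plus k derived hash probes. All parameters are illustrative:)

    import java.util.BitSet;

    // Minimal Bloom filter sketch: k hash probes into a bit array.
    // False positives are possible; false negatives are not.
    public class BloomFilter {
        private final BitSet bits;
        private final int size;
        private final int hashes;

        public BloomFilter(int size, int hashes) {
            this.bits = new BitSet(size);
            this.size = size;
            this.hashes = hashes;
        }

        // Derive the i-th probe position from two base hash values
        // (the double-hashing trick: h1 + i * h2).
        private int probe(Object item, int i) {
            int h1 = item.hashCode();
            int h2 = Integer.rotateLeft(h1, 16) ^ 0x9E3779B9; // cheap second hash
            return Math.floorMod(h1 + i * h2, size);
        }

        public void add(Object item) {
            for (int i = 0; i < hashes; i++) bits.set(probe(item, i));
        }

        public boolean mightContain(Object item) {
            for (int i = 0; i < hashes; i++) {
                if (!bits.get(probe(item, i))) return false; // definitely absent
            }
            return true; // possibly present
        }

        public static void main(String[] args) {
            BloomFilter f = new BloomFilter(1 << 16, 4);
            f.add("skip list");
            System.out.println(f.mightContain("skip list")); // true
            System.out.println(f.mightContain("b-tree"));    // false (almost surely)
        }
    }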

    Read the article

  • How to handle JPA annotations for a pointer to a generic interface

    - by HDave
    I have a generic class that is also a mapped superclass, with a private field that holds a pointer to another object of the same type:

    @MappedSuperclass
    public abstract class MyClass<T extends MyIfc<T>> implements MyIfc<T> {
        @OneToOne()
        @JoinColumn(name = "previous", nullable = true)
        private T previous;
        ...
    }

    My problem is that Eclipse is showing an error in the file at the @OneToOne: "Target Entity "T" for previous is not an Entity." All of the implementations of MyIfc are, in fact, entities. I should also add that each concrete implementation that inherits from MyClass uses a different value for T (because T is the concrete type itself), so I can't use the "targetEntity" attribute. I guess if there is no answer then I'll have to move this JPA annotation to all the concrete subclasses of MyClass. It just seems like JPA/Hibernate should be smart enough to know it'll all work out at run-time. Makes me wonder if I should just ignore this error somehow.
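    (For context, a sketch of why this does work out at run time: each concrete entity binds T to itself, so the inherited previous field has an ordinary entity type once the hierarchy is closed. Milestone below is a hypothetical subclass, not from the question:)

    import javax.persistence.Entity;

    // Hypothetical concrete entity: T is bound to Milestone itself, so the
    // inherited "previous" field is of type Milestone at run time.
    @Entity
    public class Milestone extends MyClass<Milestone> {
        // key properties and MyIfc<Milestone> method implementations go here
    }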

    Read the article

  • Is there a standard practice for storing default application data?

    - by Rox Wen
    Our application includes a default set of data. The default data includes coefficients and other factors that are unlikely to ever change but still need to be updatable by the user. Currently, the original default data is stored as a populated class within the application, and data updates are stored to an external XML file. This design allows us to include a "reset" feature to restore the original default data. Our rationale for not storing the defaults externally [e.g. in an XML file] was to minimize the risk of their being altered. The overall volume of data doesn't warrant a database. Is there a standard practice for storing "default" application data?
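    (One widely used pattern matching this description: immutable defaults shipped inside the application bundle, user overrides in an external file, and "reset" implemented by discarding the overrides. A minimal sketch in Java, with properties files standing in for the XML described above; file names are made up:)

    import java.io.*;
    import java.util.Properties;

    // Sketch: read-only defaults bundled in the jar, overrides in an external file.
    public class DefaultData {
        private static final String DEFAULTS_RESOURCE = "/defaults.properties"; // bundled, read-only
        private final File overrides; // e.g. in the user's app-data directory

        public DefaultData(File overrides) { this.overrides = overrides; }

        public Properties load() throws IOException {
            Properties defaults = new Properties();
            try (InputStream in = DefaultData.class.getResourceAsStream(DEFAULTS_RESOURCE)) {
                if (in != null) defaults.load(in);
            }
            Properties effective = new Properties(defaults); // defaults act as fallback
            if (overrides.exists()) {
                try (InputStream in = new FileInputStream(overrides)) {
                    effective.load(in);
                }
            }
            return effective;
        }

        public void reset() {
            overrides.delete(); // the original defaults reappear automatically
        }
    }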

    Read the article

  • Can a variable like 'int' be considered a primitive/fundamental data structure?

    - by Ravi Gupta
    A rough definition of a data structure is that it allows you to store data and apply a set of operations on that data while preserving the consistency of the data before and after each operation. However, some people insist that a primitive variable like 'int' can also be considered a data structure. I get the part where it allows you to store data, but I guess the operations part is missing. Primitive variables don't have operations attached to them, so I feel that unless you have a set of operations defined and attached to it, you cannot call it a data structure. 'int' doesn't have any operations attached to it; it can only be operated upon with a set of generic operators. Please advise if I got something wrong here.
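    (To make the distinction being drawn here concrete: wrapping an int in a tiny abstract data type gives it exactly the "operations plus preserved invariant" character the definition asks for. A minimal Java sketch, where the invariant is that the value never goes below zero:)

    // Sketch: an int on its own has no attached operations or invariant;
    // this wrapper attaches both, which is what makes it a data structure.
    public final class NonNegativeCounter {
        private int value; // the underlying primitive

        public void increment() { value++; }

        public void decrement() {
            if (value == 0) {
                throw new IllegalStateException("invariant violated: counter below zero");
            }
            value--;
        }

        public int get() { return value; }
    }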

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by milomir.vojvodic
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK). For Oracle Partners across Europe, the Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI- or Data Integration-specialized partners: executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special-interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops. Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include: The Fusion Middleware Stack: Engineered to work together; A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined; In-Memory Analytics for Extreme Insight; Latest Product Development Roadmap for Data Integration and Analytics. Venue: Oracle's London City Moorgate offices. Places are limited; register from this link. Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses. Event schedule: during this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion. Nov. 12th, Day 1: main plenary session (full day, starting 10.30 am), with an Oracle-hosted dinner in the evening. Nov. 13th onwards: Architecture Masterclass: IM Reference Architecture – Big Data and Little Data combined (1 day); BI-Apps Bootcamp (4 days); Oracle GoldenGate workshop (1 day); Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day). For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected] and Milomir Vojvodic at [email protected]

    Read the article

  • What are the data transfer speeds within the disk and to other devices?

    - by Kumar
    I use Debian 6 on an HP EliteBook 6930 with 2 GB of RAM. I was copying two AVI files, 1.5 GB in total, and noticed that the copy ran at a rate of 4 MB/sec. When I copy the same AVIs to my Western Digital Passport 25G USB plug-in drive, the data transfer speed is 11+ MB/sec. This speed differs depending on which USB port I plug the drive into. Interestingly, at work I downloaded a 16 MB IE8 exe on my virtual XP, run inside Oracle Sun VirtualBox, and it was downloaded AND saved within a second. We have a high-speed network at work. :-) Why and how is this difference in data speeds possible?

    Read the article

  • What's the best way of handling permissions for apache2's user www-data in /var/www?

    - by gyaresu
    Has anyone got a nice solution for handling files in /var/www? We're running name-based virtual hosts, and the apache2 user is 'www-data'. We've got two regular users and root. So when messing with files in /var/www, rather than having to run... chown -R www-data:www-data ...all the time, what's a good way of handling this? Supplementary question: how hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Cheers.

    Read the article

  • LVM vs RAID0 vs RAID "linear" - Combine 2 disks as one, data recovery?

    - by leto
    Hi there. Given two 2TB USB external disks that have to be combined into one 4TB volume and formatted with one big filesystem (XFS), I have a small question to ask. Does LVM provide better data recovery should one disk be unplugged or damaged, in that the data on the still-working disk can be recovered, or is everything lost? I would appreciate a solution where only the data of one disk is lost and I can recover the contents of the other with the usual filesystem/LVM/RAID tools. Is that possible with LVM or RAID "linear"? This is for storing unimportant files that can be retrieved from backup, but I want to save time :) Thank you in advance.

    Read the article

  • [ASP.NET 4.0] Persisting Row Selection in Data Controls

    - by HosamKamel
    Data control selection in ASP.NET 2.0: the row selection feature of the ASP.NET data controls was based on the row index (within the current page). This of course produces an issue: if you select an item on the first page and then navigate to the second page without selecting any record, you will find the row with the same index selected on the second page! In the sample application attached: Select the second row in the books GridView. Navigate to the second page without making any selection. You will find the second row on the second page selected. Persisting row selection is a new feature that replaces the old index-based selection mechanism with one based on the row data key instead. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. Data control selection in ASP.NET 3.5 SP1: persisted row selection was initially supported only in Dynamic Data projects. Data control selection in ASP.NET 4.0: persisted selection is now supported for the GridView and ListView controls in all projects. You can enable this feature by setting the EnablePersistedSelection property, as shown below. An important thing to note: once you enable this feature, you must also set the DataKeyNames property, because, as discussed, the whole approach is based on the row data key. This is a simple feature, but it is a much more natural behavior than the behavior in earlier versions of ASP.NET. Download Demo Project
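    (The original post showed the markup at this point. A minimal sketch of the relevant GridView attributes; the control ID and key column name are made up for illustration:)

    <asp:GridView ID="BooksGrid" runat="server"
        EnablePersistedSelection="true"
        DataKeyNames="BookId">
        ...
    </asp:GridView>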

    Read the article

  • Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications

    - by kimberly.billings
    To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle recently introduced the Oracle Communications Data Model (OCDM). With the OCDM, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the OCDM, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Hong Kong Broadband Network, the fastest growing and second largest broadband service provider in Hong Kong, enhanced its data warehouse using Oracle Communications Data Model. It went live with OCDM within three months, and has increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Read more about HKBN's use of OCDM. Read more about OCDM

    Read the article
