Search Results

Search found 59060 results on 2363 pages for 'dummy data'.

Page 58/2363

  • OBIEE 11.1.1 - Disable Wrap Data Types in WebLogic Server 10.3.x

    - by Ahmed Awan
    By default, JDBC data type objects are wrapped with a WebLogic wrapper. This enables server-side features such as debugging output and connection usage tracking. The wrapping can be turned off by setting this value to false, which improves performance (in some cases significantly) and allows the application to use the native driver objects directly.
    Tip: How to disable wrapping in the WLS Administration Console. You can use the Administration Console to disable data type wrapping for the following JDBC data sources in the bifoundation_domain domain: bip_datasource, mds-owsm, and EPMSystemRegistry. To disable wrapping for each of these JDBC data sources:
    1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
    2. In the Domain Structure tree, expand Services, then select Data Sources.
    3. On the Summary of Data Sources page, click the data source name, for example "mds-owsm".
    4. Select the Configuration: Connection Pool tab.
    5. Scroll down and click Advanced to show the advanced connection pool options.
    6. In Wrap Data Types, deselect the checkbox to disable wrapping.
    7. Click Save.
    8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
    Important note: this change does not take effect immediately; it requires the server to be restarted.
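    For repeatable changes across several data sources, the same setting can be flipped with a WLST (Jython) script instead of clicking through the console. This is a minimal sketch, not a tested procedure: it assumes the default MBean layout, that your WLS 10.3.x build exposes the WrapTypes attribute on JDBCConnectionPoolParams, and placeholder admin credentials and URL that you would replace with your own.

        # WLST (Jython) sketch: disable data type wrapping for several data sources.
        # Assumes the bifoundation_domain data sources named above; the admin URL
        # and credentials below are placeholders.
        connect('weblogic', 'welcome1', 't3://localhost:7001')
        edit()
        startEdit()
        for dsName in ['bip_datasource', 'mds-owsm', 'EPMSystemRegistry']:
            # JDBCConnectionPoolParams carries the Wrap Data Types flag.
            cd('/JDBCSystemResources/%s/JDBCResource/%s/JDBCConnectionPoolParams/%s'
               % (dsName, dsName, dsName))
            cmo.setWrapTypes(false)
        save()
        activate()
        disconnect()
        # As noted above, a server restart is still required for the change to take effect.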

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction. More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.
    Hive Actions: Prepping for Pig. In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.
    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1 STRING, column2 STRING)
        PARTITIONED BY (yr STRING)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of its long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple (a sketch of such a parser appears at the end of this article). Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2010';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql.
    Starting Our Workflow. Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a few things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode; I have to include a hive-default.xml file; I have to include a script file; and the hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL- or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml, hive-default.xml (make sure this file contains your metastore connection data), and ncdc_parse.hql.
    Adding Pig to the Ooze. Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath: anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.
    Making the Workflow Work. We've got a workflow defined and have collected all the components we'll need to run it.
    But we can't run anything yet, because we still have to define some properties for the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.
    We're finally ready to submit our job! After all that work we only need to do a few more things: validate our workflow.xml, copy our working directory to HDFS, submit our job to the Oozie server, and run our workflow. Let's do them in order. First, validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument:

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker. All we got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.
    Takeaway. So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
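    As promised in the parsing discussion above, here is what a Hive TRANSFORM filter like ncdc_parser.py can look like. Hive streams rows to the script on stdin and expects tab-separated columns on stdout. This is only a sketch: the byte offsets below are hypothetical stand-ins, not the real NCDC layout.

        #!/usr/bin/env python
        # Sketch of a fixed-width parser for use with Hive's TRANSFORM.
        # The offsets are hypothetical placeholders; substitute the real
        # NCDC field widths for your dataset.
        import sys

        FIELDS = [  # (name, start, end) byte offsets -- placeholders only
            ("stn", 0, 6), ("wban", 7, 12),
            ("weather_year", 14, 18), ("weather_month", 18, 20),
            ("weather_day", 20, 22), ("temp", 24, 30),
            ("dewp", 35, 41), ("weather", 102, 108),
        ]

        for line in sys.stdin:
            # Slice each fixed-width field and emit tab-delimited columns.
            print("\t".join(line[start:end].strip() for _n, start, end in FIELDS))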

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
    Data warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when too big is not enough. Every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly this. Here is the abstract from the Microsoft official site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2 Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • First Look - Oracle Data Mining

    - by kimberly.billings
    In his blog, JT on EDM, James Taylor shares his analysis of Oracle Data Mining, including its new GUI and Exadata integration. While Oracle Data Mining has been available for a while, it is now easier to access and try via the Amazon Cloud. Using the Oracle 11gR2 Data Mining Amazon Machine Image (AMI), you can launch an Oracle Data Mining-enabled instance directly through Amazon Web Services (AWS) and connect to it using the Oracle Data Miner graphical user interface. The new Oracle Data Mining GUI, which will be available to beta customers soon, provides more graphics, the ability to define, save and share analytical "work flows" to solve business problems, and provides more automation and simplicity. Taylor comments that, "the UI looks to have a nice look and feel including graphical model development flows, easy access to the data, nice little micro graphs when browsing data records and more." On using Oracle Data Mining with Exadata, Taylor writes, "Oracle says that the use of the ODM routines in the Exadata kernel is faster than running a native ODM model in the database by a factor of 2 and that this increases as more joins are used. This could mean that ODM outperforms even third party in-database analytics." Taylor concludes his blog with a positive overall review, stating that "ODM is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." Read the blog.

    Read the article

  • MMO Data Persistence Question

    - by JasonG
    I wanted to ask a question regarding data persistence strategies for an MMO. I have some experience in the games industry with social synchronous games. At Zynga, we stored static proto data in XML on both the client and the server and stored instance/runtime data in membase. For clarity's sake, proto data for a Potion would be PotionName or MaxCharges, while runtime/instance data would be something like ChargesRemaining. So basically, if a player picks up a potion, the instance is (via prediction) created from XML data on the client, the request gets sent to the server where the instance is created from XML and then added to membase. Is this the same strategy that would be used for something like an MMO? Would it be feasible to have static proto data in some kind of in-memory NoSQL database on both client and server, with instance data being stored on the server in a more enterprise-level database? Or should all data (proto/instance) be stored on the server and the client gets everything from the server? I know a lot of this might depend on certain game requirements; however, I'm basically looking for some general opinions/best practices here if there are any.
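    For readers unfamiliar with the proto/instance split described above, here is a minimal sketch in Python. The names are hypothetical and the XML/membase stack is replaced with plain in-memory structures; the point is only that static proto data is shared, while each instance persists just its mutable state.

        # Minimal sketch of the proto/instance split (hypothetical names).
        # Proto data: static, loaded once, shared by every instance.
        POTION_PROTOS = {
            "healing_potion": {"name": "Healing Potion", "max_charges": 3},
        }

        class PotionInstance:
            """Runtime state only; static fields are looked up via proto_id."""
            def __init__(self, proto_id):
                self.proto_id = proto_id
                proto = POTION_PROTOS[proto_id]
                self.charges_remaining = proto["max_charges"]  # mutable runtime data

            def use(self):
                if self.charges_remaining > 0:
                    self.charges_remaining -= 1

        # Only the small instance record needs server-side persistence;
        # the proto table can ship with both client and server.
        potion = PotionInstance("healing_potion")
        potion.use()
        print(potion.proto_id, potion.charges_remaining)  # healing_potion 2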

    Read the article

  • Podcast Show Notes: The Big Deal About Big Data

    - by Bob Rhubart
    This week the OTN ArchBeat kicks off a three-part series that looks at Big Data: what it is, its effect on enterprise IT, and what architects need to do to stay ahead of the big data curve. My guests for this conversation are Jean-Pierre Dijks and Andrew Bond. Jean-Pierre, based at Oracle HQ in Redwood Shores, CA, is product manager for Oracle Big Data Appliance and Oracle's big data strategy. Andrew Bond is Head of Transformation Architecture for Oracle, where he specializes in Data Warehousing, Business Intelligence, and Big Data. Andrew is based in the UK, but for this conversation he dialed in from a car somewhere on the streets of Amsterdam. Listen to Part 1: What is Big Data, really, and why does it matter? Listen to Part 2 (Oct 10): What new challenges does Big Data present for architects? What do architects need to do to prepare themselves and their environments? Listen to Part 3 (Oct 17): Who is driving the adoption of Big Data strategies in organizations, and why? Additional resources: http://blogs.oracle.com/datawarehousing http://www.facebook.com/pages/OracleBigData https://twitter.com/#!/OracleBigData Coming soon: a conversation about how the rapidly evolving enterprise IT landscape is transforming the roles, responsibilities, and skill requirements for architects and developers. Stay tuned: RSS

    Read the article

  • Sort algorithms that work on large amounts of data

    - by Giorgio
    I am looking for sorting algorithms that can work on a large amount of data, i.e. that can work even when the whole data set cannot be held in main memory at once. The only candidate that I have found up to now is merge sort: you can implement the algorithm in such a way that it scans the data set at each merge without holding all the data in main memory at once. The variation of merge sort I have in mind is described in this article, in the section "Use with tape drives". I think this is a good solution (with complexity O(n log n)), but I am curious to know if there are other (possibly faster) sorting algorithms that can work on large data sets that do not fit in main memory. EDIT: Here are some more details, as required by the answers. The data needs to be sorted periodically, e.g. once a month; I do not need to insert a few records and have the data sorted incrementally. My example text file is about 1 GB of UTF-8 text, but I wanted to solve the problem in general, even if the file were, say, 20 GB. It is not in a database and, due to other constraints, it cannot be. The data is dumped by others as a text file; I have my own code to read this text file. The format of the data is a text file: newline characters are record separators. One possible improvement I had in mind was to split the file into files that are small enough to be sorted in memory, and finally merge all these files using the algorithm I have described above.
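    The improvement described at the end of the question is exactly external merge sort: sort chunks that fit in RAM, then stream a k-way merge over the sorted chunks. A minimal sketch in Python, assuming newline-terminated records (as the question describes) and a chunk size tuned to available memory:

        import heapq
        import os
        import tempfile
        from itertools import islice

        def external_sort(input_path, output_path, max_lines_in_memory=1_000_000):
            """Sort a newline-delimited file larger than RAM via external merge sort."""
            chunk_paths = []
            with open(input_path, encoding="utf-8") as src:
                while True:
                    # Phase 1: read a chunk that fits in memory and sort it.
                    chunk = list(islice(src, max_lines_in_memory))
                    if not chunk:
                        break
                    chunk.sort()
                    fd, path = tempfile.mkstemp(text=True)
                    with os.fdopen(fd, "w", encoding="utf-8") as tmp:
                        tmp.writelines(chunk)
                    chunk_paths.append(path)

            # Phase 2: k-way merge of the sorted chunks, streaming line by line.
            files = [open(p, encoding="utf-8") for p in chunk_paths]
            try:
                with open(output_path, "w", encoding="utf-8") as dst:
                    dst.writelines(heapq.merge(*files))
            finally:
                for f in files:
                    f.close()
                for p in chunk_paths:
                    os.remove(p)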

    Read the article

  • Announcing Oracle Receivables Generic Data Fix (GDF) for Refunds

    - by user793553
    Here's the first of what will be a series of Generic Data Fixes (GDFs) to be released by Receivables Development. Generic Data Fixes are created by development to fix data issues caused by bugs/issues in the application code. Other GDF benefits/features include:
    - Developed for bugs that can cause data issues.
    - Provides a SELECT script that uses an identification/signature query to identify and report all data affected by the issue/condition caused by a bug.
    - Allows customers to view and modify what will be fixed.
    - Provides a separate FIX script to fix the data reported by the SELECT script. The FIX script creates backup tables for the data that is fixed/updated.
    - Available on My Oracle Support for download.
    In Release 12, a refund can be created by either of the following methods: applying a receipt to the Refund activity (which creates an invoice in Payables), or going directly into Payables to create a refund for an open credit memo in Receivables. When the invoice in Payables that is associated with the refund is cancelled, the corresponding refund application or credit memo in Receivables is not properly reinstated. The receipt remains applied to the refund, whereas it should be automatically unapplied; the credit memo stays closed instead of getting reopened. Doc ID 761993.1 includes the patch to make sure this doesn't happen in the future, as well as a GDF script to fix the current data (script name: ar_std_refund_unapp.sql). Download the script and run it in READ_ONLY_MODE to identify 'refund' applications with this problem. Stay tuned for more GDF scripts coming soon...

    Read the article

  • Fetching data (responseBody) with an HttpClient in an AsyncTask and returning the data outside the AsyncTask

    - by Peter Warbo
    Basically I'm wondering how to do what I've written in the topic. I've looked through many tutorials on AsyncTask but I can't get it to work. I have a little form (EditText) that will take what the user inputs there and turn it into a URL query for the application to look up, and then display the results. What I think should work is something like this: In my main activity I have a string called responseBody. When the user clicks on the search button, it will go to my search function and from there call the grabURL method with the URL, which will start the AsyncTask; when that process is finished, the onPostExecute method will use the function activity.this.setResponseBody(content). This is what my code looks like, simplified to the most important parts (I think):

        public class activity extends Activity {
            private String responseBody;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                initControls();
            }

            public void initControls() {
                fieldSearch = (EditText) findViewById(R.id.EditText01);
                buttonSearch = (Button) findViewById(R.id.Button01);
                buttonSearch.setOnClickListener(new Button.OnClickListener() {
                    public void onClick(View v) {
                        search();
                    }
                });
            }

            public void grabURL(String url) {
                new GrabURL().execute(url);
            }

            private class GrabURL extends AsyncTask<String, Void, String> {
                private final HttpClient client = new DefaultHttpClient();
                private String content;
                private boolean error = false;
                private ProgressDialog dialog = new ProgressDialog(activity.this);

                protected void onPreExecute() {
                    dialog.setMessage("Getting your data... Please wait...");
                    dialog.show();
                }

                protected String doInBackground(String... urls) {
                    try {
                        HttpGet httpget = new HttpGet(urls[0]);
                        ResponseHandler<String> responseHandler = new BasicResponseHandler();
                        content = client.execute(httpget, responseHandler);
                    } catch (ClientProtocolException e) {
                        error = true;
                        cancel(true);
                    } catch (IOException e) {
                        error = true;
                        cancel(true);
                    }
                    return content;
                }

                protected void onPostExecute(String content) {
                    dialog.dismiss();
                    if (error) {
                        Toast toast = Toast.makeText(activity.this, getString(R.string.offline), Toast.LENGTH_LONG);
                        toast.setGravity(Gravity.TOP, 0, 75);
                        toast.show();
                    } else {
                        activity.this.setResponseBody(content);
                    }
                }
            }

            public void search() {
                String query = fieldSearch.getText().toString();
                String url = "http://example.com/example.php?query=" + query; // this is just an example url, I have a "real" url in my application but for privacy reasons I've replaced it
                grabURL(url); // the method that will start the AsyncTask
                processData(responseBody); // process the responseBody and display stuff on the ui-thread with the data that I would like to get from the AsyncTask, but it obviously doesn't
            }
        }

    Read the article

  • Defining recursive algebraic data types in XML XSD

    - by Ben Challenor
    Imagine I have a recursive algebraic data type like this (Haskell syntax): data Expr = Zero | One | Add Expr Expr | Mul Expr Expr I'd like to represent this in XML, and I'd like an XSD schema for it. I have figured out how to achieve this syntax: <Expr> <Add> <Expr> <Zero/> </Expr> <Expr> <Mul> <Expr> <One/> </Expr> <Expr> <Add> <Expr> <One/> </Expr> <Expr> <One/> </Expr> </Add> </Expr> </Mul> </Expr> </Add> </Expr> with this schema: <xs:complexType name="Expr"> <xs:choice minOccurs="1" maxOccurs="1"> <xs:element minOccurs="1" maxOccurs="1" name="Zero" type="Zero" /> <xs:element minOccurs="1" maxOccurs="1" name="One" type="One" /> <xs:element minOccurs="1" maxOccurs="1" name="Add" type="Add" /> <xs:element minOccurs="1" maxOccurs="1" name="Mul" type="Mul" /> </xs:choice> </xs:complexType> <xs:complexType name="Zero"> <xs:sequence> </xs:sequence> </xs:complexType> <xs:complexType name="One"> <xs:sequence> </xs:sequence> </xs:complexType> <xs:complexType name="Add"> <xs:sequence> <xs:element minOccurs="2" maxOccurs="2" name="Expr" type="Expr" /> </xs:sequence> </xs:complexType> <xs:complexType name="Mul"> <xs:sequence> <xs:element minOccurs="2" maxOccurs="2" name="Expr" type="Expr" /> </xs:sequence> </xs:complexType> But what I really want is this syntax: <Add> <Zero/> <Mul> <One/> <Add> <One/> <One/> </Add> </Mul> </Add> Is this possible? Thanks!

    Read the article

  • using one data template in another data template in WPF

    - by Sowmya
    Hi, I have two data templates, one of which is the subset of another like below: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:igEditors="http://infragistics.com/Editors" xmlns:sys="clr-namespace:System;assembly=mscorlib" xmlns:controls="clr-namespace:Client.UI.WPF;assembly=Client.UI.WPF" xmlns:d="http://schemas.microsoft.com/expression/blend/2006" > <ResourceDictionary.MergedDictionaries> <ResourceDictionary Source="pack://application:,,,/Client.Resources.WPF.Styles;Component/Styles/CommonStyles.xaml"/> </ResourceDictionary.MergedDictionaries> <DataTemplate x:Key="XYZDataTemplate"> <Grid x:Name="_rootGrid" DataContext="{Binding DataContext}" HorizontalAlignment="Left" VerticalAlignment="Top"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <controls:ValueDisplay Grid.Row="0" Grid.Column="0" LabelText="Build number" x:Name="buildNumber" HorizontalAlignment="Left" VerticalAlignment="Top" Width="120" Margin="5,10,0,0"> <igEditors:XamTextEditor /> </controls:ValueDisplay> <controls:ValueDisplay Grid.Row="0" Grid.Column="1" LabelText="Tool version" x:Name="toolVersion" HorizontalAlignment="Left" VerticalAlignment="Top" Width="120" Margin="20,10,0,0"> <igEditors:XamTextEditor IsReadOnly="True"/> </controls:ValueDisplay> </Grid> </DataTemplate> and the other is like below: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:igEditors="http://infragistics.com/Editors" xmlns:sys="clr-namespace:System;assembly=mscorlib" xmlns:controls="clr-namespace:BHI.ULSS.Client.UI.WPF;assembly=ULSS.Client.UI.WPF" xmlns:d="http://schemas.microsoft.com/expression/blend/2006" > <DataTemplate x:Key="ABCDataTemplate" > <Grid x:Name="_rootGrid" DataContext="{Binding DataContext}" HorizontalAlignment="Left" VerticalAlignment="Top"> <Grid.RowDefinitions> <RowDefinition/> <RowDefinition/> <RowDefinition/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> <controls:ValueDisplay Grid.Row="0" Grid.Column="0" LabelText="Build number" x:Name="buildNumber" HorizontalAlignment="Left" VerticalAlignment="Top" Width="120" Margin="5,10,0,0"> <igEditors:XamTextEditor /> </controls:ValueDisplay> <controls:ValueDisplay Grid.Row="0" Grid.Column="1" LabelText="Tool version" x:Name="toolVersion" HorizontalAlignment="Left" VerticalAlignment="Top" Width="120" Margin="20,10,0,0"> <igEditors:XamTextEditor IsReadOnly="True"/> </controls:ValueDisplay> <controls:ValueDisplay Grid.Row="0" Grid.Column="2" LabelText="Size" ShowUnit="True" x:Name="size" HorizontalAlignment="Left" VerticalAlignment="Top" Width="120" Margin="20,10,0,0"> <igEditors:XamTextEditor/> </controls:ValueDisplay> </Grid> </DataTemplate> XYZDataTemplate is a subset of the ABCDataTemplate as the first two fields in both the data templates are common, so I was wondering if it is possible to replace the redundant code in the ABCDataTemplate with that of the XYZDataTemplate for code maintainability? 
    Could anyone please suggest whether this would be the right approach, and if so, how can I achieve that? Thanks in advance, Sowmya

    Read the article

  • Where should I create my DbCommand instances?

    - by Domenic
    I seemingly have two choices: 1. Make my class implement IDisposable. Create my DbCommand instances as private readonly fields, and in the constructor, add the parameters that they use. Whenever I want to write to the database, bind to these parameters (reusing the same command instances), set the Connection and Transaction properties, then call ExecuteNonQuery. In the Dispose method, call Dispose on each of these fields. 2. Each time I want to write to the database, write using(var cmd = new DbCommand("...", connection, transaction)) around the usage of the command, and add parameters and bind to them every time as well, before calling ExecuteNonQuery. I assume I don't need a new command for each query, just a new command for each time I open the database (right?). Both of these seem somewhat inelegant and possibly incorrect. For #1, it is annoying for my users that this class is now IDisposable just because I have used a few DbCommands (which should be an implementation detail that they don't care about). I am also somewhat suspicious that keeping a DbCommand instance around might inadvertently lock the database or something. For #2, it feels like I'm doing a lot of work (in terms of .NET objects) each time I want to write to the database, especially with the parameter-adding. It seems like I create the same object every time, which just feels like bad practice. For reference, here is my current code, using #1:

        using System;
        using System.Net;
        using System.Data.SQLite;

        public class Class1 : IDisposable
        {
            private readonly SQLiteCommand updateCookie = new SQLiteCommand(
                "UPDATE moz_cookies SET value = @value, expiry = @expiry, isSecure = @isSecure, isHttpOnly = @isHttpOnly WHERE name = @name AND host = @host AND path = @path");

            public Class1()
            {
                this.updateCookie.Parameters.AddRange(new[] {
                    new SQLiteParameter("@name"),
                    new SQLiteParameter("@value"),
                    new SQLiteParameter("@host"),
                    new SQLiteParameter("@path"),
                    new SQLiteParameter("@expiry"),
                    new SQLiteParameter("@isSecure"),
                    new SQLiteParameter("@isHttpOnly")
                });
            }

            private static void BindDbCommandToMozillaCookie(DbCommand command, Cookie cookie)
            {
                long expiresSeconds = (long)cookie.Expires.TotalSeconds;
                command.Parameters["@name"].Value = cookie.Name;
                command.Parameters["@value"].Value = cookie.Value;
                command.Parameters["@host"].Value = cookie.Domain;
                command.Parameters["@path"].Value = cookie.Path;
                command.Parameters["@expiry"].Value = expiresSeconds;
                command.Parameters["@isSecure"].Value = cookie.Secure;
                command.Parameters["@isHttpOnly"].Value = cookie.HttpOnly;
            }

            public void WriteCurrentCookiesToMozillaBasedBrowserSqlite(string databaseFilename)
            {
                using (SQLiteConnection connection = new SQLiteConnection("Data Source=" + databaseFilename))
                {
                    connection.Open();
                    using (SQLiteTransaction transaction = connection.BeginTransaction())
                    {
                        this.updateCookie.Connection = connection;
                        this.updateCookie.Transaction = transaction;
                        foreach (Cookie cookie in SomeOtherClass.GetCookieArray())
                        {
                            Class1.BindDbCommandToMozillaCookie(this.updateCookie, cookie);
                            this.updateCookie.ExecuteNonQuery();
                        }
                        transaction.Commit();
                    }
                }
            }

            #region IDisposable implementation
            protected virtual void Dispose(bool disposing)
            {
                if (!this.disposed && disposing)
                {
                    this.updateCookie.Dispose();
                }
                this.disposed = true;
            }

            public void Dispose()
            {
                this.Dispose(true);
                GC.SuppressFinalize(this);
            }

            ~Class1()
            {
                this.Dispose(false);
            }

            private bool disposed;
            #endregion
        }

    Read the article

  • Broken Multithreading With Core Data

    - by spamguy
    This is a better-focused version of an earlier question that touches upon an entirely different subject from before. I am working on a Cocoa Core Data application with multiple threads. There is a Song and Artist; every Song has an Artist relation. There is a delegate code file not cited here; it more or less looks like the template XCode generates. I am far better working with the former technology than the latter, and any multithreading capability came from a Core Data template. When I'm doing all my ManagedObjectContext work in one method, I am fine. When I put fetch-or-insert-then-return-object work into a separate method, the application halts (but does not crash) at the new method's return statement, seen below. The new method even gets its own MOC to be safe, and it has not helped any. The result is one addition to Song and a halt after generating an Artist. I get no errors or exceptions, and I don't know why. I've debugged out the wazoo. My theory is that any errors occurring are in another thread, and the one I'm watching is waiting on something forever. What did I do wrong with getArtistObject: , and how can I fix it? Thanks. - (void)main { NSInteger songCount = 1; NSManagedObjectContext *moc = [[NSManagedObjectContext alloc] init]; [moc setPersistentStoreCoordinator:[[self delegate] persistentStoreCoordinator]]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(contextDidSave:) name:NSManagedObjectContextDidSaveNotification object:moc]; /* songDict generated here */ for (id key in songDict) { NSManagedObject *song = [NSEntityDescription insertNewObjectForEntityForName:@"Song" inManagedObjectContext:moc]; [song setValue:[songDictItem objectForKey:@"Name"] forKey:@"title"]; [song setValue:[self getArtistObject:(NSString *) [songDictItem objectForKey:@"Artist"]] forKey:@"artist"]; [songDictItem release]; songCount++; } NSError *error; if (![moc save:&error]) [NSApp presentError:error]; [[NSNotificationCenter defaultCenter] removeObserver:self name:NSManagedObjectContextDidSaveNotification object:moc]; [moc release], moc = nil; [[self delegate] importDone]; } - (NSManagedObject*) getArtistObject:(NSString*)theArtist { NSError *error = nil; NSManagedObjectContext *moc = [[NSManagedObjectContext alloc] init]; [moc setPersistentStoreCoordinator:[[self delegate] persistentStoreCoordinator]]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(contextDidSave:) name:NSManagedObjectContextDidSaveNotification object:moc]; NSFetchRequest *fetch = [[[NSFetchRequest alloc] init] autorelease]; NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Artist" inManagedObjectContext:moc]; [fetch setEntity:entityDescription]; // object to be returned NSManagedObject *artistObject = [[NSManagedObject alloc] initWithEntity:entityDescription insertIntoManagedObjectContext:moc]; // set predicate (artist name) NSPredicate *pred = [NSPredicate predicateWithFormat:[NSString stringWithFormat:@"name = \"%@\"", theArtist]]; [fetch setPredicate:pred]; NSArray *response = [moc executeFetchRequest:fetch error:&error]; if (error) [NSApp presentError:error]; if ([response count] == 0) // empty resultset --> no artists with this name { [artistObject setValue:theArtist forKey:@"name"]; NSLog(@"%@ not found. Adding.", theArtist); return artistObject; } else return [response objectAtIndex:0]; } @end

    Read the article

  • TableView doesn't show any data (Core Data) - app crashes

    - by brush51
    Hey all, I can't read my data from my database. I have an app with a tab bar controller. In the first tab, the iPhone camera takes a picture of a barcode and sends the result to another view (CameraReturnDetailViewController). CameraReturnDetailViewController contains the save button, and here is the code for this save button:

        - (IBAction)saveAndQuitScan:(id)sender {
            XLog(@"saveAndQuitScan button was clicked!");
            ProjectQRCodeAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
            NSManagedObjectContext *context = [appDelegate managedObjectContext];
            NSManagedObject *newData;
            newData = [NSEntityDescription insertNewObjectForEntityForName:@"BarcodeDaten"
                                                    inManagedObjectContext:context];
            [newData setValue:dataLabel.text forKey:@"Barcode_CD"];
            NSError *error;
            [context save:&error];
            // Leave the current view (self) with animation
            [self dismissModalViewControllerAnimated:YES];
            // After the view has been dismissed, switch to the second tab (scan history)
            /** TO DO - does not work yet **/
            [self.tabBarController setSelectedIndex:1];
        }

    Now, my aim is to show the data in the second tab, in a TableView (ScansViewController):

        - (void)viewDidLoad {
            [super viewDidLoad];
            if (managedObjectContext_ == nil) {
                managedObjectContext_ = [(ProjectQRCodeAppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext];
                NSLog(@"After managedObjectContext: %@", managedObjectContext_);
            }
            myTableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]
                                                       style:UITableViewStylePlain];
            myTableView.delegate = self;
            myTableView.dataSource = self;
            myTableView.autoresizesSubviews = YES;
            self.navigationItem.title = @"Code Liste";
            self.view = myTableView;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [itemsList count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero
                                               reuseIdentifier:CellIdentifier] autorelease];
            }
            return cell;
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *selectDay = [NSString stringWithFormat:@"%d", indexPath.row];
            TableDetailViewController *fvController = [[TableDetailViewController alloc] initWithNibName:@"TableDetailViewController"
                                                                                                  bundle:[NSBundle mainBundle]];
            fvController.selectDay = selectDay;
            [self.navigationController pushViewController:fvController animated:YES];
            [fvController release];
            fvController = nil;
        }

        - (void)configureCell:(UITableViewCell *)cell atIndexPath:(NSIndexPath *)indexPath {
            NSManagedObject *managedObject = [self.fetchedResultsController objectAtIndexPath:indexPath];
            cell.textLabel.text = [[managedObject valueForKey:@"Barcode_CD"] description];
        }

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController_ != nil) {
                return fetchedResultsController_;
            }
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"BarcodeDaten"
                                                      inManagedObjectContext:self.managedObjectContext];
            [fetchRequest setEntity:entity];
            [fetchRequest setFetchBatchSize:20];
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"Barcode_CD" ascending:NO];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [fetchRequest setSortDescriptors:sortDescriptors];
            NSFetchedResultsController *aFetchedResultsController =
                [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                    managedObjectContext:self.managedObjectContext
                                                      sectionNameKeyPath:nil
                                                               cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;
            [aFetchedResultsController release];
            [fetchRequest release];
            [sortDescriptor release];
            [sortDescriptors release];
            NSError *error = nil;
            if (![fetchedResultsController_ performFetch:&error]) {
                XLog(@"Error: %@, %@", error, [error userInfo]);
                abort();
            }
            return fetchedResultsController_;
        }

    When I choose the second tab (ScansViewController), I get this error: "Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '+entityForName: could not locate an NSManagedObjectModel for entity name 'BarcodeDaten''". The name is correct, but I don't understand my mistake. No data is shown in the TableView. Why? Have I missed something, or done something wrong? Thanks for help, brush51

    Read the article

  • UITableView Core Data reordering

    - by PCWiz
    I know this question has been asked before, and I took a look at the answer to this question. However, I'm still confused as to how to implement reordering with a UITableView in a Core Data project. What I know is that I need to have a "displayOrder" attribute in my entity to track the order of items, and I need to enumerate through all the objects in the fetched results and set their displayOrder attribute. In the given code in the question I linked to, the table view delegate method calls a method like this: [self FF_fetchResults]; the code for that method is not given, so it's hard to tell what exactly it does. Is there any sample code that demonstrates this? That would be simpler to look at than sharing large chunks of code. Thanks

    Read the article

  • How to use "SelectMany" with DataServiceQuery<>

    - by sako73
    I have the following DataServiceQuery running against an ADO.NET Data Service (with the update installed to make it run like .NET 4): DataServiceQuery<Account> q = (_gsc.Users .Where(c => c.UserId == myId) .SelectMany(c => c.ConsumerXref) .Select(x => x.Account) .Where(a => a.AccountName == "My Account" && a.IsActive) .Select(a => a)) as DataServiceQuery<Account>; When I run it, I get an exception: Cannot specify query options (orderby, where, take, skip) on single resource. As far as I can tell, I need to use a version of "SelectMany" that includes an additional lambda expression (http://msdn.microsoft.com/en-us/library/bb549040.aspx), but I am not able to get this to work correctly. Could someone show me how to properly structure the "SelectMany" call? Thank you for any help.

    Read the article

  • How to fix type names conflicts in Dynamic Data

    - by SDReyes
    Hi guys! We're working on a Dynamic Data project that will handle entities coming from two different namespaces, myModel.Abby and myModel.Ben, whose classes are: Abby: myModel.Abby.Car, myModel.Abby.Lollipop; Ben: myModel.Ben.Car, myModel.Ben.Apple. So myModel.Abby.Car and myModel.Ben.Car are homonyms. When I try to register both ObjectContexts, an exception is thrown telling us that there are type name conflicts between the mentioned classes (although the types belong to different namespaces). How can we overcome type-name conflicts caused by repeated type names across different namespaces?

    Read the article

  • insert and modify a record in an entity using Core Data

    - by aminfar
    I tried to find the answer to my question on the internet, but I could not. I have a simple entity in Core Data that has a Value attribute (an integer) and a Date attribute. I want to define two methods in my .m file. The first method is the ADD method. It takes two arguments: an integer value (entered by the user in the UI) and a date (the current date by default), and then inserts a record into the entity based on the arguments. The second method is like an increment method. It uses the date as a key to find a record and then increments the integer value of that record. I don't know how to write these methods. (Assume that we have an array controller for the table in the xib file.)

    Read the article

  • Hitting a ADO.NET Data Services from WPF client, forms authentication

    - by Soulhuntre
    Hey all! There are a number of questions on StackOverflow that ALMOST hit this topic head on, but they are either for other technologies, reference obsolete information, or don't supply an answer that I can suss out. So pardon the almost-duplication :) I have a working ADO.NET Data Service and a WPF client that hits it. Now that they are working fine, I want to add authentication/security to the system. My understanding of the steps so far is:
    1. Turn on forms authentication and configure it on the server (I have an existing ASP.NET membership service DB for other aspects of this app, so that isn't a problem) so that it is required for the service URL.
    2. In WPF, apply for and receive a forms authentication "ticket" as part of a login routine.
    3. Add that "ticket" to the headers of the ADO.NET service calls in WPF.
    4. Profit!
    All well and good, but does anyone have a soup-to-nuts code sample using the modern releases of these technologies? Thanks!

    Read the article

  • Data Structures

    - by Phoenix
    There is a large stream of numbers coming in, such as 5 6 7 2 3 1 2 3... What kind of data structure is suitable for this problem, given the constraints that elements must be inserted in descending order and duplicates should be eliminated? I am not looking for any code, just ideas. I was thinking of a self-balancing BST where we could add the condition that all nodes less than the current node go on the left and all nodes greater than the current node go on the right; this takes care of the duplicates, but I don't think they are necessarily inserted in descending order. Any ideas what might be a better choice? Of course, it needs to be efficient time- and space-wise.
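    To make the "ordered structure with dedupe on insert" idea concrete, here is a minimal Python sketch. It uses the bisect module over a sorted list as a stand-in for a self-balancing BST: the duplicate check and ordering logic are the same, but note that list insertion is O(n) per element, whereas a real balanced BST or skip list would make inserts O(log n).

        import bisect

        class DescendingUniqueStream:
            """Keeps stream elements unique and readable in descending order.

            A sketch: a sorted list via bisect stands in for a balanced BST.
            """
            def __init__(self):
                self._items = []  # kept sorted ascending internally

            def insert(self, value):
                i = bisect.bisect_left(self._items, value)
                if i < len(self._items) and self._items[i] == value:
                    return  # duplicate: eliminated
                self._items.insert(i, value)  # O(n) here; O(log n) in a real BST

            def descending(self):
                return list(reversed(self._items))

        stream = DescendingUniqueStream()
        for n in [5, 6, 7, 2, 3, 1, 2, 3]:
            stream.insert(n)
        print(stream.descending())  # [7, 6, 5, 3, 2, 1]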

    Read the article

  • Memory efficient way of inserting an array of objects with Core Data

    - by randombits
    I'm working on a piece of code for an iPhone application that fetches a bunch of data from a server and builds objects from it on the client. It ends up creating roughly 40,000 objects. They aren't displayed to the user; I just need to create instances of NSManagedObject and store them to persistent storage. Am I wrong in thinking that the only way to do this is to create a single object, then save the context? Is it best to create the objects all at once, then somehow save them to the context after they're created and stored in some set or array? If so, can someone show some example code for how this is done or point me in the direction of code where this is done? The objects themselves are relatively straightforward models with string or integer attributes and don't contain any complex relationships.

    Read the article

  • wcf data service security configuration

    - by Daniel Pratt
    I'm in the process of setting up a WCF Data Services web service and I'm trying to sort out the security configuration. Although there's quite a lot of documentation out there for configuring WCF security, a lot of it seems to be outmoded or does not apply to my scenario. Ultimately, I am planning on managing authorization of operations via change interceptors. Thus, all I really need is the simplest way to permit a client to pass credentials along with a request and to be able to authenticate those credentials against either AD or an ASP.NET membership provider (I'd much prefer the latter unless it makes things much more complicated). I'm intending to manage encryption at the transport level (i.e. HTTPS). I'm hoping that the eventual solution does not involve a huge web.config. Likewise, I'd much prefer to avoid writing custom code for the purpose of authentication.

    Read the article

  • Schema compare with MS Data Tools in VS2008

    - by rdkleine
    When performing a schema compare with db_owner rights on the target database, the following error results: "The user does not have permission to perform this action." Using the SQL Server Profiler, I figured out this error occurs when executing a query targeting the master db view [sys].[dm_database_encryption_keys]. Since I am specifically ignoring all object types but Tables, one would presume the schema compare doesn't need access to the database encryption keys. Also note: http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/c11a5f8a-b9cc-454f-ba77-e1c69141d64b/ One solution would be to GRANT VIEW SERVER STATE to the db user, but in my case I'm not hosting the database services and won't get rights to the server state. I also tried excluding the DatabaseEncryptionKey element in the compare file: <PropertyElementName> <Name>Microsoft.Data.Schema.Sql.SchemaModel.SqlServer.ISql100DatabaseEncryptionKey</Name> <Value>ExcludedType</Value> </PropertyElementName> Does anyone have a workaround for this?

    Read the article

  • Time and date dimension in data warehouse

    - by peperg
    I'm building a data warehouse. Each fact has its timestamp. I need to create reports by day, month, and quarter, but by hour too. Looking at the examples, I see that dates tend to be saved in dimension tables. But I think that makes no sense for time: the dimension table would grow and grow. On the other hand, a JOIN with a date dimension table is more efficient than using date/time functions in SQL. What are your opinions/solutions? (I'm using Infobright.)
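    One common answer (a sketch of the usual Kimball-style pattern, not Infobright-specific advice) is to keep the date dimension at day grain and add a separate time-of-day dimension with only 24 rows, one per hour. Each fact then carries two small surrogate keys, day/month/quarter/hour reports all stay simple dimension joins, and neither dimension grows with the fact data. A minimal Python sketch generating both dimensions as CSV, with hypothetical column names:

        import csv
        from datetime import date, timedelta

        # Sketch: a day-grain date dimension plus a 24-row hour-of-day dimension.
        # Column names and date range are hypothetical; adjust to your warehouse.
        with open("dim_date.csv", "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["date_key", "date", "day", "month", "quarter", "year"])
            d = date(2000, 1, 1)
            while d <= date(2020, 12, 31):  # ~7,600 rows: stays small
                w.writerow([d.strftime("%Y%m%d"), d.isoformat(),
                            d.day, d.month, (d.month - 1) // 3 + 1, d.year])
                d += timedelta(days=1)

        with open("dim_time_of_day.csv", "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["hour_key", "hour_label"])
            for h in range(24):  # only 24 rows, ever
                w.writerow([h, "%02d:00" % h])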

    Read the article
