Search Results

Search found 88714 results on 3549 pages for 'data type'.

  • Excel Macro Help - Data Input

    - by B-Ballerl
    I want to develop a macro where I type a date in a specific cell of my Excel worksheet, and the macro will go into a folder containing text files (a database, you could say). I want it to find the corresponding file name, which is written as a date, run the data through a delimiter, and paste it into the cells directly below where I originally put the date. I'm very new to macros, so if you answer, please try to be a little simpler than you usually would be. Thanks in advance if anyone can help!

    Read the article

  • How to recover data from a partially overwritten partition

    - by shredder12
    By mistake, I configured a 900GB partition to be part of a 50GB RAID. The sync is complete and my understanding is that only the first 50GB of the bigger partition is overwritten. How do I recover the rest of the data? When I try to mount this partition by identifying it as ext3, it mounts only the 50GB overwritten space. This partition was earlier divided into various logical volumes (all ext3 filesystems) through LVM. Any suggestions?

    Read the article

  • Safely reboot prior to recovering data

    - by ELO
    What is the safest way (without additional writing to the disk) to power down a computer whose deleted files you want to recover, in order to boot from a rescue medium? In the case of a desktop computer, pulling the power cord looks like the most direct solution, but are there possible side effects, apart from losing unsaved data? The laptop seems more problematic, with removing the battery being the equivalent, but is that a good idea overall?

    Read the article

  • Using www-data through SSH

    - by Fluidbyte
    For development purposes I'm using www-data (on an Ubuntu 11.10 server) to ssh in and fire git commands and basic stuff against the webroot. I don't have things like command history, coloring, etc. like I do when I ssh in as any other user, so I'm curious how to get this working. I'm assuming I need a `.bashrc` file, but I'm not sure what to include or (more importantly, since I could just copy the one from another user) where it goes.

    Read the article

  • Should I keep investing into data structures and algorithms?

    - by Chiron
    These days, I'm investing heavily in data structures and algorithms and trying to solve some programming puzzles. I'm trying to code and solve with Java and Clojure. Am I wasting my time? Should I invest more in technologies and frameworks that I already know in order to gain deeper knowledge (the ins and the outs) and be able to code with them more quickly? By studying data structures and algorithms, am I going to become a better programmer, or are those subjects only important during the college years?

    Read the article

  • how to recover data from my disk - I accidentally copied an iso onto it with dd

    - by sijoune
    I wanted to create a bootable USB from an ISO image, and I accidentally gave dd one of my hard disks as the output instead of my USB drive. The ISO was 3.3 GB and my disk is 1TB, and it was almost full. Can I at least restore the data that has not been overwritten? Right now I can't even mount it. I get this error:

        Error mounting /dev/sdd1 at /media/main/UDF Volume: Command-line `mount -t "udf" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,iocharset=utf8,umask=0077" "/dev/sdd1" "/media/main/UDF Volume"' exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    Also, since I know which filesystem my disk used, if I reformat it to that filesystem is there any chance I can mount it and retrieve the rest of the files?

    Read the article

  • Using Google Tag Manager to define the page type

    - by Daffy
    So, I am looking to add a tag that I want to use for A/B testing; however, we don't have a page-type URL structure. Fortunately the tool can recognise the page type if I pass it via JavaScript:

        <script type="text/javascript"> window.isProductPage = true; </script>

    I have been told to use the above. I have created the script in Google Tag Manager (GTM), but I now need to know how to make it run on those pages in GTM. I have looked through the code and there are div classes that are unique to each page; can I use these as an indication of page type?
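
    A possible approach (a sketch, not part of the original question): GTM supports Custom JavaScript variables, i.e. anonymous functions that return a value which can then be used in trigger conditions. Assuming the product pages carry a unique class such as product-detail (a hypothetical name; substitute whatever actually appears in the markup), a variable along these lines could flag the page type:

        function() {
          // True only on pages that contain the class unique to product pages.
          // '.product-detail' is a placeholder class name.
          return document.getElementsByClassName('product-detail').length > 0;
        }

    A trigger that fires when this variable equals true can then run the window.isProductPage snippet, or the A/B testing tag can read the variable directly.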

    Read the article

  • can javascript process binary data?

    - by Johnny
    Let me describe my question in a situation-oriented way, assuming IE is still the dominant web browser (Firefox has documentation for binary processing). XMLHttpRequest.responseText or XMLHttpRequest.responseXML expects text or xml/xhtml/html, but what happens if the server answers the XMLHttpRequest with MIME type application/octet-stream? Would every char of the response string be less than 256? Thanks very much for a straight answer; I have no web server environment, so I don't know how to test it out. I ask because using text or XML raises character-set encoding issues, and I don't know how to process a CDATA node of an encoded XML document (e.g. utf-8, ascii, gb18030) with JavaScript: when I getNodeText, does the docObj give me bytes or decoded chars? If it gives chars decoded according to the charset indicated in the HTTP response header, it could all be wrong. To avoid the charset mess I would like the server to respond with octet data, forcing strings to be encoded as utf-8 (or another charset) in binary form; if the response is octets, I guess the browser would not try to decode the response "text". Does this sound weird, or am I misunderstanding something fundamental? EDIT: I believe the question is asking this: Can JavaScript safely process strings that aren't encoded in Unicode? What are the problems with trying to do so? EDIT: No, what I mean is: if the http header Content-Type is "application/octet-stream", will IE try to decode it as 16-bit Unicode (or IE's local charset setting) when I read XMLHttpRequestobj.responseText from JavaScript, or will it just wrap every single byte of the response body into a JavaScript string, so that every char in that string is less than or equal to 256 (char <= 256)? Am I speaking Martian? Sadly, if I were a Martian I would just visit as a tourist, without fuzzy questions; instead I am in a country that shares at least one property with Mars: red.
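
    A note and a sketch (not part of the original question): where XMLHttpRequest.overrideMimeType is available (Firefox at the time; not older IE, which is the asker's main concern), the usual workaround is to force a byte-preserving charset and mask each character code, so responseText can be read back as raw octets:

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/binary', true);   // '/binary' is a placeholder URL
        // 'x-user-defined' keeps each response byte in the low 8 bits of a JS character.
        xhr.overrideMimeType('text/plain; charset=x-user-defined');
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 4) {
            var text = xhr.responseText;
            var bytes = [];
            for (var i = 0; i < text.length; i++) {
              bytes.push(text.charCodeAt(i) & 0xFF);   // strip the padding added by the charset trick
            }
            // bytes now holds the raw octets of the response body
          }
        };
        xhr.send(null);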

    Read the article

  • How do I implement IDataServiceMetadataProvider and tell my Data Service to use that custom provider

    - by Pwninstein
    There's no obvious entry point for implementing a custom provider for an ADO.NET Data Service using IDataServiceMetadataProvider, and then telling a Data Service to use that provider. Has anyone had any luck in this area? I've tried implementing this interface on my Data Source class, but none of my breakpoints are hit. There is also no (obvious) way to set the provider from the Data Service's DataServiceConfiguration parameter passed in to the InitializeService function. Any help would be appreciated. Thanks! Related reading: Data Services Providers (ADO.NET Data Services); IDataServiceMetadataProvider Members

    Read the article

  • Decode received multipart/form-data request in Cocoa

    - by Snej
    Hi: I wonder if there is any possibility to explicitly decode an incoming multipart/form-data POST request. Is there any lib to handle this safely? Several files are embedded in this request and I want to save these files individually.

        NSData *data = [(id)CFHTTPMessageCopyBody(request) autorelease];

    The request header says:

        Content-Type: multipart/form-data; boundary=0xKhTmLbOuNdArY

    The data content is:

        --0xKhTmLbOuNdArY
        Content-Disposition: form-data; name="file1"; filename="fileName1.extension"
        Content-Type: application/octet-stream; charset=utf-8
        .........
        --0xKhTmLbOuNdArY
        Content-Disposition: form-data; name="file2"; filename="fileName2.extension"
        Content-Type: application/octet-stream; charset=utf-8
        .........
        --0xKhTmLbOuNdArY--

    Read the article

  • JQuery Datatable Question: Centering column data after data insertion

    - by Chris
    I have a data table that is initially empty and is populated after a particular Javascript call. After the data is inserted into the table, I'd like to center all of the data in one of the columns. I tried specifying this at the initialization step in this way:

        dTable = $('#dt').datatable({
            'aoColumns': [ null, null, { "sClass": "center" } ]
        });

    The data in the third column was not centered after the insertions were complete. I tried modifying aoColumns after the insertions and redrawing the table as well:

        dTable.fnSettings().aoColumns[2].sClass = "center";
        dTable.fnDraw();

    This did not work either. So my question is simply how should I go about telling the data table to center the data in the third column? Thanks in advance for your suggestions. Chris
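
    A sketch of one commonly suggested setup with the legacy (1.9-era) DataTables API, offered as an assumption rather than a verified fix: declare the class through aoColumnDefs at initialisation and make sure a matching CSS rule exists. Note also that the plugin function is normally spelled dataTable rather than datatable.

        /* CSS assumed to exist somewhere: td.center { text-align: center; } */
        var dTable = $('#dt').dataTable({
          'aoColumnDefs': [
            { 'sClass': 'center', 'aTargets': [2] }   // apply the class to the third column
          ]
        });

        // Rows added later (e.g. via fnAddData) pick up the column class automatically,
        // so no post-insertion patching of aoColumns should be needed.
        dTable.fnAddData(['a', 'b', 'c']);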

    Read the article

  • using document.createDocumentFragment() child dom elements that contain jQuery.data

    - by taber
    I want to use document.createDocumentFragment() to create an optimized collection of HTML elements that contain ".data" coming from jQuery (v 1.4.2), but I'm kind of stuck on how to get the data to surface from the HTML elements. Here's my code:

        var genres_html = document.createDocumentFragment();
        $(xmlData).find('genres').each(function(i, node) {
          var genre = document.createElement('a');
          $(genre).addClass('button')
                  .attr('href', 'javascript:void(0)')
                  .html( $(node).find('genreName:first').text() )
                  .data('genreData', { id: $(node).find('genreID:first').text() });
          genres_html.appendChild( genre.cloneNode(true) );
        });
        $('#list').html(genres_html);

        // error: $('#list a:first').data('genreData') is null
        alert($('#list a:first').data('genreData').id);

    What am I doing wrong here? I suspect it's probably something with .cloneNode() not carrying over the data when the element is appended to the documentFragment. Sometimes there are tons of rows so I want to keep things pretty optimized, speed-wise. Thanks!
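
    A sketch of the likely explanation (an assumption based on how jQuery stores data, not from the original post): .data() lives in jQuery's internal cache, keyed to the exact DOM node it was set on, so a plain DOM cloneNode(true) produces an element with no jQuery data attached. Appending the original element keeps the data:

        // Append the element itself instead of a DOM clone; its jQuery data stays attached.
        genres_html.appendChild(genre);

        // If an actual copy is required, jQuery's own clone can carry data/events across:
        // genres_html.appendChild( $(genre).clone(true)[0] );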

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this:

    1. Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district.
    2. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries).
    3. The analyst analyzes the data created in (2), gets close to her goal, but sees that needs more data and so goes back to (1).
    4. Rinse repeat until the tables and graphics meet QA/QC and satisfy the client.
    5. Write report incorporating tables and graphics.
    6. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change.

    At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (w/ ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files / targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for putting into SQL database, and a step for a templating language like sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top

        R=/home/wsprague/R-2.9.2/bin/R

        persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R
                $R --slave -f ImportData.R

        persondata.Munged.RData : MungeData.R persondata.RData Functions.R
                $R --slave -f MungeData.R

        report.txt : TabulateAndGraph.R persondata.Munged.RData Functions.R
                $R --slave -f TabulateAndGraph.R > report.txt

    Read the article

  • Data Model Evolution

    - by redleafong
    Hey guys, when writing code I keep seeing requirements to change data models (e.g. adding/changing/removing data members from a class). When these data models belong to an interface, it seems difficult to change them without breaking the existing client code. So I am wondering if there is any best practice for designing interfaces/data models in a way that minimizes the impact during evolution. The closest thing I can find from Google is data contract versioning, but that seems to be a .NET-specific topic. I am wondering if the same practice applies to the Java world, or whether there is a different or generic way to deal with data model evolution. Thanks

    Read the article

  • should jQuery data be chainable?

    - by pedalpete
    I'm trying to add multiple jQuery data entries to a single element. I suspected that the following would work:

        jQuery('td.person#a'+personId).data('email',thisPerson.email).data('phone',thisPerson.phone);

    However, I am getting nothing but errors when I do this.

        jQuery('td.person#a'+personId).data('email',thisPerson.email);
        jQuery('td.person#a'+personId).data('phone',thisPerson.phone);

    Is there another way to get more than one data entry on an element? Hopefully chained?
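
    For what it's worth, a minimal sketch of the chained form (an illustration, not a diagnosis): .data(key, value) used as a setter returns the jQuery object, so the chain itself is legal, and errors are more likely to come from the selector matching nothing or from thisPerson being undefined. Later jQuery versions (1.4.3+) also accept a plain object.

        var cell = jQuery('td.person#a' + personId);

        // Setter calls return the jQuery object, so they chain:
        cell.data('email', thisPerson.email)
            .data('phone', thisPerson.phone);

        // jQuery 1.4.3+ alternative: set several entries at once.
        // cell.data({ email: thisPerson.email, phone: thisPerson.phone });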

    Read the article

  • Using protocol buffers for a comprehensive data strategy for Windows Mobile devices

    - by Steve
    I have started reading some of the posts related to protocol buffers. The serialization method seems very appropriate for the transfer of data to and from web servers. Has anyone considered using a method like this to save and retrieve data on the mobile device itself? (i.e. a replacement for a traditional database / orm layer) Where would the data be persisted? How would the data be queried? Would it make sense to store the data in a traditional database (SqlCE or SqlLite) with a few "searchable" columns and then one column for the serialized data? Thoughts? Am I out on a limb here? Thank you!

    Read the article

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch. My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created of that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey regions are the cells containing data and black where no data exists. OK, problem solved - why am I still here? Well.... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach - eg:

    1. from xmin to xmax, at y=ymin, check if data boundary crossed in intervals dx
    2. y=ymin+dy, do 1
    3. do 1-2, but now sample in y

    An alternative is defining a centre, and sampling in r-theta space - ie radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary? A nearest neighbour approach is not appropriate as, for example (to borrow from Geography), an isthmus (think of Panama connecting N&S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon. The solution perhaps comes from solving an area maximisation problem:

    1. For a set of points defining the data limits, what is the maximum contiguous area contained within those points?
    2. To form the enclosed area, what are the neighbouring points for the nth point?
    3. How will the holes be treated in this scheme - is this erring into topology now?

    Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really! Cheers, David

    Read the article

  • Entity data field validation and part data submission

    - by pradeeptp
    I have an entity class that has 10 fields. I am using the MS Validation Application Block to mark all fields as mandatory (IsRequired). I am implementing a security feature whereby, during an update of the data, not all the fields in the entity class will have data. For example, a few users can only view 5 fields, while others see all 10 fields during an update on the GUI. I have the following options:

    1) Bring all the data for all the fields from the DB table and hide the ones not accessible to the user in the GUI. I am concerned about the performance because the GUI will pull unnecessary data every time.

    2) Bring only the data (e.g. only 5 fields) that the user is permitted to access/view on the GUI. During submit, the validation block will throw an exception because all fields are marked as IsRequired and only data for 5 fields are sent back to the server.

    I want to know if there are any other good approaches to solve problems like this. I am using .NET 3.5. Thanks.

    Read the article

  • Real-Time Data Streaming to Multiple Clients

    - by AriX
    Hi all, I would like to write an application which will stream data at 2400 baud over the internet from a server to multiple clients. The data will be the same for each client, and it would probably be fine to send it as a UDP stream, since exact data accuracy is not a 100% necessity, as there are checksums built-in to the data format and the data will be sent repeatedly on a loop. What is the best way to do this? I would want to write the server in C, but I don't know how to best multicast this data to the different clients that would be receiving it all over the country. I'm sure this seems like a pretty draconian way to go about my project, as opposed to just using some sort of fetch command, but I'd prefer to do it this way if possible.
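
    As an illustration of the multicast pattern only (the question asks about C; this is a Node.js sketch with made-up group address and port, offered as an assumption about one possible design): the server sends each frame to a multicast group once and every subscribed client receives it. Note that IP multicast is generally not routed across the public internet, so clients spread around the country may still need per-client unicast sends or a relay service.

        const dgram = require('dgram');

        const GROUP = '239.0.0.1';   // placeholder administratively-scoped multicast address
        const PORT  = 5000;          // placeholder port

        // Server: publish the same datagram to the group once per interval.
        const server = dgram.createSocket('udp4');
        server.bind(() => {
          server.setMulticastTTL(64);                        // let packets cross routers if needed
          setInterval(() => {
            const frame = Buffer.from('frame with built-in checksum');
            server.send(frame, PORT, GROUP);                 // one send reaches all group members
          }, 1000);
        });

        // Client: join the group and handle whatever the server publishes.
        const client = dgram.createSocket({ type: 'udp4', reuseAddr: true });
        client.on('message', (msg) => console.log('received', msg.length, 'bytes'));
        client.bind(PORT, () => client.addMembership(GROUP));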

    Read the article

  • Data driven charts and graphs from xml to svg

    - by garymlewis
    I asked this question a week ago, but did not do a good job of describing the problem. Here's a second attempt. I'd like to produce data-driven charts, graphs, and other data visualizations, starting with data in an xml database and ending up with the visualizations as SVG. Here's an example from the W3C. It uses Javascript to create a stacked bar chart as SVG from xml. I'd like to do something similar but use a graphics library (or ???) instead of js to handle the construction of axes, labels, titles, data points, etc. My question, then: what are the options that I should consider ... things like Raphael I suppose, but initially I'd like to cast a wide net and look at many different options. My experience is all with static data visualizations using statistics packages like R, but eventually I'd like to create interactive data visualizations with html5/css3/svg. Any help would be much appreciated. Thanks.
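
    For reference on the roll-it-yourself end of the spectrum (a sketch with made-up data and a hypothetical #chart container, not a library recommendation): data-driven SVG can be built directly with the DOM API via document.createElementNS, which is the kind of element construction libraries such as Raphael wrap with scales, axes and interactivity.

        // Minimal hand-rolled bar chart: one <rect> per data value.
        var SVG_NS = 'http://www.w3.org/2000/svg';
        var values = [12, 30, 7, 22];           // placeholder data

        var svg = document.createElementNS(SVG_NS, 'svg');
        svg.setAttribute('width', 200);
        svg.setAttribute('height', 120);

        values.forEach(function (v, i) {
          var bar = document.createElementNS(SVG_NS, 'rect');
          bar.setAttribute('x', 10 + i * 45);
          bar.setAttribute('y', 110 - v * 3);   // simple linear scale, baseline at y=110
          bar.setAttribute('width', 35);
          bar.setAttribute('height', v * 3);
          svg.appendChild(bar);
        });

        document.getElementById('chart').appendChild(svg);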

    Read the article

  • How to Modify Data Security in Fusion Applications

    - by Elie Wazen
    The reference implementation in Fusion Applications is designed with built-in data security on business objects that implement the most common business practices. For example, the "Sales Representative" job has the following two data security rules implemented on an "Opportunity" to restrict the list of Opportunities that are visible to a Sales Representative:

    • Can view all the Opportunities where they are a member of the Opportunity Team
    • Can view all the Opportunities where they are a resource of a territory in the Opportunity territory team

    While the above conditions may represent the most common access requirements of an Opportunity, some customers may have additional access constraints. This blog post explains:

    • How to discover the data security implemented in Fusion Applications
    • How to customize data security
    • An illustrative example

    a.) How to discover seeded data security definitions

    The Security Reference Manuals explain the Function and Data Security implemented on each job role. Security Reference Manuals are available on Oracle Enterprise Repository for Oracle Fusion Applications. The following is a snapshot of the security documented for the "Sales Representative" job. The two data security policies define the list of Opportunities a Sales Representative can view. Here is a sample of data security policies on an Opportunity:

    Business Object: Opportunity
    Policy Description: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team
    Policy Store Implementation: Role: Opportunity Territory Resource Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Business Object: Opportunity
    Policy Description: A Sales Representative can view opportunity where they are an opportunity sales team member with view, edit, or full access
    Policy Store Implementation: Role: Opportunity Sales Representative Duty; Privilege: View Opportunity (Data); Resource: Opportunity

    Description of columns:

    • Policy Description: Explains the data filters that are implemented as a SQL Where Clause in a Data Security Grant.
    • Policy Store Implementation: Provides the implementation details of the Data Security Grant for this policy. In this example the Opportunities listed for a "Sales Representative" job role are derived from a combination of two grants defined on two separate duty roles that are inherited by the Sales Representative job role.

    b.) How to customize data security

    Requirement 1: Opportunities should be viewed only by members of the opportunity team and not by all the members of all the territories on the opportunity.
    Solution: Remove the role "Opportunity Territory Resource Duty" from the hierarchy of the "Sales Representative" job role.
    Best Practice: Do not modify the seeded role hierarchy. Create a custom "Sales Representative" job role and build the role hierarchy with the seeded duty roles.

    Requirement 2: Opportunities must be more restrictive based on a custom attribute that identifies whether an Opportunity is confidential or not. Confidential Opportunities must be visible only to the owner of the Opportunity.
    Solution: Modify data security policy (2) in the above example as follows: A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team and the opportunity is not confidential. Implementation of this policy is more invasive: the seeded SQL where clause of the data security grant on "Opportunity Territory Resource Duty" has to be modified and the condition that checks for the confidential flag must be added.
    Best Practice: Do not modify the seeded grant. Create a new grant with the modified condition. End date the seeded grant.

    c.) Illustrative example (implementing Requirement 2)

    A data security policy contains the following components: Role, Object, Instance Set, Action. Of these four components, the Role and Instance Set are the only ones that are customizable; the Object and the Actions for that object are seed data and cannot be modified. To customize the seeded policy "A Sales Representative can view opportunity where they are a territory resource in the opportunity territory team":

    • Find the seeded policy
    • Identify the Role, Object, Instance Set and Action components of the policy
    • Create a new custom instance set based on the seeded instance set
    • End date the seeded policies
    • Create a new data security policy with the custom instance set

    c-1: Find the seeded policy

    Step 1: Find the Role, open it, and find its Policies.

    Step 2: Click on the Data Security tab, sort by "Resource Name", and find all the policies with the "Condition" set to "where they are a territory resource in the opportunity territory team". In this example, we can see there are 5 policies for "Opportunity Territory Resource Duty" on the Opportunity object.

    Step 3: Now that we know the policy details, we need to create a new instance set with the custom condition. All instance sets are linked to the object. Find the object using the global search option, open it and click on the "Condition" tab. Sort by display name, find the instance set, edit it and copy the "SQL Predicate" to a notepad. Create a new instance set with the modified SQL Predicate from above by clicking on the icon as shown below.

    Step 4: End date the seeded data security policies on the duty role and create new policies with your custom instance set. Repeat the navigation in Step 1, edit each of the 5 policies and end date them. Then create new custom policies with the same information as the seeded policies in the "General Information", "Roles" and "Action" tabs, and in the "Rules" tab pick the new instance set that was created in Step 3.

    Read the article

  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude aka DPro aka Visual Studio Database Development Tools aka Visual Studio Team Edition for Database Professionals aka Juneau aka SQL Server Data Tools) that other people may encounter so, for the purposes of googling, I'm writing this blog post about it. Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - “SqlDeployTask” task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter) although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition:

        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN
        (
            WITH cte AS (
                SELECT 1 AS [c1]
                FROM [$(Database3)].[dbo].[Table1]
            )
            SELECT 1 AS [c1]
            FROM cte
        )

    would produce this nasty unhelpful error upon deployment:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.
        System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)
           at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)
           at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()
           at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()
           at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)
           at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()
           at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
           at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
           at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)
        Done executing task "SqlDeployTask" -- FAILED.
        Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED.
        Done executing task "CallTarget" -- FAILED.
        Done building target "DBDeploy" in project

    It turns out there are a certain set of circumstances that need to be met for this error to occur:

    • The object being deployed is an inline function (may also exist for multistatement and scalar functions - I haven't tested that)
    • That object includes SQLCMD variable references
    • The object has already been deployed successfully

    Just to reiterate that last bullet point, the error does not occur when you deploy the function for the first time, only on the subsequent deployment. Luckily I have a direct line into a guy on the development team so I fired off an email on Friday evening and today (Monday) I received a reply back telling me that there is a simple fix: one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simply has to be changed to:

        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN
        --(
            WITH cte AS (
                SELECT 1 AS [c1]
                FROM [$(Database3)].[dbo].[Table1]
            )
            SELECT 1 AS [c1]
            FROM cte
        --)

    I have commented out the offending parentheses rather than removing them just to emphasize the point. Thereafter the function will deploy fine.
    I tested this out on my own project this morning and can confirm that this fix does indeed work. I have been told that the bug CAN be reproduced in the Release Candidate (RC) 0 build of SQL Server Data Tools in SQL Server 2012, so I am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build. Hope this helps @jamiet

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow.

    Author: Irem Radzik, Senior Principal Product Director, Oracle

    Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications are at the center of business operations and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability.

    Oracle's data integration products, Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality, provide the core foundation for bringing data from various business-critical systems to gain a broader, unified view. As a more advanced offering than 3rd-party products, Oracle's data integration products facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime. Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows the app users to offload the resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data allows optimizing the reporting environment and even decreasing costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open system platforms.

    With real-time data replication capabilities GoldenGate is also certified to enable application upgrades and database/hardware/OS migration without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management and JD Edwards for supporting zero downtime upgrades to the latest app version. GoldenGate synchronizes a parallel, upgraded system with the old version in real time, thus enabling continuous operations during the process. Oracle GoldenGate is also certified for minimal downtime database migrations for Oracle E-Business Suite and other key applications. GoldenGate's solution also minimizes the risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate's bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers.

    For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps to integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users identify report-to-source capabilities. ODI is integrated with Oracle GoldenGate and provides Oracle BI Applications customers the option to use real-time transactional data in analytics, and to do so non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics, but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the Oracle Data Integration products' latest 12c version in our upcoming launch webcast and access the app-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior.  One of the key topics to understand is the difference between directly using database user and password values versus mapping from WLS user and password to the associated database values.   The direct use of database credentials is relatively new to WLS, based on customer feedback.  Some of the trade-offs are covered in this article. Credential Mapping vs. Database Credentials Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password).  By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and are converted to a database user and password using a credential map associated with the data source.  If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used.  Because of this defaulting mechanism, you should be careful what permissions are granted to the default user.  Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users). To create an entry in the credential map: 1) First create a WLS user.  In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.  2) Second, create the mapping.  In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New.  See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information. The advantages of using the credential mapping are that: 1) You don’t hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password and 2) It provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed. You can cut down the number of users that have access to a data source to reduce the user maintenance overhead.  For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call.  Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source.  For instance, there may be a ‘Sales’ DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access and a servlet is built that hard-wires the specific data source access credentials in its connection request.  It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source.  This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS. 
    The disadvantages of using the credential map are that: 1) It is difficult to manage (create, update, delete) with a large number of users; it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries. 2) You can't share a credential map between data sources so they must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map. This is enabled by setting "use-database-credentials" to true. See "Configure Oracle parameters" in Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html

    Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:

    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0).
    2. oracle-proxy-session (this interaction is first available in 10.3.6.0).
    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named "credential-mapping-enabled"; the documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:

    - If using the credential map, there needs to be a mapping for each WLS user to database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.
    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.

    Read the article

  • How to infer the type of a derived class in base class?

    - by enzi
    I want to create a method that allows me to change arbitrary properties of classes that derive from my base class. The result should look like this: SetPropertyValue("size.height", 50), where size is a property of my derived class and height is a property of size. I'm almost done with my implementation, but there's one final obstacle that I want to solve before moving on. To describe it I will first have to explain my implementation a bit:

    • Properties that can be modified are decorated with an attribute.
    • There's a method in my base class that searches for all derived classes and their decorated properties.
    • For each property I generate a "property modifier", a class that contains 2 delegates: one to set and one to get the value of the property.
    • Property modifiers are stored in a dictionary, with the name of the property as key.
    • In my base class, there is another dictionary that contains all property-modifier-dictionaries, with the Type of the respective class as key.

    What the SetPropertyValue method does is this:

    1. Get the correct property-modifier-dictionary, using the concrete type of the derived class (<- yet to solve)
    2. Get the property modifier of the property to change (e.g. of the property size)
    3. Use the get or set delegate to modify the property's value

    Some example code to clarify further:

        private static Dictionary<RuntimeTypeHandle, object> EditableTypes; // property-modifier-dictionary

        protected void SetPropertyValue<T>(EditablePropertyMap<T> map, string property, object value)
        {
            var modifier = map[property];   // get the property modifier
            modifier.Set((T)this, value);   // use the set delegate (encapsulated in a method)
        }

    In the above code, T is the Type of the actual (derived) class. I need this type for the get/set delegates. The problem is how to get the EditablePropertyMap<T> when I don't know what T is. My current (ugly) solution is to pass the map in an overridden virtual method in the derived class:

        public override void SetPropertyValue(string property, object value)
        {
            base.SetPropertyValue((EditablePropertyMap<ExampleType>)EditableTypes[typeof(ExampleType)], property, value);
        }

    What this does is: get the correct dictionary containing the property modifiers of this class using the class's type, cast it to the appropriate type and pass it to the SetPropertyValue method. I want to get rid of the SetPropertyValue method in my derived classes (since there are a lot of them), but I don't know yet how to accomplish that. I cannot just make a virtual GetEditablePropertyMap<T> method because I cannot infer a concrete type for T then. I also cannot access my dictionary directly with a type and retrieve an EditablePropertyMap<T> from it, because I cannot cast to it from object in the base class, since again I do not know T. I found some neat tricks to infer types (e.g. by adding a dummy T parameter), but cannot apply them to my specific problem. I'd highly appreciate any suggestions you may have for me.

    Read the article
