Search Results

Search found 109760 results on 4391 pages for 'ado net entity data model'.


  • SQL Server 2008 - Script Data as Insert Statements from SSIS Package

    - by Brandon King
    SQL Server 2008 provides the ability to script data as INSERT statements using the Generate Scripts option in Management Studio. Is it possible to access the same functionality from within an SSIS package? Here's what I'm trying to accomplish: I have a scheduled job that nightly scripts out all the schema and data for a SQL Server 2008 database and then uses the script to create a "mirror copy" SQLCE 3.5 database. I've been using Narayana Vyas Kondreddi's sp_generate_inserts stored procedure to accomplish this, but it has problems with some data types and greater-than-4K columns (holdovers from its SQL Server 2000 days). The Script Data function looks like it could solve my problems, if only I could automate it. Any suggestions?
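
    One possible route, not named in the question: SQL Server Management Objects (SMO), which ships with SQL Server 2008, drives the same scripting engine as the Generate Scripts wizard and can be called from an SSIS Script Task. A minimal C# sketch, assuming a table dbo.MyTable on MyServer/MyDatabase (all placeholder names); note that ScriptData requires EnumScript rather than Script:

        using System;
        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Management.Sdk.Sfc;  // Urn

        // Connect to the source server and locate the table to script.
        Server server = new Server("MyServer");
        Database db = server.Databases["MyDatabase"];
        Table table = db.Tables["MyTable", "dbo"];

        // Ask the scripter for INSERT statements instead of schema DDL.
        Scripter scripter = new Scripter(server);
        scripter.Options.ScriptData = true;    // emit INSERT statements
        scripter.Options.ScriptSchema = false; // skip CREATE TABLE etc.

        foreach (string statement in scripter.EnumScript(new Urn[] { table.Urn }))
        {
            Console.WriteLine(statement);      // or append to the mirror-copy script file
        }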

    Read the article

  • How does jQuery .data() work?

    - by kazanaki
    My JavaScript knowledge is pretty limited. Instead of asking several JavaScript questions I got the "message" from Stack Overflow and started using jQuery right away to save myself some time. However, several times I don't understand the "magic" behind jQuery, and I would love to learn the details. I want to use .data() in my application. The examples are very helpful, but I do not understand WHERE these values are stored. I inspect the web page with Firebug, and as soon as .data() saves an object to a DOM element, I see no change in Firebug (in either the HTML or DOM tabs). I tried to look at the jQuery source, but it is very advanced for my JavaScript knowledge and I lost myself. So the question is: where do the values stored by jQuery.data() actually go? Can I inspect/locate/list/debug them using a tool?

    Read the article

  • Excel data connection - remove header row?

    - by ekoner
    The Excel spreadsheet is connecting to SQL Server 2005 using the connection string below: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=XXXXXX;Data Source=XXXXXX;Extended Properties="HDR=No";Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;Workstation ID=XXXXXX;Use Encryption for Data=False;Tag with column collation when possible=False It then pulls data from a view into Excel. The business user wants this information without a header row, which will let her review it and then save it as a "headless" CSV in the SAGE file format. I attempted to alter the connection string by adding HDR=No, but that hasn't worked. Additionally, I can't delete the header row: deleting its content just replaces the column names with "Column 1" etc. Any ideas appreciated!

    Read the article

  • How to get dynamic SQL result from SP in entity framework?

    - by KentZhou
    Suppose I have the following stored procedure that runs dynamic SQL:

        ALTER PROCEDURE [dbo].[MySP]
        AS
        BEGIN
            declare @sql varchar(4000)
            select @sql = 'select cnt = count(*) from Mytable ..... ';
            exec (@sql)
        END

    Then in the EDMX I add the SP and create a function import for it; the return type is a scalar Int32. I want to use this function in code like:

        int? result = context.MySP();

    I get an error saying "cannot implicitly convert type System.Data.Objects.ObjectResult to int?". If I use var result = context.MySP(); then Single() can't be applied to context.MySP(). How do I get the result in this case?
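
    For reference, a scalar function import comes back as an ObjectResult<int?>, which implements IEnumerable<int?>, so the LINQ operators apply once System.Linq is in scope. A hedged sketch (the context name is a placeholder):

        using System.Linq;  // Single()/SingleOrDefault() are LINQ extension methods

        using (var context = new MyEntities())
        {
            // Enumerate the ObjectResult to pull out the scalar value.
            int? result = context.MySP().SingleOrDefault();
        }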

    Read the article

  • jQuery.post dynamic data callback function

    - by FFish
    I have a script that requires quite a few seconds of processing, up to about a minute. The script resizes an array of images, sharpens them, and finally zips them up for the user to download. Now I need some sort of progress message. I was thinking that with jQuery's .post() method the data from the callback function would update progressively, but that doesn't seem to work. In my example I am just using a loop to simulate my script:

        $(document).ready(function() {
            $('a.loop').click(function() {
                $.post('loop.php', {foo: "bar"}, function(data) {
                    $("div").html(data);
                });
                return false;
            });
        });

    loop.php:

        for ($i = 0; $i <= 100; $i++) {
            echo $i . "<br />";
        }
        echo "done";

    Read the article

  • CORE DATA objectId changes constantly

    - by mongeta
    Hello, I have some data that I export into an XML file and put on a remote FTP server. I have to identify each object with a unique attribute; it doesn't matter which one, but it must always be persistent: it can never change. I don't want to create a unique attribute myself (sequence, serial, etc.). I'm using the objectID, but every time I use it I get a new reference. I know that before the object has been saved it has a temporary ID, and once it's saved it gets the definitive one, but I'm never seeing that. When I export, I just fetch all the data and loop, and I always get a new reference:

        NSURL *objectID = [[personalDataObject objectID] URIRepresentation];
        // some of the IDs received for the SAME OBJECT (no changes made, saved, ...)
        // 61993296
        // 62194624

    thanks, r.

    Edit: I was using %d instead of %@; now the returned data is:

        x-coredata://F46F3300-8FED-4876-B0BF-E4D2A9D80913/DataEntered/p1
        x-coredata://F46F3300-8FED-4876-B0BF-E4D2A9D80913/DataEntered/p2

    Read the article

  • Dynamic model interactions

    - by Richard
    I am just curious how many games (namely games like Arkham Asylum/City, Manhunt, Hitman) make it so that your character can "grab" a character in front of you and do stuff to them. I know this may sound confusing, but for an example go to YouTube and search "hitman executions"; the first video is an example of what I'm asking. Basically I'm wondering how they make your model dynamically interact with whatever other model you come across. In Hitman, when you come up behind someone with the fiber wire you strangle the other character, or if you have the anesthetic you come up behind a person and put your hand over their mouth while they struggle and slowly sink to the floor, where you lay them down. I am confused as to whether it is animated using two models with specific bone/skeletal identifiers, whether it is just two completely separate animations played at the correct time to make it look like they are actually interacting, or something else altogether. I am not an animator, so I assume most of what I just said is not right, but I hope that someone can understand what I mean and provide an answer. PS) I am a programmer, and I am in the process of building a Hitman-esque game, just because I love that style of game and I want to improve my skills on something fun. So if you do know what I'm talking about, examples involving both models and programming would be greatly appreciated (I use C++ and mainly Ogre3D at the moment, but I am getting into Unity and XNA). Thanks.

    Read the article

  • Stackoverflow style data list view

    - by kst
    Hi buddies, I'm a beginner in ASP.NET, and I'm now developing a small project for searching data from a DB. I use ASP.NET Web Forms and ADO.NET. I would like to show the data list the way Stack Overflow does, because I don't want to use a GridView. I have some data fields to show, for example: Title, Description, Date, Keyword. Please check out my screenshot. For now I use a Literal control as a draft, so please point me to which control I should use; that control needs to work with a pager for the data list. Important: please let me know how to make a layout template for the data list (details). Thanks

    Read the article

  • mysql data type confusion

    - by zen
    So this is more of a generalized question about MySQL's data types. I'd like to store a 5-digit US zip code (zip_code) properly in this example. A county has 10 different cities and 5 different zip codes.

        city   | zip code
        -------+----------
        city 0 | 33333
        city 1 | 11111
        city 2 | 22222
        city 3 | 33333
        city 4 | 44444
        city 5 | 55555
        city 6 | 33333
        city 7 | 33333
        city 8 | 44444
        city 9 | 22222

    I would typically structure a table like this as varchar(50), int(5) and not think twice about it. (1) If we wanted to ensure that this table had only one of 5 different zip codes, we should use the ENUM data type, right? Now think of a similar scenario on a much larger scale: in a state, there are five hundred cities with 418 different zip codes. (2) Should I store 418 zip codes as an ENUM data type OR as an int and create another table to reference?

    Read the article

  • Can you Export/Import Flex (4) Data Services?

    - by mkraken
    Flex newb here. I'm working in Flash Builder 4 (Flex 4?) and am being asked to create the client-side data services integration 'layer' in a Flex app. There is another team working on the actual UI/presentation. Both parts must be deployed in a single SWF. If I use the data/services wizard to build out my service connections (and generate the ActionScript), is it possible to export these 'connections' so that they can easily be imported into another project? Or must they be defined through the wizard all over again? The other team wants the connections to appear in the new project's Data/Services inspector (IDE tab). Thanks!

    Read the article

  • convert string data array to list

    - by prince23
    hi, I have a string data array which contains data like this: 5~kiran, 2~ram, 1~arun, 6~rohan. A method returns the values as string[]:

        public string[] GetNames()
        {
            return data.ToArray();
        }

        public class Person
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

        List<Person> persons = new List<Person>();
        string[] names = GetNames();

    Now I need to copy all the data from the string array into the list and finally bind it to a GridView (gridview.DataSource = persons). How can I do it? Is there any built-in method? thanks in advance, prince
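
    A hedged sketch of one way to do it with LINQ, assuming the part before the '~' is the age (it could just as well be an ID) and the part after is the name:

        using System.Linq;

        string[] names = GetNames();  // e.g. { "5~kiran", "2~ram", "1~arun", "6~rohan" }

        // Split each "age~name" entry and project it into a Person.
        List<Person> persons = names
            .Select(s => s.Split('~'))
            .Select(p => new Person { Age = int.Parse(p[0]), Name = p[1] })
            .ToList();

        gridview.DataSource = persons;
        gridview.DataBind();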

    Read the article

  • Managing changes in memory-based data format

    - by kamziro
    So I've been using a compact data type in C++, and saving to or loading from a file involves just copying the bits of memory in and out. However, the obvious drawback is that if you need to add or remove elements in the data, it becomes kind of messy. There are also problems with versioning: suppose you distribute a program which uses version A of the data, and then the next day you make version B of it, and later on version C. I suppose this can be solved by using something like XML or JSON, but suppose you can't do that for technical reasons. What is the best way to handle this, apart from having different if cases for each version (which would be pretty ugly, I'd imagine)?
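
    Language aside, one common pattern (an assumption on my part, not from the question) is to write a version tag at the front of the stream and branch on it when loading. A C# sketch of the idea (the question is about C++, but the pattern is the same; all names invented):

        using System.IO;

        const int CurrentVersion = 3;  // bump whenever the layout changes

        static void Save(BinaryWriter w, int[] data)
        {
            w.Write(CurrentVersion);
            w.Write(data.Length);
            foreach (int v in data) w.Write(v);
        }

        static int[] Load(BinaryReader r)
        {
            int version = r.ReadInt32();   // decides how to read the rest
            int count = r.ReadInt32();
            var data = new int[count];
            for (int i = 0; i < count; i++) data[i] = r.ReadInt32();
            // Fields added in newer versions get defaults when reading old files.
            return data;
        }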

    Read the article

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. The problem is whenever I call something like this:

        NSPersistentStore *newStore =
            [self.persistentStoreCoordinator migratePersistentStore: currentiCloudStore
                                                              toURL: localURL
                                                            options: nil
                                                           withType: NSSQLiteStoreType
                                                              error: &error];

    I get this error:

        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055):
        CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator
        using the iCloud integration options must always be added to the coordinator with the options
        present in the options dictionary. If you wish to use the store without iCloud, migrate the data
        from the iCloud store file to a new store file in local storage.
        file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite.
        This will be a fatal error in a future release

    Anyone ever seen this error? Maybe I'm just missing the right migration options?

    Read the article

  • Model a chain with different elements in Unity 3D

    - by Alex
    I have to model, in Unity 3D, a chain that is composed of various elements, some flexible, some rigid. The idea is to realize a human chain where each person is linked to the others by their hands. I've not tried to implement it yet, as I've no idea what a good way to do it would be. In the game I have to manage a lot of chains of people, maybe 100 chains composed of 11-15 people each. The chain will be pretty simple and there won't be much interaction: probably some animation of the people, one at a time for each chain, and some physical reaction (for example, pushing a person in a chain should slightly flex the chain). The real problem with this work is that in the chain each object is composed of flexible parts (the arms) and rigid parts (the body), and the connections should remain firm, just like when people hold hands: the hands stay firm and it is the wrists that move. I can use C4D to model the meshes. I know these numbers may cause performance problems, but it's also true that I will use low-poly versions of the characters (in reality they won't be human, but very simple toonish characters that have arms and legs). So right now I'm trying to find a way to manage this so that it can work; performance is a later problem that I can solve. If there is no fast, best-practice solution and you have any link/guide/doc that could help me find a way to realize this, it would be very appreciated if you posted it. Thanks
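
    As a point of reference, physics joints are one common way to approximate a chain of rigid bodies in Unity. A minimal C# sketch that strings link prefabs together with hinge joints (all names hypothetical, and the per-person arm/body split would still need its own rig):

        using UnityEngine;

        public class ChainBuilder : MonoBehaviour
        {
            public GameObject linkPrefab;  // hypothetical low-poly character prefab with a Rigidbody
            public int linkCount = 12;

            void Start()
            {
                Rigidbody previous = null;
                for (int i = 0; i < linkCount; i++)
                {
                    GameObject link = (GameObject)Instantiate(linkPrefab,
                        transform.position + Vector3.right * i, Quaternion.identity);
                    Rigidbody body = link.GetComponent<Rigidbody>();
                    if (previous != null)
                    {
                        // The hinge plays the flexible "wrist" between rigid bodies.
                        HingeJoint joint = link.AddComponent<HingeJoint>();
                        joint.connectedBody = previous;
                    }
                    previous = body;
                }
            }
        }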

    Read the article

  • Problems persisting Core Data structures on iPhone/iPad

    - by Rivier
    I have an iPhone/iPad app using Core Data to keep my application data. Sometimes, even though I don't get any error messages, the data is not really saved, so when the app starts anew it's all gone. This problem seems to disappear after physically rebooting the device, but otherwise it's pretty random and hard to track. Has anyone seen a similar issue? Also, it seems to happen more often on the 1st-generation iPhone, less so on the 3G/3GS, and seldom on the iPad. Very strange...

    Read the article

  • Program-wide data, C++

    - by bobobobo
    I'd like to make program-wide data in a C++ program. The easiest way to do it in C# is just public static members:

        // C#
        public static class DataContainer
        {
            public static Object data1;
            public static Object data2;
        }

    In C++ you can do the same thing. C++ global data, way #1:

        class DataContainer
        {
        public:
            static Object data1;
            static Object data2;
        };

        Object DataContainer::data1;
        Object DataContainer::data2;

    However, there's also extern. C++ global data, way #2:

        class DataContainer
        {
        public:
            Object data1;
            Object data2;
        };

        extern DataContainer * dataContainer;  // instantiate in .cpp file

    Which is better, or is there possibly another way which I haven't thought of?

    Read the article

  • Keeping a domain model consistent with actual data

    - by fstuijt
    Recently domain-driven design got my attention, and while thinking about how this approach could help us I came across the following problem. In DDD the common approach is to retrieve entities (or better, aggregate roots) from a repository which acts as an in-memory collection of these entities. After these entities have been retrieved, they can be updated or deleted by the user; however, after retrieval they are essentially disconnected from the data source, and one must actively inform the repository to update the data source and make it consistent again with our in-memory representation. What is the DDD approach to retrieving entities that should remain connected to the data source? For example, in our situation we retrieve a series of sensors that have a specific measurement at retrieval time. Over time, these measurement values may change, and our business logic in the domain model should respond to these changes properly. E.g., domain events may be raised if a sensor value exceeds a predefined threshold. However, using the repository approach, these sensor values are just snapshots and are disconnected from the data source. Does any of you have an idea on how to solve this following the DDD approach?
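
    For concreteness, the kind of domain behavior the question describes might be sketched like this (a hypothetical C# illustration, not a DDD prescription; all names invented):

        using System;

        public class Sensor
        {
            // Raised when a newly recorded reading crosses the threshold.
            public event Action<Sensor, double> ThresholdExceeded;

            public double Threshold { get; }
            public double LastValue { get; private set; }

            public Sensor(double threshold) { Threshold = threshold; }

            public void Record(double value)
            {
                LastValue = value;
                if (value > Threshold)
                    ThresholdExceeded?.Invoke(this, value);
            }
        }

    The open issue in the post remains how Record() gets called when the repository only hands out snapshots; pushing fresh readings into the aggregate, rather than the aggregate polling the store, is one direction, but the question leaves that open.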

    Read the article

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect on to my models like this:

        foreach (ModelMesh mesh in Model.Meshes)
        {
            foreach (ModelMeshPart part in mesh.MeshParts)
            {
                part.Effect = effect;
            }
            mesh.Draw();
        }

    My only issue is that every model now has its own light source in it. Why is this happening, and is this a problem with my shader? Edit: These are the parameters passed to the shader:

        private void Get_lambertEffect()
        {
            if (_lambertEffect == null)
                _lambertEffect = Engine.LambertEffect;

            // Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF)
            _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"];
            _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize);

            // ShadowMap parameters
            _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix);
            _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix);
            _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias);
            _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture);

            // Camera view and projection parameters
            _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix);
            _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix);
            _lambertEffect.Parameters["world"].SetValue(Matrix.CreateScale(Size) * world);

            // Light and color
            _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction);
            _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color);
            _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient);
            _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse);
            _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red));
        }

    Read the article

  • CSOM (Client Side Object Model) - What's new with SharePoint 2013

    - by KunaalKapoor
    SharePoint CSOM

    The Client-Side Object Model, or CSOM, came out with SharePoint 2010. CSOM is accessible through client.svc, but all client.svc calls must go through supported WCF entry points (the supported entry points are .NET, Silverlight, and JavaScript). So a developer would need to use the client-side proxy objects exposed by either a .NET assembly or a JavaScript library.

    Changes with SharePoint 2013:
    - REST capabilities - direct access to client.svc
    - New APIs - app model

    REST Capabilities

    One of the most important changes to the CSOM in SharePoint 2013 is that the web service entry point of client.svc has been extended to allow direct access via REST-based web service calls. This is a really critical change, since it makes the SharePoint platform accessible to any other platform, opening the horizons of integration and collaboration with other REST-based platforms and devices. OData (a really popular standard data-access API for HTTP-based clients) is supported, similar to 2010, but will be a more important aspect of SharePoint 2013 development.

    New APIs

    CSOM for SharePoint 2013 has been buffed up with several new APIs, not only for SharePoint server functionality but also an API for Windows Phone applications. In a SharePoint 2010 farm most of the new APIs mentioned below are available only via server-side APIs:
    - Search
    - Taxonomy
    - Publishing
    - Workflow
    - User Profiles
    - E-Discovery
    - Analytics
    - Business Data
    - IRM
    - Feeds

    SharePoint 2013 remote APIs being accessible through both CSOM and REST is very important to the new app model, where developers can no longer run code in a SharePoint environment nor access the server-side APIs. So CSOM plays the savior here. Also, you can now substitute the alias '_api' in order to reference '_vti_bin/client.svc'.
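
    To make the '_api' alias concrete, a hedged C# sketch of a REST call against it (the site URL is a placeholder, and authentication is omitted for brevity):

        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;

        static async Task<string> GetListsAsync()
        {
            using (var client = new HttpClient())
            {
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));
                // '_api' is the documented alias for '_vti_bin/client.svc'.
                HttpResponseMessage response =
                    await client.GetAsync("https://contoso.example/_api/web/lists");
                return await response.Content.ReadAsStringAsync();  // OData payload
            }
        }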

    Read the article

  • R: Converting a list of data frames into one data frame

    - by JD Long
    I have code that at one place ends up with a list of data frames which I really want to convert to a single big data frame. I got some pointers from an earlier question which was trying to do something similar but more complex. Here's an example of what I am starting with (grossly simplified for illustration):

        listOfDataFrames <- NULL
        for (i in 1:100) {
            listOfDataFrames[[i]] <- data.frame(a=sample(letters, 500, rep=T),
                                                b=rnorm(500), c=rnorm(500))
        }

    I am currently using this:

        df <- do.call("rbind", listOfDataFrames)

    EDIT: whoops. In my haste to implement what I had "learned" in a previous question I totally screwed up. Yes, the unlist() is just plain wrong. I'm editing that out of the question above.

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I've written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6. The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet):

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, it's supposed to take a database user and associated password, but different vendors implement this differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

        Feature: User authentication (default)
          Description: Default getConnection(user, password) behavior: validate the input and use the user/password in the descriptor.
          Can be used with: Set client identifier
          Can't be used with: Proxy session, Identity pooling, Use database credentials

        Feature: Use database credentials
          Description: Instead of using the credential mapper, use the supplied user and password directly.
          Can be used with: Set client identifier, Proxy session, Identity pooling
          Can't be used with: User authentication, Multi Data Source

        Feature: Set client identifier
          Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
          Can be used with: Everything

        Feature: Proxy session
          Description: Set a light-weight proxy user associated with the connection (Oracle only).
          Can be used with: Set client identifier, Use database credentials
          Can't be used with: Identity pooling, User authentication

        Feature: Identity pooling
          Description: Heterogeneous pool of connections owned by specified users.
          Can be used with: Set client identifier, Use database credentials
          Can't be used with: Proxy session, User authentication, Labeling, Multi Data Source, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.

    Read the article

  • How to reduce the number of points in (x,y) data

    - by Gowtham
    I have a set of data points: (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn). The number of sample points can be in the thousands. I want to represent the same curve as accurately as possible with a minimal set of points (let's suppose 30). I want to capture as many inflection points as possible; however, I have a hard limit on the number of points allowed to represent the data. What is the best algorithm to achieve this? Is there any free software library that can help? PS: I have tried to implement point elimination based on relative slope difference, but this does not always result in the best possible data representation. Thanks for your time. -Gowtham
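
    One classic candidate, though the question doesn't name it, is the Ramer-Douglas-Peucker line-simplification algorithm. It takes a tolerance rather than a point budget, but the tolerance can be searched (e.g. bisected) until the output lands under the 30-point limit. A C# sketch, assuming the first and last points are distinct:

        using System;
        using System.Collections.Generic;

        static List<(double X, double Y)> Simplify(List<(double X, double Y)> pts, double epsilon)
        {
            if (pts.Count < 3) return new List<(double X, double Y)>(pts);

            // Find the point farthest from the line joining the two endpoints.
            var (x1, y1) = pts[0];
            var (x2, y2) = pts[pts.Count - 1];
            int index = 0;
            double maxDist = 0;
            for (int i = 1; i < pts.Count - 1; i++)
            {
                double d = Math.Abs((y2 - y1) * pts[i].X - (x2 - x1) * pts[i].Y + x2 * y1 - y2 * x1)
                           / Math.Sqrt((y2 - y1) * (y2 - y1) + (x2 - x1) * (x2 - x1));
                if (d > maxDist) { maxDist = d; index = i; }
            }

            // Everything is within tolerance: keep only the endpoints.
            if (maxDist <= epsilon)
                return new List<(double X, double Y)> { pts[0], pts[pts.Count - 1] };

            // Otherwise recurse on the two halves and stitch the results together.
            var left = Simplify(pts.GetRange(0, index + 1), epsilon);
            var right = Simplify(pts.GetRange(index, pts.Count - index), epsilon);
            left.RemoveAt(left.Count - 1);  // drop the duplicated split point
            left.AddRange(right);
            return left;
        }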

    Read the article

  • Performing multiple fetches in Core Data within the same view

    - by yesimarobot
    I have my Core Data store set up and everything is working. Once my initial fetch is performed, I need to perform several fetches based on calculations using the data from my first fetch. The examples provided by Apple are great and helped me get everything going, but I'm struggling with executing successive fetches. Any suggestions or tutorial links are appreciated.

    1. The table view loads data from the Core Data store.
    2. When a user taps a row, it pushes a detail view.
    3. The detail view loads details from Core Data.

    [THE ABOVE STEPS ARE ALL WORKING]

    4. I perform several calculations on the data fetched in the detail view.
    5. I then need to perform several other fetches based on the results of my calculations.

    Read the article

  • .rdlc reporting bound to Object Data Source in Three layer Application

    - by Saeedouv
    Hi, I have the following situation: I have a reporting layer (stand-alone) in an ASP.NET application (NOT a website, which means NO App_Code folder exists), and I want to create an Object Data Source that uses an object in a separate layer (let's say the Data Access Layer), and then use that Object Data Source to create a report. I have spent my whole day working on this; there are tons of workarounds and articles on the web, but none mention what I really want to do. Any answer is appreciated. Just to make things more clear, assume I have a solution with the following layers:

        UI
        Reporting (has NO Employees object, just a reference)
        Business Logic
        Data Access Layer (Employees -> GetEmployees())

    All I need, as mentioned above, is to create an Object Data Source from the Reporting layer that takes the Employees object from the DAL and uses its GetEmployees method to be added to the report. I think it's more clear now, since the Reporting layer also has NO App_Code folder.
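
    One hedged possibility (assuming a ReportViewer control hosting the local .rdlc; all names are placeholders, not from the question): skip the designer-made Object Data Source entirely and bind the DAL result to the report's dataset at runtime:

        using Microsoft.Reporting.WebForms;

        // Fetch from the Data Access Layer and feed the .rdlc dataset directly.
        var employees = new Employees().GetEmployees();  // DAL call, placeholder names
        ReportViewer1.LocalReport.ReportPath = "Reports/Employees.rdlc";
        ReportViewer1.LocalReport.DataSources.Clear();
        ReportViewer1.LocalReport.DataSources.Add(
            new ReportDataSource("EmployeesDataSet", employees));
        ReportViewer1.LocalReport.Refresh();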

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse, preventing future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using the Enterprise Edition of Oracle Database, the partitioning feature is not available. Multiple data sources are only supported on the Enterprise Edition of Oracle Database.

    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:

       a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)

       b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.

       c) Run starETL for each of the 3 data sources (with the data source # (starETL.bat "-s2"), as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).

       The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time-related data, and they are dataset-specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number-of-months setting. For example, if you kept the default of 3 partitions and selected 2 months per partition, you would end up with:

       - 1st partition: 2 months
       - 2nd partition: 2 months
       - 3rd partition: all the remaining records

       Therefore, with regard to this setting, it is important to analyze your source DB spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

    Read the article
