Search Results

Search found 6532 results on 262 pages for 'computed columns'.

  • multithreading problem with Nvidia PhysX

    - by xcrypt
    I'm having a multithreading problem with Nvidia PhysX. The SDK requires that you call Simulate() (which starts computing new physics positions on a separate thread) and FetchResults() (which waits until the physics computations are done). Between Simulate() and FetchResults() you may not "compute new physics". One of the samples proposes a game loop like this: run the logic (where you may calculate physics and other stuff), then render, calling Simulate() at the start of the Render() call and FetchResults() at its end. However, this has given me various little errors that stack up, since you actually render the scene that was computed in the previous iteration of the game loop. I wonder if there's a way around this? I've been trying and trying, but I can't think of a solution...
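
    One way people restructure the loop (a sketch with hypothetical wrapper names, not the PhysX API itself): fetch last frame's results before input and logic run, and kick off the next simulation just before rendering. Rendering between Simulate() and FetchResults() is allowed under the constraint above, because drawing doesn't compute new physics; the physics step is still one frame behind, but input, logic and the rendered frame stay consistent with each other.

    ```csharp
    // Sketch only: physicsScene, ProcessInput, UpdateLogic and RenderScene are
    // hypothetical wrappers around the engine and the SDK calls.
    while (running)
    {
        physicsScene.FetchResults();      // block until last frame's Simulate() is done
        ProcessInput();                   // input now applies to up-to-date positions
        UpdateLogic(deltaTime);           // safe window: no simulation is running here
        physicsScene.Simulate(deltaTime); // kick off the next physics step...
        RenderScene();                    // ...while we draw the state updated above
    }
    ```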

  • In an Entity/Component system, can component data be implemented as a simple array of key-value pairs? [on hold]

    - by 010110110101
    I'm trying to wrap my head around how to organize components in an Entity Component System once everything in the current scene/level is loaded in memory. (I'm a hobbyist, BTW.) Some people seem to implement the Entity as an object that contains a list of "Component" objects, where each Component holds data organized as an array of key-value pairs and the value is serialized "somehow" (pseudocode is loosely C# for brevity): class Entity { Guid _id; List<Component> _components; } class Component { List<ComponentAttributeValue> _attributes; } class ComponentAttributeValue { string AttributeName; object AttributeValue; } Others describe components as in-memory "tables": an entity acquires a component by having its key placed in the table, and the attributes of the component-entity instance are like the columns of the table: class Renderable_Component { List<RenderableComponentAttributeValue> _entities; } class RenderableComponentAttributeValue { Guid entityId; matrix4 transformation; // other stuff for rendering // everything is strongly typed } Others describe this as an actual table (such tables sound like an EAV database schema, BTW, with the value again serialized "somehow"): Render_Component_Table ---------------- Entity Id Attribute Name Attribute Value and, when brought into running code: class Entity { Guid _id; Dictionary<string, object> _attributes; } My specific question is: given various components (Renderable, Positionable, Explodeable, Hideable, etc.), each of which has attributes with particular names (TRANSLATION_MATRIX, PARTICLE_EMISSION_VELOCITY, CAN_HIDE, FAVORITE_COLOR, etc.), should: an entity contain a list of components, where each component in turn has its own array of named attributes with values serialized somehow; or should components exist as in-memory tables of entity references, where each "row" has "columns" representing the attributes, with values specific to each entity instance and strongly typed; or should all attributes be stored in an entity as a single array of named attributes with values serialized somehow (which could have name collisions); or something else?
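
    As a concrete sketch of the second option (hypothetical names throughout - this is not a prescribed design), a strongly typed in-memory "table" per component kind, keyed by entity id, looks roughly like this in C#:

    ```csharp
    using System;
    using System.Collections.Generic;

    struct Matrix4 { /* stand-in for an engine math type */ }

    struct RenderableRow
    {
        public Matrix4 Transformation;   // strongly typed, nothing serialized
    }

    // One instance per component kind: renderables, positionables, etc.
    class ComponentTable<TRow>
    {
        private readonly Dictionary<Guid, TRow> _rows = new Dictionary<Guid, TRow>();

        public void Attach(Guid entityId, TRow row) { _rows[entityId] = row; }
        public bool TryGet(Guid entityId, out TRow row) { return _rows.TryGetValue(entityId, out row); }
        public void Detach(Guid entityId) { _rows.Remove(entityId); }
        public IEnumerable<KeyValuePair<Guid, TRow>> Rows { get { return _rows; } }
    }
    ```

    An entity is then nothing but a Guid; a render system walks only the renderable table, attribute names can't collide across components because each table owns its own columns, and nothing needs to be serialized at runtime.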

  • How to write a network game? [closed]

    - by Tom Wijsman
    Based on Why is it so hard to develop an MMO?: Networked game development is not trivial; there are large obstacles to overcome in not only latency, but cheat prevention, state management and load balancing. If you're not experienced with writing a networked game, this is going to be a difficult learning exercise. I know the theory about sockets, servers, clients, protocols, connections and such things. Now I wonder how one can learn to write a network game: How to balance load problems? How to manage the game state? How to keep things synchronized? How to protect the communication and the client from reverse engineering? How to work around latency problems? Which things should be computed locally and which on the server? ... Are there any good books, tutorials, sites, interesting articles or other questions regarding this? I'm looking for broad answers, but specific ones are fine too, to learn the difference.

  • C Programming matrix

    - by Bilal Khan
    In this program the user enters the number of columns of the matrix and then the entries of the matrix. So, for example, if the user enters 2 for the column count and 1 2 3 4 for the entries, then the program builds a 2 by 2 matrix with 1 2 3 4 as entries. My program works perfectly in such a case. However, if the user had only entered 1 2 3, my program makes a matrix with garbage values in the unfilled slots. In such a case I would like the program to exit instead. It is a simple question, but it has me baffled. (99 acts as the end-of-input sentinel; counting the entries read lets the program reject a partial last row.)

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int m, c, d, count = 0, matrix[10][10];

        printf("Enter the number of columns of matrix ");
        scanf("%d", &m);
        if (m <= 0 || m > 10) {
            printf("You entered an invalid value.");
            exit(0);
        }

        printf("Enter the elements of matrix (99 ends input)\n");
        for (c = 0; c < 10; c++) {
            for (d = 0; d < m; d++) {
                scanf("%d", &matrix[c][d]);
                if (matrix[c][d] == 99)   /* 99 is the sentinel that ends input */
                    break;
                count++;
            }
            if (d < m)                    /* sentinel hit mid-row: stop reading */
                break;
        }

        if (count % m != 0) {             /* e.g. 3 entries, 2 columns: last row incomplete */
            printf("Incomplete row entered - exiting.\n");
            exit(0);
        }

        printf("\nHere is your matrix:\n");
        for (c = 0; c < count / m; c++) {
            for (d = 0; d < m; d++)
                printf("%3d ", matrix[c][d]);
            printf("\n");
        }
        return 0;
    }
    ```

  • Order independent transparency in particle system

    - by Stepan Zastupov
    I'm writing a particle system and would like to find a trick to achieve proper alpha blending without sorting particles, because: Each particle is a point sprite in a single mesh, so I can't use the scene graph's ability to sort transparent nodes (the system's node should still be sorted properly, though). Particle positions are computed on the shader from initial velocity, acceleration and time, so in order to sort the system I would have to perform all of these computations on the CPU, which is something I want to avoid. Sorting hundreds of particles against the camera position and uploading them to the GPU each frame seems to be quite a heavy operation. Alpha testing seems to be fast enough on GLES 2.0 and works fine for non-transparent but "masked" textures; still, it's not enough for semi-transparent particles. How would you handle this?
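
    One commonly cited option here (my note, not part of the original question) is to choose a commutative blend function, so draw order stops mattering. Standard "over" blending depends on order; additive blending does not:

    ```
    over:     C_out = a_src * C_src + (1 - a_src) * C_dst    (depends on draw order)
    additive: C_out = C_src + C_dst                          (commutative: any order gives the same result)
    ```

    Additive blending only suits emissive-looking particles (fire, sparks, glow) - smoke-like particles wash out - which is why the general case falls back to sorting or to order-independent-transparency techniques.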

  • Ubuntu will not start due to full partitions

    - by mike
    I left my computer downloading overnight and downloaded 35 GB of movies (legal ones...). When I restarted the computer in the morning, I booted into my encrypted Windows partition for my work. When I later tried to boot Ubuntu, it failed to start: low-graphics mode reported that it won't boot because the partition is full. Rescue mode reported 0 MB free, and I cannot delete files with sudo rm because the file system is mounted read-only. I can mount the partition from Windows, but it is write-protected there as well. Should I try a live USB?

  • Web Host for Small Rails-based CMS site [closed]

    - by clem
    Possible Duplicate: How to find web hosting that meets my requirements? I am building a site for someone that uses a Rails-based content management system that I built myself. All of the Rails deployment experience I have so far has been over small intranets. I'm looking at web hosts like Rackspace, because they seem well-suited to Rails deployment. However, for a site that's not going to get more than a couple of hundred hits a month (if even that), I'm not sure it's necessary. I've also used Dreamhost's Phusion Passenger deployment for small projects before, but it seems barely functional and not well supported, and I've used Heroku for deployment, but I think a regular web host may do a little better, as the client will need things like Google Apps for Gmail set up. If anyone could provide some guidance on this, I'd greatly appreciate it. I also get confused when I see things on Rackspace like "1.5c/hour", because I'm not sure how that gets computed.
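
    (For what it's worth - my arithmetic, not from the post - hourly cloud pricing is normally just the rate multiplied by the hours the instance exists, whether or not it gets traffic. A month is roughly 730 hours, so 1.5c/hour works out to about $0.015 x 730 ≈ $11/month for an always-on server.)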

  • Triple buffering causes input lag?

    - by user782220
    Consider some time in between two vsyncs. Suppose the first display buffer is being used to display the current image, and suppose the game was really fast and computed and rendered the next image to the second display buffer, and the one after that to the third display buffer. That is, rendering to the second and third display buffers happens so fast that both occur before the next vsync. Suppose input from the user comes in now. What you would like is for the result of the input to show up on the next vsync, or (probably more typically) the vsync after that. However, with the third display buffer already rendered, the input can only affect the image after that, meaning the input will take effect at best 3 vsyncs later. I wish I had an image to show the exact timings of what I mean.
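
    A rough sketch of the timeline being described (my diagram, reconstructed from the text):

    ```
    vsync interval:   1              2              3              4
    displayed:        A              B              C              D
    rendered:         B, C           D (first frame that
                                     can see the input)
    input:                ^ arrives here, after C is already rendered
    ```

    D, the first frame that can reflect the input, is not displayed until interval 4 - three vsyncs after the input arrived.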

  • Drag camera/view in a 3D world

    - by Dono
    I'm trying to make a draggable view in a 3D world. Currently I've implemented it using the mouse position on the screen, but the distance traveled by the mouse is not equal to the distance traveled in the 3D world. So I've tried to do this: compute a ray from the mouse position into the 3D world, calculate its intersection with the ground, compute the difference between the old and new intersection points, and translate the camera by that difference. I've got a problem with this method: the ray is computed from the current camera position, then I move the camera, then I compute the new ray from the new camera position - so the difference between the old ray and the new ray is now invalid, and graphically my camera never stops moving between the previous and new positions. How can I make a draggable camera with another solution? Thanks!
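
    One way around that feedback loop (a sketch assuming a flat ground plane; Vector types, camera, RayFromScreen and IntersectGroundPlane are hypothetical helpers): anchor the world point grabbed on mouse-down and translate the camera so that point stays under the cursor, rather than diffing two rays cast from a moving camera.

    ```csharp
    Vector3 dragAnchor;   // world point grabbed on mouse-down

    void OnMouseDown(Vector2 mousePos)
    {
        dragAnchor = IntersectGroundPlane(RayFromScreen(camera, mousePos));
    }

    void OnMouseDrag(Vector2 mousePos)
    {
        // Cast from the camera's *current* position...
        Vector3 underCursor = IntersectGroundPlane(RayFromScreen(camera, mousePos));
        // ...and move the camera by the drift. After the move, the cursor sits
        // over the anchor again, so the next frame has nothing left to correct
        // and the oscillation disappears.
        camera.Position += dragAnchor - underCursor;
    }
    ```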

  • Developing an AI opponent for Monopoly

    - by Bernhard Zürn
    I want to develop an AI opponent for the board game Monopoly and implement the whole game in Prolog (XPCE). The probability of a field on the board being hit can be computed with Markov chains. I already know some "best practices", like "after 50% of the playing time it no longer makes sense to buy your way out of jail, because in jail you still collect rent for your fields but you don't have to pay rent on other fields as long as you stay in prison". The interesting questions are always: buy a street field? buy houses/hotels? how many? So I think I would have to compute some kind of future liquidity. Does anyone know how to pack that into an algorithm, or how to translate it to Prolog?
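
    On the "future liquidity" idea, one simple formalization (my sketch, in C# with illustrative names, which would translate directly to Prolog predicates): value a field by its expected income per round under the Markov-chain hit probabilities, and treat a purchase as safe only if the remaining cash covers the rent you expect to owe over some horizon.

    ```csharp
    // Expected rent income from one field per round: the Markov-chain hit
    // probability of the field times its rent, summed over the opponents.
    static double ExpectedIncomePerRound(double hitProbability, double rent, int opponents)
    {
        return hitProbability * rent * opponents;
    }

    // Naive liquidity test: buy or build only if the cash remaining after the
    // purchase covers the rent we expect to owe before the investment pays back.
    static bool PurchaseIsSafe(double cash, double price,
                               double expectedRentOwedPerRound, int horizonRounds)
    {
        return cash - price >= expectedRentOwedPerRound * horizonRounds;
    }
    ```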

  • 3 column layout with css display table, with first row having multiple rows?

    - by Damainman
    I am working on a new website which: has 3 columns, with each column being a cell; the first column has 3 rows (logo, nav, icons) - a div with display:table which wraps around 3 divs with display:table-row; the other two columns only have 1 row, with the middle column being the content area. However, since this is my first time using display:table, I am running into some things that aren't so clear to me. I was trying to avoid floated divs. If I need multiple rows, with one cell in each row per column, do I wrap each cell in a row, or do I just create the rows and not declare cells? I understand that browsers automatically create the missing elements, but I want to make sure I do this properly to avoid any side effects that might occur from the browser creating them for me. Edit: I think my brain is just overworked. I guess I can accomplish this by just using 3 divs in the first column instead of a nested table div with rows. This just popped into my head.

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like: key rotation (we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext); and metadata mapping of which tables and which columns are encrypted (this is simple when it's just a couple of columns - send an email to all devs / document it - but that quickly gets out of hand...). So, what is the best practice for doing application-level encryption against a database that doesn't support encryption? In particular, what is a good design to solve the above two bullet points? If you have specific schema additions in mind, I'd love details ("have an NVARCHAR(max) column to store the cipher metadata as JSON", or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from "DIY" too. Before going too deep - I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
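
    To make the two bullet points concrete, one common pattern (a sketch, not the poster's code; names invented) is to prefix each ciphertext with a key-version byte and the IV. The version byte addresses the rotation bullet - re-encrypt any row whose version is old, with old and new keys coexisting during the migration - while a small catalog (a table, or the NVARCHAR(max) JSON column the poster mentions) listing the encrypted table/column names addresses the metadata bullet.

    ```csharp
    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class ColumnCrypto
    {
        // Produces [1-byte key version][16-byte IV][ciphertext], base64-encoded,
        // suitable for an NVARCHAR(MAX) column. Key lookup by version enables
        // rotation: re-encrypt any row whose first byte isn't the current version.
        public static string Encrypt(byte keyVersion, byte[] key, byte[] plaintext)
        {
            using (var aes = new AesCryptoServiceProvider { Key = key })
            {
                aes.GenerateIV();
                using (var encryptor = aes.CreateEncryptor())
                {
                    byte[] cipher = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
                    using (var ms = new MemoryStream())
                    {
                        ms.WriteByte(keyVersion);
                        ms.Write(aes.IV, 0, aes.IV.Length);
                        ms.Write(cipher, 0, cipher.Length);
                        return Convert.ToBase64String(ms.ToArray());
                    }
                }
            }
        }
    }
    ```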

  • infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off track, but I have this gut feeling it's possible... Why? I'm working on a skin shader, and I'm already doing one texture lookup with N·L + N·E and one texture lookup for specular with N·H + N·V. The latter could be transformed into an N·L + N·E lookup if only I had the half-vector length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
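
    For reference, the relevant identity (standard vector algebra, not from the post): for unit L and V the un-normalized half vector is H' = L + V, with

    ```
    |H'|^2 = |L|^2 + 2(L·V) + |V|^2 = 2 + 2(L·V)
    ```

    so the half-vector length is a function of L·V - and N·L and N·V alone don't pin L·V down, because rotating L around N leaves N·L fixed while changing L·V. In general, then, the length can't be inferred from the two dot products without extra assumptions.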

  • Would this data requirement suit a Document -Oriented database?

    - by codecowboy
    I have a requirement to allow users to fill in journal/diary entries per day. I want to provide a handful of known journal templates with x columns to fill in. An example might be a thought diary; a user has to record a thought in one column, describe the situation, rate how they felt, etc. The other requirement is that a user should be able to create their own diary templates. They might need a 10-column diary entry per day, and might need to rate some aspect out of 50 instead of 10. In an RDBMS, I can see this getting quite complicated. I could have individual tables for my known templates, as their fields will be fixed. But for custom diary templates I imagine I would need a table storing custom_field_types (the diary columns), a table storing entries referencing their field types (custom_entries), and then a third custom_diary table which would store rows matching custom_entries to diaries. Leaving performance/scaling aside, would it be any simpler or make more sense to use a document-oriented database like MongoDB to store this data? This is for a web application which might later need an API for mobile devices.
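
    For illustration (a sketch with invented field names, not a recommended schema), a user-defined template and one day's entry could each be stored as self-describing documents, which sidesteps the three-table EAV dance:

    ```
    { "_id": "thought-diary", "type": "template",
      "columns": [
        { "name": "thought",   "kind": "text" },
        { "name": "situation", "kind": "text" },
        { "name": "feeling",   "kind": "rating", "max": 10 }
      ]
    }

    { "type": "entry", "template": "thought-diary", "user": "u42",
      "date": "2012-06-01",
      "values": { "thought": "...", "situation": "...", "feeling": 7 }
    }
    ```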

  • Checking validation of entries in a Sudoku game written in Java

    - by Mico0
    I'm building a simple Sudoku game in Java, based on a matrix (an array[9][9]), and I need to validate my board state according to these rules: all rows have the digits 1-9; all columns have the digits 1-9; each 3x3 grid has the digits 1-9. This function should be as efficient as possible - for example, if the first check fails I believe there's no need to run the others (correct me if I'm wrong). When I tried doing this I ran into a conflict: should I use one large for loop and check columns and rows inside it (in two other loops), or should I do each test separately and verify each case on its own? (Please don't suggest too-advanced solutions with other class/object helpers.) This is what I thought about. The main validating function (which I want to keep clean):

    ```java
    public boolean testBoard() {
        // && short-circuits, so columns are only checked if the rows pass, etc.
        return validRows() && validColumns() && validCube();
    }
    ```

    and separate methods for each specific test, such as:

    ```java
    private boolean validRows() {
        for (int row = 0; row < board.length; row++) {
            boolean[] seen = new boolean[10];     // seen[d] is true once digit d appears in this row
            for (int col = 0; col < board[row].length; col++) {
                int digit = board[row][col];
                if (digit < 1 || digit > 9 || seen[digit])
                    return false;                 // out-of-range entry or duplicate: fail fast
                seen[digit] = true;
            }
        }
        return true;
    }
    ```

    I don't know if I should keep doing the tests separately, because it looks like I'm duplicating my code.

  • Synthetic database records

    - by michipili
    Assume we are getting some statistics from a customer, which we analyse before sending our comments back. Now the customer tells us that the statistics they computed between January and March are based on a wrong methodology, and sends us a corrected series. We want to perform our analysis with both the wrong and the correct set of data, which are huge and differ only from January to March. Therefore we need something like synthetic database records implementing the following logic: synthetic[1] = wrong_data; synthetic[2] = correct_data between January and March, wrong_data otherwise. With this, we can easily perform our analyses on the synthetic records. Should such synthetic records be implemented in the application logic or on the side of the database? What are common pitfalls of such an implementation?
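
    A sketch of how the second series could be derived rather than copied (C#/LINQ flavour with invented names; the same shape works database-side as a view with UNION ALL):

    ```csharp
    // synthetic[2]: corrected rows inside the correction window, original rows
    // everywhere else. Nothing is duplicated; the series is assembled on read.
    var windowStart = new DateTime(2013, 1, 1);
    var windowEnd   = new DateTime(2013, 3, 31);

    var synthetic2 = corrected
        .Where(r => r.Date >= windowStart && r.Date <= windowEnd)
        .Concat(original.Where(r => r.Date < windowStart || r.Date > windowEnd));
    ```

    A common pitfall either way: make sure downstream results record which series an analysis ran against, or the two sets of conclusions get mixed up.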

  • Creating a Predicate Builder extension method

    - by Rippo
    I have a Kendo UI Grid that currently allows filtering on multiple columns, and I am wondering if there is an alternative approach that removes the outer switch statement. Basically I want to create an extension method so I can filter on an IQueryable<T> and drop the outer switch so I don't have to switch on column names.

    ```csharp
    private static IQueryable<Contact> FilterContactList(FilterDescriptor filter, IQueryable<Contact> contactList)
    {
        switch (filter.Member)
        {
            case "Name":
                switch (filter.Operator)
                {
                    case FilterOperator.StartsWith:
                        contactList = contactList.Where(w =>
                            w.Firstname.StartsWith(filter.Value.ToString()) ||
                            w.Lastname.StartsWith(filter.Value.ToString()) ||
                            (w.Firstname + " " + w.Lastname).StartsWith(filter.Value.ToString()));
                        break;
                    case FilterOperator.Contains:
                        contactList = contactList.Where(w =>
                            w.Firstname.Contains(filter.Value.ToString()) ||
                            w.Lastname.Contains(filter.Value.ToString()) ||
                            (w.Firstname + " " + w.Lastname).Contains(filter.Value.ToString()));
                        break;
                    case FilterOperator.IsEqualTo:
                        contactList = contactList.Where(w =>
                            w.Firstname == filter.Value.ToString() ||
                            w.Lastname == filter.Value.ToString() ||
                            (w.Firstname + " " + w.Lastname) == filter.Value.ToString());
                        break;
                }
                break;
            case "Company":
                switch (filter.Operator)
                {
                    case FilterOperator.StartsWith:
                        contactList = contactList.Where(w => w.Company.StartsWith(filter.Value.ToString()));
                        break;
                    case FilterOperator.Contains:
                        contactList = contactList.Where(w => w.Company.Contains(filter.Value.ToString()));
                        break;
                    case FilterOperator.IsEqualTo:
                        contactList = contactList.Where(w => w.Company == filter.Value.ToString());
                        break;
                }
                break;
        }
        return contactList;
    }
    ```

    Some additional information: I am using NHibernate LINQ. Another wrinkle is that the "Name" column on my grid is actually "Firstname" + " " + "Lastname" on my Contact entity. We can also assume that all filterable columns will be strings.
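
    One direction for the extension method (a sketch - not verified against NHibernate's LINQ provider; FilterOperator is Kendo's enum from the question): accept the column as an Expression<Func<T, string>> and build the predicate with the expression API, so the outer switch on member names collapses into a lookup of member expressions, and a composed column like Firstname + " " + Lastname is just another member expression.

    ```csharp
    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public static class StringFilterExtensions
    {
        // Builds source.Where(x => member(x).StartsWith(value)) etc. without
        // hard-coding which column is being filtered.
        public static IQueryable<T> WhereString<T>(
            this IQueryable<T> source,
            Expression<Func<T, string>> member,
            FilterOperator op,
            string value)
        {
            Expression constant = Expression.Constant(value);
            Expression body;
            switch (op)
            {
                case FilterOperator.StartsWith:
                    body = Expression.Call(member.Body,
                        typeof(string).GetMethod("StartsWith", new[] { typeof(string) }), constant);
                    break;
                case FilterOperator.Contains:
                    body = Expression.Call(member.Body,
                        typeof(string).GetMethod("Contains", new[] { typeof(string) }), constant);
                    break;
                case FilterOperator.IsEqualTo:
                    body = Expression.Equal(member.Body, constant);
                    break;
                default:
                    return source;   // unsupported operator: leave the query unchanged
            }
            var predicate = Expression.Lambda<Func<T, bool>>(body, member.Parameters);
            return source.Where(predicate);
        }
    }

    // Usage: the "Name" column is just a composed member expression.
    // contactList = contactList.WhereString(
    //     c => c.Firstname + " " + c.Lastname, filter.Operator, filter.Value.ToString());
    ```

    The question's "Name" case ORs three variants; that would need a small OrElse composition of the three bodies, but the operator switch is now written exactly once.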

  • CSS Positioning

    - by Davey
    Trying to mess with this WordPress theme and can't figure out why the sidebar is stacking underneath the content block. Any help would be very appreciated. http://www.buffalostreetbooks.com/events CSS: body { font-family: Arial, Helvetica, Verdana, Sans-serif; font-size: 10pt; background-color: #692022; background-image:url("http://www.buffalostreetbooks.com/wp-content/themes/autumn-leaves/images/repeatflower.png"); } body,h1#blog-title { margin: 0; padding: 0; } a { color: blue; } a:hover { color: #FF8C00; } a img { border: 0 none; } #wrapper { width: 960px; margin: 0 auto; background-color: #F4FBF4; border-left: 1px solid #ccc; border-right: 1px solid #ccc; } #header { background-image:url("http://www.buffalostreetbooks.com/wp-content/themes/autumn-leaves/images/headertime.png"); width:768px; height: 200px; } #inner-header { padding: 125px 1em 0; } h1#blog-title { font-size: 2em; } h1#blog-title a { color: #800000; } .entry-title a { color: #CD853F; } h1#blog-title a, .entry-title a, #footer a { text-decoration: none; } h1#blog-title a:hover, .entry-title a:hover, #footer a:hover { text-decoration: underline; } div.skip-link { display: none; } #menu { border-bottom: 1px solid #ccc; } #menu a { color: #000; } #menu a:hover { text-decoration: underline; } #menu li.current_page_item a, #menu li.current_page_item a:hover { background-color: #DFC28B; text-decoration: none; } #content { padding: 1em; width:600px; } .entry-title { font-size: 1.5em; margin: 1em 0 0 0; } abbr.published { color: #666; border: 0 none; } .entry-meta, .entry-date { color: #666; } #comments-list .avatar { float: left; margin-right: 1em; } #comments-list .n { font-weight: bold; } .entry-meta, .comment-meta { font-style: italic; } #comments-list p { clear: left; } #primary { padding-left: 1em; font-size: 0.9em; border-left: 1px solid #ccc; border-bottom: 1px solid #ccc; background-color: #FFFACD; } #footer { text-align: center; font-size: 0.8em; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; margin-bottom: 1em; } #inner-footer { padding: 1em 0; } .entry-meta, .entry-meta a, .comment-meta, .comment-meta a, .sidebar, .sidebar a, #footer, #footer a { color: #666; } /* LAYOUT: Two-Column (Right) DESCRIPTION: Two-column fluid layout with one sidebar right of content */ div#container { margin:0 0 0 0; width:960px; height:100%; } div#content { margin:0 0 0 0; } div.sidebar { overflow:hidden; width:280px; min-height:500px; clear:both; } div#secondary { clear:right; } div#footer { clear:both; width:100%; } /* Just some example content */ div#menu { height:2em; width:100%; } div#menu ul,div#menu ul ul { line-height:2em; list-style:none; margin:0; padding:0; } div#menu ul a { display:block; margin-right:1em; padding:0 0.5em; text-decoration:none; } div#menu ul ul ul a { font-style:italic; } div#menu ul li ul { left:-999em; position:absolute; } div#menu ul li:hover ul { left:auto; } .entry-title,.entry-meta { clear:both; } div#primary { } form#commentform .form-label { margin:1em 0 0; } form#commentform span.required { background:#fff; color:#c30; } form#commentform,form#commentform p { padding:0; } input#author,input#email,input#url,textarea#comment { padding:0.2em; } div.comments ol li { margin:0 0 3.5em; } textarea#comment { height:13em; margin:0 0 0.5em; overflow:auto; width:66%; } .alignright,img.alignright{ float:right; margin:1em 0 0 1em; } .alignleft,img.alignleft{ float:left; margin:1em 1em 0 0; } .aligncenter,img.aligncenter{ display:block; margin:1em auto; text-align:center; } div.gallery { clear:both; 
height:180px; margin:1em 0; width:100%; } p.wp-caption-text{ font-style:italic; } div.gallery dl{ margin:1em auto; overflow:hidden; text-align:center; } div.gallery dl.gallery-columns-1 { width:100%; } div.gallery dl.gallery-columns-2 { width:49%; } div.gallery dl.gallery-columns-3 { width:33%; } div.gallery dl.gallery-columns-4 { width:24%; } div.gallery dl.gallery-columns-5 { width:19%; } div#nav-above { margin-bottom:1em; } div#nav-below { margin-top:1em; } div#nav-images { height:150px; margin:1em 0; } div.navigation { height:1.25em; } div.navigation div.nav-next { float:right; text-align:right; } div.sidebar h3 { font-size:1.2em; } div.sidebar input#s { width:7em; } div.sidebar li { list-style:none; margin:0 0 2em; } div.sidebar li form { margin:0.2em 0 0; padding:0; } div.sidebar ul ul { margin:0 0 0 2em; } div.sidebar ul ul li { list-style:disc; margin:0; } div.sidebar ul ul ul { margin:0 0 0 0.5em; } div.sidebar ul ul ul li { list-style:circle; } div#menu ul li,div.gallery dl,div.navigation div.nav-previous { float:left; } input#author,input#email,input#url,div.navigation div { width:50%; } div.gallery *,div.sidebar div,div.sidebar h3,div.sidebar ul { margin:0; padding:0; }

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 Profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; the adapter uses just the trace file.

    Example Usages

    - Cache warming for SQL Server Analysis Services
    - Reading the flight recorder
    - Finding the longest-running queries on a server
    - Analyzing statements for CPU or memory by user, or some other criteria you choose

    Properties

    The Trace File Source adapter has two properties, which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported, for both the Database Engine (SQL Server) and Analysis Services. The properties are managed through the Editor form or can be set directly in the Properties Grid in Visual Studio.

    - AccessMode (Enumeration): determines how the Filename property is interpreted. The available values are DirectInput and Variable.
    - Filename (String): holds the path of the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition

    Hopefully the majority of you can skip this section entirely, but if you encounter problems processing a trace file this may explain them and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files that allow the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO-generated structures define EventClass as an integer, but the real value is a string.

    - TraceDataColumnsSQL.xml - SQL Server Database Engine trace columns
    - TraceDataColumnsAS.xml - SQL Server Analysis Services trace columns

    The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    If at runtime the component encounters a type conversion or sizing error, it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so that you can fix any further issues arising from usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below.

    Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process is described in detail in the related FAQ entry, How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads

    - Trace Sources for SQL Server 2005
    - Trace Sources for SQL Server 2008

    Version History

    - SQL Server 2008: Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009)
    - SQL Server 2005: Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters

    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research

    Research the tool(s) that you want to use. Some tools provide all of the features you would need, while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate them can really reduce the overhead of adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can then quickly identify what was changed in a commit (add/delete/modify), just like with code changes.

    Don't settle on just one tool; identify several. Then work with the team to evaluate them. Have the team test the following scenarios with each tool:

    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data - migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.

    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL-based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross-platform anyway. IMO, stick to SQL-based migrations.

    Reconciling Production

    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice.

    Add whatever schema-changes tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database; your tool should support doing this for you. You can add this table to production when you do your next release.

    Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.

    Say No to Shared Dev DBs

    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev DB server, once all projects are using DBVCS). Doing DBVCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes from those that others are working on.

    First prod release

    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation

    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors. Another good example is generating change scripts. We had been making these manually for months; then I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted when we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else in development, never be afraid to look for tools to make your job easier!

    Enjoy -Wes

  • Using LogParser - part 2

    - by fatherjack
    Sample files: PersonAddress.csv and SalesOrderDetail.tsv.

    In part 1 of this series we downloaded and installed LogParser and used it to list data from a CSV file. That was a good start, and in this article we are going to see the different ways we can stream data and choose whether a whole file is selected. We are also going to take a brief look at what file types we can interrogate.

    If we take the query from part 1 and add a value for the output parameter as -o:datagrid, so that the query becomes

    LOGPARSER "SELECT top 15 * FROM C:\LP\person_address.csv" -o:datagrid

    and run that, we get a different result: a pop-up dialog that lets us view the results in a resizable grid. Notice that because we didn't specify the columns we wanted returned by LogParser (we used SELECT *), it has added two columns to the recordset - filename and rownumber. This behaviour can be very useful, as we will see in future parts of this series. You can click Next 10 rows or All rows, or close the datagrid once you have finished reviewing the data.

    You may have noticed that the files I am working with are different file types - one is a csv (comma separated values) and the other is a tsv (tab separated values). If you want to convert a file from one to the other, LogParser makes it incredibly simple. Rather than using 'datagrid' as the value for the output parameter, use 'csv':

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\Sales_SalesOrderDetail.csv FROM C:\Sales_SalesOrderDetail.tsv" -i:tsv -o:csv

    Those familiar with SQL will not have to make a very big leap of faith to adjust the above query to filter records in or out of the source file. Let's get all the records from the same file where the Order Quantity (OrderQty) is more than 25:

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailOver25.csv FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty > 25" -i:tsv -o:csv

    Or we could find all those records where the Order Quantity is equal to 25 and output them to an XML file:

    logparser "SELECT SalesOrderID, SalesOrderDetailID, CarrierTrackingNumber, OrderQty, ProductID, SpecialOfferID, UnitPrice, UnitPriceDiscount, LineTotal, rowguid, ModifiedDate into C:\LP\Sales_SalesOrderDetailEq25.xml FROM C:\LP\Sales_SalesOrderDetail.tsv WHERE orderqty = 25" -i:tsv -o:xml

    All the standard comparison operators are to be found in LogParser: >, <, =, LIKE, BETWEEN, OR, NOT, AND.

    Input and Output file formats

    LogParser has a pretty impressive list of file formats that it can parse, and a good selection of output formats that will let you generate output in a format usable by whatever process or application you may be using.

    Input formats (from any of these):

    - IISW3C: parses IIS log files in the W3C Extended Log File Format.
    - IIS: parses IIS log files in the Microsoft IIS Log File Format.
    - BIN: parses IIS log files in the Centralized Binary Log File Format.
    - IISODBC: returns database records from the tables logged to by IIS when configured to log in the ODBC Log Format.
    - HTTPERR: parses HTTP error log files generated by Http.sys.
    - URLSCAN: parses log files generated by the URLScan IIS filter.
    - CSV: parses comma-separated values text files.
    - TSV: parses tab-separated and space-separated values text files.
    - XML: parses XML text files.
    - W3C: parses text files in the W3C Extended Log File Format.
    - NCSA: parses web server log files in the NCSA Common, Combined, and Extended Log File Formats.
    - TEXTLINE: returns lines from generic text files.
    - TEXTWORD: returns words from generic text files.
    - EVT: returns events from the Windows Event Log and from Event Log backup files (.evt files).
    - FS: returns information on files and directories.
    - REG: returns information on registry values.
    - ADS: returns information on Active Directory objects.
    - NETMON: parses network capture files created by NetMon.
    - ETW: parses Enterprise Tracing for Windows trace log files and live sessions.
    - COM: provides an interface to Custom Input Format COM Plugins.

    Output formats (to any of these):

    - NAT: formats output records as readable tabulated columns.
    - CSV: formats output records as comma-separated values text.
    - TSV: formats output records as tab-separated or space-separated values text.
    - XML: formats output records as XML documents.
    - W3C: formats output records in the W3C Extended Log File Format.
    - TPL: formats output records following user-defined templates.
    - IIS: formats output records in the Microsoft IIS Log File Format.
    - SQL: uploads output records to a table in a SQL database.
    - SYSLOG: sends output records to a Syslog server.
    - DATAGRID: displays output records in a graphical user interface.
    - CHART: creates image files containing charts.

    So, you can query data from any of the input types and really easily get it into a format where it is ready for analysis by other tools. To a DBA or network administrator with an enquiring mind this is a treasure trove. In part 3 we will look at working with multiple sources, and specifically at outputting to SQL format. See you there!

  • ODI 12c's Mapping Designer - Combining Flow Based and Expression Based Mapping

    - by Madhu Nair
    post by David Allan

    ODI is renowned for its declarative designer and minimal expression-based paradigm. The new ODI 12c release has extended this even further to provide an extended declarative mapping designer. The ODI 12c mapper is a fusion of ODI's new declarative designer with the familiar flow-based designer, while retaining ODI's key differentiators:

    - minimal expression-based definition;
    - the ability to incrementally design an interface and to extract/load data from any combination of sources; and, most importantly,
    - backing by ODI's extensible knowledge module framework.

    The declarative nature of the product has been extended to include an extensible library of common components that can be used to easily build simple to complex data integration solutions. Big usability improvements, through consistent interactions of components and concepts all constructed around the familiar knowledge module framework, provide the utmost flexibility. Here is a little taster.

    So what is a mapping? A mapping comprises a logical design and at least one physical design; it may have many. A mapping can have many targets, of any technology, and can be arbitrarily complex. You can build reusable mappings and use them in other mappings or other reusable mappings. In the example below, all of the information from an Oracle bonus table and a bonus file are joined with an Oracle employees table before being written to a target. Some things that are cool include the one-click expression cross-referencing, so you can easily see what's used where within the design.

    The logical design in a mapping describes what you want to accomplish (see the animated GIF here illustrating how the above mapping was designed). The physical design lets you configure how it is to be accomplished. So you could have one logical design that is realized as an initial load in one physical design and as an incremental load in another.

    In the physical design below we can customize how the mapping is accomplished by picking Knowledge Modules. In ODI 12c you can pick multiple nodes (logical or physical) and see common properties. This is useful as we can quickly compare property values across objects - below we can see knowledge module settings on the access points between execution units side by side; in the example, one table is retrieved via database links and the other is an external table.

    In the logical design I had selected an append mode for the integration type, so by default the IKM on the target will choose the most suitable/default IKM - which in this case is an in-built Oracle Insert IKM (see image below). This supports insert and select hints for the Oracle database (the ANSI SQL Insert IKM does not support these), so by default you will get direct path inserts with Oracle on this statement.

    In ODI 12c, the mapper is just that, a mapper. Design your mapping and write to multiple targets; the targets can be in the same data server, in different data servers or in totally different technologies - it does not matter. ODI 12c will derive and generate a plan that you can use or customize with knowledge modules. Some of the use cases which are greatly simplified include multiple heterogeneous targets, multi-target inserts for Oracle, and the writing of XML.

    Let's switch it up now and look at a slightly different example to illustrate expression reuse. In ODI you can define reusable expressions using user functions. These can be reused across mappings, with the implementations specialized per technology. So you can have common expressions across Oracle, SQL Server, Hive etc., shielding the design from the physical aspects of the generated language. Another way to reuse is within a mapping itself: in ODI 12c, expressions can be defined and reused within a mapping. Rather than replicating the expression text in larger expressions, you can decompose it into smaller snippets. Below you can see UNIT_TAX_AMOUNT has been defined and is used in two downstream target columns - in TOTAL_TAX_AMOUNT and in UNIT_TAX_AMOUNT (a recording of the calculation). You can see the columns that the expression depends on (upstream) and the columns the expression is used in (downstream) highlighted within the mapper. Also, multi-selecting attributes is a convenient way to see what's being used where; below I have selected TOTAL_TAX_AMOUNT in the target datastore and UNIT_TAX_AMOUNT in UNIT_CALC. You can now see many expressions at once and understand much more at a glance, without needlessly clicking around and memorizing information.

    Our mantra during development was to keep it simple and make the tool more powerful, doing even more for the user. The development team was a fusion of many teams from Oracle Warehouse Builder, Sunopsis and BEA AquaLogic, debating and perfecting the mapper in ODI 12c. This was quite a project, from supporting the capabilities of ODI in 11g to building the flow-based mapping tool to support the future. I hope this was a useful insight; there is so much more to come on this topic. This is just a preview of much more that you will see of the mapper in ODI 12c.
