Search Results

Search found 14679 results on 588 pages for 'index merge'.


  • SEO - Does google+other search engines index links within <noscript> tags?

    - by Joe
    I have set up some dropdown menus allowing users to find pages on my website by selecting options across multiple dropdowns, e.g. Color of Car, Year. This would generate a link like: mysite.xyz/blue/2010/ The only problem is, because this link is dynamically assembled with JavaScript, I've also had to assemble each possible combination from the dropdowns into a list like: <noscript> No javascript enabled? Here are all the links: <a href='mysite.xyz/blue/2009/'>mysite.xyz/blue/2009/</a> <a href='mysite.xyz/blue/2010/'>mysite.xyz/blue/2010/</a> <a href='mysite.xyz/red/2009/'>mysite.xyz/red/2009/</a> <a href='mysite.xyz/red/2010/'>mysite.xyz/red/2010/</a> </noscript> My question is, if I put these in a tag like this, will I be penalized in any way by search engines such as Google? I've already been doing so for some navigational stuff which required offsets etc. However, now I would be listing a whole list of links here too. I want to provide them here mainly so that Google can actually index my pages - but those without JavaScript can still navigate too. Your thoughts? Also, even though I have some links that appear to have been indexed, I AM NOT 100% SURE, which is why I'm asking :P

    Read the article

  • How to call a method in another class using the arraylist index in java?

    - by Puchatek
    Currently I have two classes. A Classroom class and a School class. public void addTeacherToClassRoom(Classroom myClassRoom, String TeacherName) I would like my method addTeacherToClassRoom to use the Classroom Arraylist index number to setTeacherName e.g. int 0 = maths int 1 = science I would like to setTeacherName "Daniel" in int 1 science. many, thanks public class Classroom { private String classRoomName; private String teacherName; public void setClassRoomName(String newClassRoomName) { classRoomName = newClassRoomName; } public String returnClassRoomName() { return classRoomName; } public void setTeacherName(String newTeacherName) { teacherName = newTeacherName; } public String returnTeacherName() { return teacherName; } } import java.util.ArrayList; public class School { private ArrayList<Classroom> classrooms; private String classRoomName; private String teacherName; public School() { classrooms = new ArrayList<Classroom>(); } public void addClassRoom(Classroom newClassRoom, String theClassRoomName) { classrooms.add(newClassRoom); classRoomName = theClassRoomName; } public void addTeacherToClassRoom(Classroom myClassRoom, String TeacherName) { myClassRoom.setTeacherName(TeacherName); } }
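    A minimal sketch of the index-based version (the int parameter and its use are my additions, not code from the question): since School already keeps an ArrayList<Classroom>, the teacher can be assigned through get(index):

        // Hypothetical variant inside the School class
        public void addTeacherToClassRoom(int classRoomIndex, String teacherName) {
            // Classrooms keep their insertion order, e.g. index 0 = maths, index 1 = science
            Classroom classRoom = classrooms.get(classRoomIndex);
            classRoom.setTeacherName(teacherName);
        }

        // usage: put "Daniel" into the science classroom that was added second
        school.addTeacherToClassRoom(1, "Daniel");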

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); }\n" fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.
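    Not a confirmed diagnosis, but one thing worth checking given the "wrong space" suspicion above: in the geometry shader the light is transformed as a position (w = 1.0), which folds the camera translation into the lighting and would change the result as the camera moves away from the origin. For a directional light the usual pattern is to transform a direction with w = 0.0 so only the rotation of the view matrix applies. A small GLSL sketch of the difference, reusing the uniform names from the shaders above (the assumption that vf_l_position describes a sun-like directional light is mine):

        // As written: vf_l_position is treated as a point, so the view translation leaks into the light direction.
        vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz);

        // Directional-light variant: transform a direction, not a position (w = 0.0 ignores translation).
        vec3 lightDirWorld = normalize(-vf_l_position);                                   // direction from light toward the scene
        vec3 lightDirView  = normalize((vf_m_view * vec4(lightDirWorld, 0.0)).xyz);       // rotation only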

    Read the article

  • How to Exclude Directory Effectively from Mod_REWRITE

    - by Codex73
    The problem: the 'css' directory also gets rewritten to 'index.php', and it somehow displays 'index.php' without style. It should display an error, as it has its own htaccess with 'Options All -Indexes'. Facts: the 'css' subdir doesn't have an index file (no htaccess on this folder); the 'store' subdir does have an index file and doesn't get rewritten (no htaccess on this folder). RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^.+/?$ index.php [NC,L] How can I effectively exclude 'css' and 'css/' from the above rule? Tried some variations already.
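    One common way to do this (a sketch, not tested against this exact setup - it assumes the css folder lives at the site root) is to add a condition that skips the catch-all rewrite whenever the request starts with /css/:

        RewriteEngine On
        RewriteBase /
        # Skip the catch-all rewrite for anything under /css/
        RewriteCond %{REQUEST_URI} !^/css(/|$)
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^.+/?$ index.php [NC,L]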

    Read the article

  • Fixing up Visual Studio’s gitignore, using IFix

    - by terje
    Originally posted on: http://geekswithblogs.net/terje/archive/2014/06/13/fixing-up-visual-studiorsquos-gitignore--using-ifix.aspxDownload tool Is there anything wrong with the built-in Visual Studio gitignore ???? Yes, there is !  First, some background: When you set up a git repo, it should be small and not contain anything not really needed.  One thing you should not have in your git repo is binary files. These binary files may come from two sources, one is the output files, in the bin and obj folders.  If you have a  gitignore file present, which you should always have (!!), these folders are excluded by the standard included file (the one included when you choose Team Explorer/Settings/GitIgnore – Add.) The other source are the packages folder coming from your NuGet setup.  You do use NuGet, right ?  Of course you do !  But, that gitignore file doesn’t have any exclude clause for those folders.  You have to add that manually.  (It will very probably be included in some upcoming update or release).  This is one thing that is missing from the built-in gitignore. To add those few lines is a no-brainer, you just include this: # NuGet Packages packages/* *.nupkg # Enable "build/" folder in the NuGet Packages folder since # NuGet packages use it for MSBuild targets. # This line needs to be after the ignore of the build folder # (and the packages folder if the line above has been uncommented) !packages/build/ Now, if you are like me, and you probably are, you add git repo’s faster than you can code, and you end up with a bunch of repo’s, and then start to wonder: Did I fix up those gitignore files, or did I forget it? The next thing you learn, for example by reading this blog post, is that the “standard” latest Visual Studio gitignore file exist at https://github.com/github/gitignore, and you locate it under the file name VisualStudio.gitignore.  Here you will find all the new stuff, for example, the exclusion of the roslyn ide folders was commited on May 24th.  So, you think, all is well, Visual Studio will use this file …..     I am very sorry, it won’t. Visual Studio comes with a gitignore file that is baked into the release, and that is by this time “very old”.  The one at github is the latest.  The included gitignore miss the exclusion of the nuget packages folder, it also miss a lot of new stuff, like the Roslyn stuff. So, how do you fix this ?  … note .. while we wait for the next version… You can manually update it for every single repo you create, which works, but it does get boring after a few times, doesn’t it ? IFix Enter IFix ,  install it from here. IFix is a command line utility (and the installer adds it to the system path, you might need to reboot), and one of the commands is gitignore If you run it from a directory, it will check and optionally fix all gitignores in all git repo’s in that folder or below.  So, start up by running it from your C:/<user>/source/repos folder. To run it in check mode – which will not change anything, just do a check: IFix  gitignore --check What it will do is to check if the gitignore file is present, and if it is, check if the packages folder has been excluded.  If you want to see those that are ok, add the --verbose command too.  The result may look like this: Fixing missing packages Let us fix a single repo by adding the missing packages structure,  using IFix --fix We first check, then fix, then check again to verify that the gitignore is correct, and that the “packages/” part has been added. 
If we open up the .gitignore, we see that the block shown below has been added to the end of the .gitignore file.   Comparing and fixing with latest standard Visual Studio gitignore (from github) Now, this tells you if you miss the nuget packages folder, but what about the latest gitignore from github ? You can check for this too, just add the option –merge (why this is named so will be clear later down) So, IFix gitignore --check –merge The result may come out like this  (sorry no colors, not got that far yet here): As you can see, one repo has the latest gitignore (test1), the others are missing either 57 or 150 lines.  IFix has three ways to fix this: --add --merge --replace The options work as follows: Add:  Used to add standard gitignore in the cases where a .gitignore file is missing, and only that, that means it won’t touch other existing gitignores. Merge: Used to merge in the missing lines from the standard into the gitignore file.  If gitignore file is missing, the whole standard will be added. Replace: Used to force a complete replacement of the existing gitignore with the standard one. The Add and Replace options can be used without Fix, which means they will actually do the action. If you combine with --check it will otherwise not touch any files, just do a verification.  So a Merge Check will  tell you if there is any difference between the local gitignore and the standard gitignore, a Compare in effect. When you do a Fix Merge it will combine the local gitignore with the standard, and add what is missing to the end of the local gitignore. It may mean some things may be doubled up if they are spelled a bit differently.  You might also see some extra comments added, but they do no harm. Init new repo with standard gitignore One cool thing is that with a new repo, or a repo that is missing its gitignore, you can grab the latest standard just by using either the Add or the Replace command, both will in effect do the same in this case. So, IFix gitignore --add will add it in, as in the complete example below, where we set up a new git repo and add in the latest standard gitignore: Notes The project is open sourced at github, and you can also report issues there.
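    For quick reference, the commands covered in the post line up roughly as follows (my own summary - the exact flag combinations are how I read the post, not copied from the tool's documentation):

        IFix gitignore --check             - report repos below the current folder missing the NuGet packages exclusions
        IFix gitignore --check --verbose   - also list the repos that are already OK
        IFix gitignore --fix               - add the missing NuGet packages block
        IFix gitignore --check --merge     - compare each .gitignore against the latest standard from github
        IFix gitignore --fix --merge       - append whatever the standard has that the local file lacks
        IFix gitignore --add               - give a repo without a .gitignore the full standard file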

    Read the article

  • Why might changes be populated from one NSManagedObjectContext to another without an explicit merge?

    - by Mike Laurence
    I'm working on an object import feature that utilizes multiple threads/NSManagedObjectContexts, using http://www.mac-developer-network.com/columns/coredata/may2009/ as my guide (note that I am developing for iPhone). For some reason, when I save one of my contexts the other is immediately updated with the changes, even though I have commented out my calls to mergeChangesFromContextDidSaveNotification. Are there any reasons the contexts might be merging into one another without an explicit call? Here a log of what's going on: // 1.) Main context is saved with "Peter Gabriel" // 2.) Test context is created, begins with same contents as main context // 3.) Main context is inserted with "Spoon" // 4.) Test context is inserted with "Phoenix" // Contents at this point: CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Spoon" ) CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" ) // 5.) testContext is saved // New contents of contexts: CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Phoenix", "Spoon" ) CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" ) As you can see, the test context is saved midway through, and the main context suddenly has the new objects from the test context, even though I haven't performed the whole NSManagedObjectContextDidSaveNotification/mergeChangesFromContext combo. My understanding is that no changes will ever be merged unless done so explicitly... does anyone know what's going on here?

    Read the article

  • iPad: Merge concept of SplitViewController and NavigationController in RootView?

    - by MikeN
    I'm having trouble merging the two concepts of using a SplitViewController in my main view and having the "RootView" controller that controls the left pane's popup/sidebar table view. I want to have the left "RootView" act as a navigation menu, but how do I do this when the RootView is tied through MainWindow.xib into the left pane of the SplitView? Basically, I want the left navigation to work just like the built-in email application's folder drill-down navigation. Is there an example iPad project out there that uses both a SplitView and a NavigationView for the left/Root pane?

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • LVDiff not working in Git

    - by Tanner
    I'm trying to get lvdiff from meta-diff suite to work with Git. My .gitconfig looks like this: [gui] recentrepo = C:/Users/Tanner/Desktop/FIRST 2010 Beta/Java/LoganRover [user] name = Tanner Smith email = [email protected] [merge "labview"] name = LabVIEW 3-Way Merge driver = 'C:/Program Files/National Instruments/Shared/LabVIEW Merge/LVMerge.exe' 'C:/Program Files/National Instruments/LabVIEW 8.6/LabVIEW.exe' %O %B %A %A recursive = binary [diff "lvdiff"] #command = 'C:/Program Files/meta-diff suite/lvdiff.exe' external = C:/Users/Tanner/Desktop/FIRST 2010 Beta/lvdiff.sh [core] autocrlf = true lvdiff.sh looks like this: #!/bin/sh "C:/Program Files/meta-diff suite/lvdiff.exe" "$2" "%5" | cat And my .gitattributes file looks like this: #Use a cusstom driver to merge LabVIEW files *.vi merge=labview #Use lvdiff as the externel diff program for LabVIEW files *.vi diff=lvdiff But everytime I do a diff, all Git returns is: diff --git a/Build DashBoard Data.vi b/Build DashBoard Data.vi index fd50547..662237f 100644 Binary files a/Build DashBoard Data.vi and b/Build DeashBoard Data.vi differ It is like it is not using it or even recognizing my changes. Any ideas?
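    Two things stand out here (a sketch of a possible fix, not verified against this setup). First, for a per-type diff driver git reads diff.<driver>.command; the external key only exists as the separate, global diff.external setting, so the commented-out command line was probably the right one. Second, a driver command receives seven arguments - path, old-file, old-hex, old-mode, new-file, new-hex, new-mode - so the new file is "$5"; the "%5" in lvdiff.sh looks like a typo. Roughly:

        # --- .gitconfig: define the driver with 'command' ---
        [diff "lvdiff"]
            command = 'C:/Users/Tanner/Desktop/FIRST 2010 Beta/lvdiff.sh'

        # --- lvdiff.sh: $2 = old file, $5 = new file ---
        #!/bin/sh
        "C:/Program Files/meta-diff suite/lvdiff.exe" "$2" "$5" | cat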

    Read the article

  • smarty path problem

    - by ruru
    here is my folder:
        index.php
        smarty (here: Smarty.class.php)
        admin (index.php, users.php)
    In index.php: $smarty->display('index.tpl'); In admin/index.php: $smarty->display('adminindex.tpl'); I get the error: Smarty error: unable to read resource: "adminindex.tpl". Any idea? thx
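    A guess at the cause (the question doesn't show the Smarty setup): if template_dir is a relative path such as 'templates/', Smarty resolves it against the current working directory, so a script running from admin/ ends up looking in admin/templates/ and can't find adminindex.tpl. A sketch of pointing Smarty at an absolute path instead - the bootstrap file and folder names here are assumptions:

        <?php
        // setup.php (hypothetical shared bootstrap) included from both index.php and admin/index.php
        require_once dirname(__FILE__) . '/smarty/Smarty.class.php';

        $smarty = new Smarty();
        // dirname(__FILE__) is always this file's folder, so admin/ scripts find the same templates
        $smarty->template_dir = dirname(__FILE__) . '/templates/';
        $smarty->compile_dir  = dirname(__FILE__) . '/templates_c/';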

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 21 (sys.dm_db_partition_stats)

    - by Tamarick Hill
    The sys.dm_db_partition_stats DMV returns page count and row count information for each table or index within your database. Lets have a quick look at this DMV so we can review some of the results. **NOTE: I am going to create an ‘ObjectName’ column in our result set so that we can more easily identify tables. SELECT object_name(object_id) ObjectName, * FROM sys.dm_db_partition_stats As stated above, the first column in our result set is an Object name based on the object_id column of this result set. The partition_id column refers to the partition_id of the index in question. Each index will have at least 1 unique partition_id and will have more depending on if the object has been partitioned. The index_id column relates back to the sys.indexes table and uniquely identifies an index on a given object. A value of 0 (zero) in this column would indicate the object is a HEAP and a value of 1 (one) would signify the Clustered Index. Next is the partition_number which would signify the number of the partition for a particular object_id. Since none of my tables in my result set have been partitioned, they all display 1 for the partition_number. Next we have the in_row_data_page_count which tells us the number of data pages used to store in-row data for a given index. The in_row_used_page_count is the number of pages used to store and manage the in-row data. If we look at the first row in the result set, we will see we have 700 for this column and 680 for the previous. This means that just to manage the data (not store it) is requiring 20 pages. The next column in_row_reserved_page_count is how many pages have been reserved, regardless if they are being used or not. The next 2 columns are used for storing LOB (Large Object) data which could be text, image, varchar(max), or varbinary(max) columns. The next two columns, row_overflow, represent pages used for data that exceed the 8,060 byte row size limit for the in-row data pages. The next columns used_page_count and reserved_page_count represent the sum of the in_row, lob, and row_overflow columns discussed earlier. Lastly is a row_count column which displays the number of rows that are in a particular index. This DMV is a very powerful resource for identifying page and row count information. By knowing the page counts for indexes within your database, you are able to easily calculate the size of indexes. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms187737.aspx Follow me on Twitter @PrimeTimeDBA
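    As a small footnote to that last point (my own sketch, not from the post): since each page is 8 KB, the used_page_count and row_count columns translate directly into size figures, for example:

        -- Approximate used size per index, derived from sys.dm_db_partition_stats (8 KB pages)
        SELECT OBJECT_NAME(ps.object_id)            AS ObjectName,
               ps.index_id,
               SUM(ps.used_page_count) * 8 / 1024.0 AS UsedSpaceMB,
               SUM(ps.row_count)                    AS TotalRows
        FROM sys.dm_db_partition_stats AS ps
        GROUP BY ps.object_id, ps.index_id
        ORDER BY UsedSpaceMB DESC;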

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Basics of a Query Hint

    - by pinaldave
    This blog post is inspired from SQL Architecture Basics Joes 2 Pros: Core Architecture concepts – SQL Exam Prep Series 70-433 – Volume 3. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is follow up blog post of my earlier blog post on the same subject - SQL SERVER – Introduction to Basics of a Query Hint – A Primer. In the article we discussed various basics terminology of the query hints. The article further covers following important concepts of query hints. Expecting Seek and getting a Scan Creating an index for improved optimization Implementing the query hint Above three are the most important concepts related to query hint and SQL Server.  There are many more things one has to learn but without beginners fundamentals one can’t learn the advanced  concepts. Let us have small quiz and check how many of you get the fundamentals right. Quiz 1) You have the following query: DECLARE @UlaChoice TinyInt SET @Type = 1 SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice You have a nonclustered index named IX_Legal_Ula on the UlaChoice field. The Primary key is on the ID field and called PK_Legal_ID 99% of the time the value of the @UlaChoice is set to ‘YP101′. What query will achieve the best optimization for this query? SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(X_Legal_Ula)) SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice WITH(INDEX(PK_Legal_ID)) SELECT * FROM LegalActivity WHERE UlaChoice = @UlaChoice OPTION (Optimize FOR(@UlaChoice = ‘YP101′)) 2) You have the following query: SELECT * FROM CurrentProducts WHERE ShortName = ‘Yoga Trip’ You have a nonclustered index on the ShortName field and the query runs an efficient index seek. You change your query to use a variable for ShortName and now you are using a slow index scan. What query hint can you use to get the same execution time as before? WITH LOCK FAST OPTIMIZE FOR MAXDOP READONLY Now make sure that you write down all the answers on the piece of paper. Watch following video and read earlier article over here. If you want to change the answer you still have chance. Solution 1) 3 2) 4 Now compare let us check the answers and compare your answers to following answers. I am very confident you will get them correct. Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5 Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Merge functionality of two xsl files into a single file (not a xsl import or include issue)

    - by anuamb
    I have two xsl files; both of them perform different tasks on the source xml, one after another. Now I need a single xsl file which will actually perform both these tasks (it's not an issue of xsl:import or xsl:include). Say my source xml is:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
              <CREDIT_DER/>
            </LVL2>
            <LVL21>
              <AZ>#+#</AZ>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ>#+#</BZ>
              <CZ/>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    My first xsl (tr1.xsl) removes all nodes whose value is blank or null:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="@*|node()">
            <xsl:if test=". != '' or ./@* != ''">
              <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
              </xsl:copy>
            </xsl:if>
          </xsl:template>
        </xsl:stylesheet>

    The output here is:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV>#+#</ORIG_EXP_PRE_CONV>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT>#+#</GUARANTEE_AMOUNT>
            </LVL2>
            <LVL21>
              <AZ>#+#</AZ>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ>#+#</BZ>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    And my second xsl (tr2.xsl) does a global replace (of #+# with the blank text '') on the output of the first xsl:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:template name="globalReplace">
            <xsl:param name="outputString"/>
            <xsl:param name="target"/>
            <xsl:param name="replacement"/>
            <xsl:choose>
              <xsl:when test="contains($outputString,$target)">
                <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
                <xsl:call-template name="globalReplace">
                  <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
                  <xsl:with-param name="target" select="$target"/>
                  <xsl:with-param name="replacement" select="$replacement"/>
                </xsl:call-template>
              </xsl:when>
              <xsl:otherwise>
                <xsl:value-of select="$outputString"/>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>
          <xsl:template match="text()">
            <xsl:call-template name="globalReplace">
              <xsl:with-param name="outputString" select="."/>
              <xsl:with-param name="target" select="'#+#'"/>
              <xsl:with-param name="replacement" select="''"/>
            </xsl:call-template>
          </xsl:template>
          <xsl:template match="@*|*">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    So my final output is:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV/>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT/>
            </LVL2>
            <LVL21>
              <AZ/>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ/>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    My concern is that instead of these two xsl files (tr1.xsl and tr2.xsl) I only want a single xsl (tr.xsl) which gives me the final output. But when I combine these two as:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="@*|node()">
            <xsl:if test=". != '' or ./@* != ''">
              <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
              </xsl:copy>
            </xsl:if>
          </xsl:template>
          <!-- globalReplace named template, identical to the one in tr2.xsl above -->
          <xsl:template match="text()">
            <xsl:call-template name="globalReplace">
              <xsl:with-param name="outputString" select="."/>
              <xsl:with-param name="target" select="'#+#'"/>
              <xsl:with-param name="replacement" select="''"/>
            </xsl:call-template>
          </xsl:template>
          <xsl:template match="@*|*">
            <xsl:copy>
              <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>

    it outputs:

        <LIST_R7P1_1>
          <R7P1_1>
            <LVL2>
              <ORIG_EXP_PRE_CONV/>
              <EXP_AFT_CONV>abc</EXP_AFT_CONV>
              <GUARANTEE_AMOUNT/>
              <CREDIT_DER/>
            </LVL2>
            <LVL21>
              <AZ/>
              <BZ>bz1</BZ>
              <AZ>az2</AZ>
              <BZ/>
              <CZ/>
            </LVL21>
          </R7P1_1>
        </LIST_R7P1_1>

    Only the replacement is performed, but not the null/blank node removal.
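    As far as I can tell, the problem with the combined version is that match="@*|node()" and match="@*|*" both match every element with the same default priority, so the processor falls back to the last template and the blank-removal test never runs. One way to get both behaviours in a single pass (a sketch of the idea, not a tested drop-in) is to keep a single element/attribute template that does the emptiness check, and let the text() template do the #+# replacement:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

          <!-- elements and attributes: copy only when non-empty -->
          <xsl:template match="@*|*">
            <xsl:if test=". != '' or ./@* != ''">
              <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
              </xsl:copy>
            </xsl:if>
          </xsl:template>

          <!-- text nodes: strip the #+# marker while copying -->
          <xsl:template match="text()">
            <xsl:call-template name="globalReplace">
              <xsl:with-param name="outputString" select="."/>
              <xsl:with-param name="target" select="'#+#'"/>
              <xsl:with-param name="replacement" select="''"/>
            </xsl:call-template>
          </xsl:template>

          <!-- globalReplace exactly as in tr2.xsl -->
          <xsl:template name="globalReplace">
            <xsl:param name="outputString"/>
            <xsl:param name="target"/>
            <xsl:param name="replacement"/>
            <xsl:choose>
              <xsl:when test="contains($outputString,$target)">
                <xsl:value-of select="concat(substring-before($outputString,$target), $replacement)"/>
                <xsl:call-template name="globalReplace">
                  <xsl:with-param name="outputString" select="substring-after($outputString,$target)"/>
                  <xsl:with-param name="target" select="$target"/>
                  <xsl:with-param name="replacement" select="$replacement"/>
                </xsl:call-template>
              </xsl:when>
              <xsl:otherwise>
                <xsl:value-of select="$outputString"/>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:template>

        </xsl:stylesheet>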

    Read the article

  • What is the best way to join/merge two tables by column cell matching in Excel?

    - by blunders
    I've found this Excel add-in to buy that appears to do what I need, but I'd rather have code that's open to use as I wish. While a GUI is nice, it's not required. In an attempt to make the question more clear, I'm adding two sample "input" tables in tab-delimited form, and the resulting output table:

        SAMPLE_INPUT_TABLE_01
        NAME<tab>Location
        John<tab>US
        Mike<tab>CN
        Tom<tab>CA
        Sue<tab>RU

        SAMPLE_INPUT_TABLE_02
        NAME<tab>Age
        John<tab>18
        Mike<tab>36
        Tom<tab>54
        Mary<tab>18

        SAMPLE_OUTPUT_TABLE
        NAME<tab>Age<tab>Location
        John<tab>18<tab>US
        Mike<tab>36<tab>CN
        Tom<tab>54<tab>CA
        Sue<tab>""<tab>RU
        Mary<tab>18<tab>""

    If it matters, I'm using Office 2010 on Windows 7.
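    For a formula-only sketch (no add-in, no VBA), one workable layout is: paste both NAME columns onto a result sheet, use Data > Remove Duplicates to get the union of keys in column A, then pull each attribute with a lookup. The sheet names Ages and Locations and the A:B column layout below are assumptions about how the two source tables are arranged:

        ' in B2 (Age), filled down
        =IFERROR(VLOOKUP($A2, Ages!$A:$B, 2, FALSE), "")
        ' in C2 (Location), filled down
        =IFERROR(VLOOKUP($A2, Locations!$A:$B, 2, FALSE), "")

    IFERROR is available from Excel 2007 onward, so it works in the Office 2010 setup mentioned above; the empty string reproduces the "" cells in the sample output where a name exists in only one table.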

    Read the article

  • Is it possible to "merge" the values of multiple records into a single field without using a stored procedure?

    - by j0rd4n
    A co-worker posed this question to me, and I told them, "No, you'll need to write a sproc for that". But I thought I'd give them a chance and put this out to the community. Essentially, they have a table with keys mapping to multiple values. For a report, they want to aggregate on the key and "mash" all of the values into a single field. Here's a visual:

        Key   Value
        ---   -----
        1     A
        1     B
        1     C
        2     X
        2     Y

    The result would be as follows:

        Key   Value
        ---   -------
        1     A,B,C
        2     X,Y

    They need this in SQL Server 2005. Again, I think they need to write a stored procedure, but if anyone knows a magic out-of-the-box function that does this, I'd be impressed.
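    SQL Server 2005 can in fact do this without a stored procedure by abusing FOR XML PATH('') as a string aggregator. A sketch, assuming the table is called KeyValues with columns [Key] and [Value] (names are placeholders, not taken from the question):

        SELECT k.[Key],
               STUFF((SELECT ',' + v.[Value]
                      FROM KeyValues AS v
                      WHERE v.[Key] = k.[Key]
                      FOR XML PATH('')), 1, 1, '') AS [Value]   -- STUFF strips the leading comma
        FROM (SELECT DISTINCT [Key] FROM KeyValues) AS k;

    One caveat: values containing XML-reserved characters (&, <, >) come back entity-encoded with this trick, so it suits simple codes like the A/B/C example better than free text.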

    Read the article

  • Calendar event problems

    - by Marin
    Good morning, everybody! Can you please help me? I have a problem with this part of the script:

        $output = cal_top();
        switch($action){
          case "add":
            include("includes/event.php");
            $output .= cal_event_form('add');
            break;
          case "delete":
            include("includes/delete.php");
            include('includes/viewdate.php');
            $del_error = cal_del();
            if($del_error!="") $output .= "<center><span class='failure'>$del_error</span></center><br>";
            $output .= cal_display();
            break;
          case "modify":
            include("includes/event.php");
            $output .= cal_event_form('modify');
            break;
          case "viewdate":
            include("includes/viewdate.php");
            $output .= cal_display();
            break;
          case "viewevent":
            include("includes/viewevent.php");
            $output .= cal_display();
            break;
          case "search":
            include("includes/search.php");
            $output .= cal_search_form();
            break;
          case "submitevent":
            include('includes/eventsub.php');
            include('includes/viewdate.php');
            $sub_error = cal_submit_event();
            if($sub_error!="") $output .= "<center><span class='failure'>$sub_error</span></center><br>";
            $output .= cal_display();
            $_SESSION['cal_action'] = "viewdate";
            break;
          case "admin":
            include('includes/admin.php');
            $output .= cal_adminsection();
            break;
          case "login":
            $_SESSION['cal_noautologin'] = 1;
            include('includes/login.php');
            $output .= cal_login_page();
            break;
          case "logout":
            cal_logout();
            $_SESSION['cal_noautologin'] = 1;
            cal_clear_permissions();
            cal_load_permissions();

    It shows me these errors:

        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 145
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 149
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 156
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 160
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 164
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 168
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 172
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 180
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 184
        Notice: Undefined variable: action in C:\wamp\www\ReeceCalendar_0.9\cal\index.php on line 189

    Your help would be much appreciated. Thank you.
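    All of those notices point at the same thing: $action is read by the switch but never defined when the page is requested without a parameter. A hedged sketch of a fix, assuming the action normally arrives as a request parameter named action and that 'viewdate' is a sensible default (both are guesses about the rest of the calendar script):

        // define $action before the switch so the notices go away;
        // 'action' and 'viewdate' are assumed names, adjust to match the real script
        $action = isset($_REQUEST['action']) ? $_REQUEST['action'] : 'viewdate';

    The session line already present in the script ($_SESSION['cal_action'] = "viewdate") suggests the original code may instead restore the last action from the session; either way, the point is to give $action a value before it is switched on.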

    Read the article

  • What is wrong with my .htaccess file? I'm trying to permanently redirect my whole site to the index.

    - by SocialAddict
    This is giving me a 500 internal server error. Any suggestions? I have tried various examples but I think I'm missing something...

        RewriteEngine On
        RewriteCond %{request_uri}!^/index\.htm
        RewriteRule ^(.*)$ http://www.thedomain.co.uk [R=permanent,L]

    It displays the homepage if I navigate there, but anything that meets the conditions (everything apart from index.htm) gives the server 500.
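    The 500 is most likely a syntax problem rather than a logic problem: RewriteCond needs a space between the test string and the pattern, and the server variable name is case-sensitive, so %{request_uri} should be %{REQUEST_URI}. A sketch of the corrected block, assuming the intent is to send everything except index.htm to the home page:

        RewriteEngine On
        # note the space before the pattern and the upper-case variable name
        RewriteCond %{REQUEST_URI} !^/index\.htm
        RewriteRule ^(.*)$ http://www.thedomain.co.uk [R=permanent,L]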

    Read the article

  • Mercurial: How To Merge 2 Repositories that share a common ancestor but are not clones of the same repository?

    - by sylvanaar
    I am using hg-subversion, and I have 2 different hg repositories: one from our svn trunk, and one from a branch of the trunk. I would like to link them somehow. At some point in the history both Hg repositories will be identical; is there some way to join them? In other words, is there a way to relate the repositories from within Hg? The technique I am currently using is to just export the second repository over the top of the working copy of the revision they share, and then commit that working copy as a branch in Hg, but I lose the history this way. Any advice would be great.
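    One way to relate two repositories that Mercurial considers unrelated is to force-pull one into the other and merge the resulting heads, which keeps both histories. A sketch, where the local paths are assumptions:

        cd trunk-hg                      # the clone converted from svn trunk
        hg pull -f ../branch-hg          # -f/--force allows pulling from an unrelated repository
        hg heads                         # there should now be two heads, one per history
        hg merge                         # merge them at (or near) the revision where the trees match
        hg commit -m "join branch history with trunk history"

    Whether hgsubversion is then happy to push the merged result back to Subversion is a separate question worth testing on a throwaway clone first.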

    Read the article

  • crazy asp.net error

    - by dominic
    Hi, I am having a problem debugging an issue on a website. Everything works locally, and the local and server databases are the same. The strange thing about the error is that it points to my local dev machine in the error stack. Is that crazy or what? The files are published and being hosted on a server machine, yet the error points to a line of code on my local dev box. I feel like I am losing the plot. Can someone please help me clear the air here, because this is very weird.

        Error in '/' Application.
        Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index]
        System.Collections.ArrayList.get_Item(Int32 index) +10066148
        System.Collections.Specialized.NameObjectCollectionBase.BaseGet(Int32 index) +17
        System.Web.HttpFileCollection.get_Item(Int32 index) +9
        System.Web.HttpFileCollectionWrapper.get_Item(Int32 index) +18
        PitchPortal.Web.Binders.DocumentModelBinder.ValidateAndAssignPostedFile(ControllerContext controllerContext, ModelBindingContext bindingContext, Document doc) in C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Web\Binders\DocumentModelBinder.cs:73
        PitchPortal.Web.Binders.DocumentModelBinder.BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) in C:\Users\Bich Vu\Documents\Visual Studio 2008\Projects\PitchPortal\PitchPortal.Web\Binders\DocumentModelBinder.cs:45
        System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +404
        System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +140
        System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +658084
        System.Web.Mvc.Controller.ExecuteCore() +125
        System.Web.Mvc.<c_DisplayClass8.b_4() +48
        System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21
        System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +15
        System.Web.Mvc.Async.WrappedAsyncResult1.End() +85
        System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +51
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +454
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +263
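    Two separate things appear to be going on here. The local path in the stack trace is not evidence that local code is running: it is simply the source path recorded in the PDB when the assembly was compiled on the dev box, and it travels with the debug symbols. The actual failure is an index into Request.Files that does not exist, presumably because the request on the server arrives without the expected upload. A hedged sketch of a guard inside the custom model binder follows; the property names are guesses, since the real DocumentModelBinder code is not shown:

        // hypothetical reconstruction of ValidateAndAssignPostedFile
        HttpFileCollectionBase files = controllerContext.HttpContext.Request.Files;
        if (files == null || files.Count == 0)
        {
            return;                         // nothing was posted, so never index into the collection
        }
        HttpPostedFileBase posted = files[0];
        if (posted != null && posted.ContentLength > 0)
        {
            doc.FileName = System.IO.Path.GetFileName(posted.FileName);   // hypothetical assignment
        }

    Comparing what the browser actually posts in both environments (for example with the network tab or Fiddler) should show why the file collection is empty on the server.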

    Read the article

  • How do I scroll through 100 photos in UIScrollView on the iPhone

    - by mwangima
    I'm trying to scroll through images being downloaded from a user's online album (like in the Facebook iPhone app). Since I can't load all images into memory, I'm loading three at a time (prev, current & next), then removing image (prev-1) and image (next+1) from the UIScrollView's subviews. My logic works fine in the simulator but fails on the device with this error: [CALayer retain]: message sent to deallocated instance. What could be the problem? Below is my code sample:

        - (void)scrollViewDidEndDecelerating:(UIScrollView *)_scrollView {
            pageControlIsChangingPage = NO;
            CGFloat pageWidth = _scrollView.frame.size.width;
            int page = floor((_scrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1;
            if (page > 1 && page <= (pageControl.numberOfPages - 3)) {
                [self removeThisView:(page - 2)];
                [self removeThisView:(page + 2)];
            }
            if (page > 0) {
                NSLog(@"<< PREVIOUS");
                [self showPhoto:(page - 1)];
            }
            [self showPhoto:page];
            if (page < (pageControl.numberOfPages - 1)) {
                //NSLog(@"NEXT ");
                [self showPhoto:page + 1];
                NSLog(@"FINISHED LOADING NEXT ");
            }
        }

        - (void)showPhoto:(NSInteger)index {
            CGFloat cx = scrollView.frame.size.width * index;
            CGFloat cy = 40;
            CGRect rect = CGRectMake(0, 0, 320, 480);
            rect.origin.x = cx;
            rect.origin.y = cy;
            AsyncImageView *asyncImage = [[AsyncImageView alloc] initWithFrame:rect];
            asyncImage.tag = 999;
            NSURL *url = [NSURL URLWithString:[pics objectAtIndex:index]];
            [asyncImage loadImageFromURL:url place:CGRectMake(150, 190, 30, 30) member:memberid isSlide:@"Yes" picId:[picsIds objectAtIndex:index]];
            [scrollView addSubview:asyncImage];
            [asyncImage release];
        }

        - (void)removeThisView:(NSInteger)index {
            if (index < [[scrollView subviews] count] && [[scrollView subviews] objectAtIndex:index] != nil) {
                if ([[[scrollView subviews] objectAtIndex:index] isKindOfClass:[AsyncImageView class]] || [[[scrollView subviews] objectAtIndex:index] isKindOfClass:[UIImageView class]]) {
                    [[[scrollView subviews] objectAtIndex:index] removeFromSuperview];
                }
            }
        }

    For the record it works OK in the simulator, but not on the iPhone device itself. Any ideas will be appreciated. Cheers, Fred.
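    One thing that stands out is that removeThisView: indexes into [scrollView subviews], but the order of that array is not guaranteed to match page numbers once views are added and removed, so the wrong view (or one whose image is still being set asynchronously) can be pulled out, which fits the deallocated-instance crash. A sketch of a tag-based alternative; the 1000 offset is an arbitrary assumption to keep page tags away from the existing 999 tag:

        // when creating the page view, tag it with its page index
        asyncImage.tag = 1000 + index;

        // remove a page by tag instead of by position in -subviews
        - (void)removeThisView:(NSInteger)index {
            UIView *pageView = [scrollView viewWithTag:1000 + index];
            if (pageView != nil) {
                [pageView removeFromSuperview];
            }
        }

    It is also worth checking that AsyncImageView cancels its download (or nils its delegate) before being removed, since a callback firing after removal is another common source of this crash.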

    Read the article

  • Making a RewriteRule remove the .php extension?

    - by Sam
    Hello, I have come up with a RewriteRule that lets you reach any page on my website without typing the .php extension, because it is automatically added to the URL. The rule is: RewriteRule ^(\w+)/?$ /$1.php It takes anything you type on my site and adds .php to it, so you can put in http://sampardee.com/index and it pulls up index.php. Now my question is how to detect when a user enters http://sampardee.com/index.php and change it to http://sampardee.com/index. How could I do so with a RewriteRule?
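    One common recipe (a sketch, untested on this particular site) is to match the original client request line via THE_REQUEST for the external redirect, so that the internal rewrite back to the .php file does not create a redirect loop:

        RewriteEngine On

        # client asked for /something.php directly: send a 301 to the extensionless URL
        RewriteCond %{THE_REQUEST} \s/([^.\s?]+)\.php[\s?]
        RewriteRule ^ /%1 [R=301,L]

        # internally map the extensionless URL back onto the real file (the existing rule)
        RewriteRule ^(\w+)/?$ /$1.php [L]

    THE_REQUEST reflects what the browser sent, not the internally rewritten URL, which is what prevents the two rules from bouncing requests back and forth.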

    Read the article
