Search Results

Search found 18729 results on 750 pages for 'edit'.

  • Shadow mapping with deferred shading for directional lights - shadow map projection problem

    - by Harry
    I'm trying to add shadow mapping to my engine. I started with directional lights because they seemed to be the easiest, but I was wrong :) I have implemented deferred shading and I retrieve position from depth. I think the biggest problem is there, but the code looks OK to me. Now more about the problem: the shadow map projected onto meshes looks badly scaled and translated, and some information from the shadow map texture isn't visible. You can see it on this screen: http://img5.imageshack.us/img5/2254/93dn.png The yellow frustum is the light frustum, and I have mixed the shadow map preview with the actual scene. As you can see, the shadows are in the wrong place, and the shadows of the cone and sphere aren't visible. Could you look at my code and tell me where the mistake is?

        // create shadow map
        if(!_shd) glGenTextures(1, &_shd);
        glBindTexture(GL_TEXTURE_2D, _shd);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL); // shadow map size
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _shd, 0);
        glDrawBuffer(GL_NONE);

        // setting camera
        Vector dire = Vector(0,0,1);
        ACamera.setLookAt(dire, Vector(0));
        ACamera.setPerspectiveView(60.0f, 1, 0.1f, 10.0f); // currently needed for proper frustum corners calculation
        Vector min(ACamera._point[0]), max(ACamera._point[0]);
        for(int i = 0; i < 8; i++){
            max = Max(max, ACamera._point[i]);
            min = Min(min, ACamera._point[i]);
        }
        ACamera.setOrthogonalView(min.x, max.x, min.y, max.y, -max.z, -min.z);

        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _s_buffer); // framebuffer for shadow map
        // rendering to depth buffer

        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, _g_buffer);
        Shaders["DirLight"].set(true);
        Matrix4 bias;
        bias.x.set(0.5, 0.0, 0.0, 0.0);
        bias.y.set(0.0, 0.5, 0.0, 0.0);
        bias.z.set(0.0, 0.0, 0.5, 0.0);
        bias.w.set(0.5, 0.5, 0.5, 1.0);
        Shaders["DirLight"].set("textureMatrix", ACamera.matrix * Projection3D * bias); // the order of multiplications is 100% correct; everything gives me the same result as using glm
        glActiveTexture(GL_TEXTURE5);
        glBindTexture(GL_TEXTURE_2D, _shd);
        lightDir(dir); // light calculations

    The vertex shader does nothing related to shadow calculations. Here is the pixel shader function that calculates whether a pixel is in shadow or not:

        float readShadowMap(vec3 eyeDir) {
            // retrieve depth of pixel
            float z = texture2D(depth, gl_FragCoord.xy/screen).z;
            vec3 pos = vec3(gl_FragCoord.xy/screen, z);
            // transform by the projection and view inverse
            vec4 worldSpace = inverse(View)*inverse(ProjectionMatrix)*vec4(pos*2-1,1);
            worldSpace /= worldSpace.w;
            vec4 coord = textureMatrix*worldSpace;
            float vis = 1.0f;
            if(texture2D(shadow, coord.xy).z < coord.z-0.001) vis = 0.2f;
            return vis;
        }

    I also have a question about shadows specifically for directional lights. Currently I always look at the (0,0,0) position, and in a further implementation I will have to move the light frustum along with the camera frustum. I've found how to do this here: http://www.gamedev.net/topic/505893-orthographic-projection-for-shadow-mapping/ but it doesn't give me what I want. Maybe that's because of the problems mentioned above, but I want to know your opinion.

    EDIT: vec4 worldSpace is the position read from the depth of the scene (not the shadow map). Maybe I wasn't precise, so I'll quickly explain what is what: View is the camera view matrix, ProjectionMatrix is the camera projection. First I try to get the world-space position from the depth map, and then multiply it by textureMatrix, which is light view * light projection * bias. The rest of the code is the same as in many tutorials. Because of the deferred rendering I can't use the vertex shader to compute something like gl_Position = textureMatrix * gl_Vertex and have it interpolated in the fragment shader, so I want to get it from the depth buffer.

    EDIT2: I also tried to do it as in the Coding Labs tutorial about Shadow Mapping with Deferred Rendering, but unfortunately that also gives wrong results.
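    For reference, here is a minimal sketch (plain Java, row-major 4x4 arrays) of how the bias * lightProjection * lightView texture matrix is conventionally assembled. The names are illustrative, not the engine's actual API; if your Matrix4 type is column-major or multiplies vectors on the left, the order must be reversed accordingly.

        // Sketch: assembling the shadow texture matrix. Assumes row-major
        // matrices and column vectors multiplied on the right.
        public class ShadowMatrixSketch {
            // Bias matrix: maps clip space [-1,1] to texture space [0,1].
            static final double[][] BIAS = {
                {0.5, 0.0, 0.0, 0.5},
                {0.0, 0.5, 0.0, 0.5},
                {0.0, 0.0, 0.5, 0.5},
                {0.0, 0.0, 0.0, 1.0}
            };

            static double[][] multiply(double[][] a, double[][] b) {
                double[][] r = new double[4][4];
                for (int i = 0; i < 4; i++)
                    for (int j = 0; j < 4; j++)
                        for (int k = 0; k < 4; k++)
                            r[i][j] += a[i][k] * b[k][j];
                return r;
            }

            // textureMatrix = bias * lightProjection * lightView, so that a
            // world-space point ends up in shadow-map texture space.
            static double[][] shadowTextureMatrix(double[][] lightView,
                                                  double[][] lightProjection) {
                return multiply(BIAS, multiply(lightProjection, lightView));
            }
        }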

  • How to rotate a set of points on z = 0 plane in 3-D, preserving pairwise distances?

    - by cagirici
    I have a set of points double[] n on the plane z = 0, and another set of points double[] m on the plane ax + by + cz + d = 0. The length of n is equal to the length of m. Also, the Euclidean distance between n[i] and n[j] is equal to the Euclidean distance between m[i] and m[j]. I want to rotate n[] in 3-D such that n[i] = m[i] would be true for all i. In other words, I want to turn a plane into another plane, preserving the pairwise distances. Here's my code in Java, but it does not help much:

        double[] rotate(double[] point, double[] currentEquation, double[] targetEquation) {
            double[] currentNormal = new double[]{currentEquation[0], currentEquation[1], currentEquation[2]};
            double[] targetNormal = new double[]{targetEquation[0], targetEquation[1], targetEquation[2]};
            targetNormal = normalize(targetNormal);
            double angle = angleBetween(currentNormal, targetNormal);
            double[] axis = cross(targetNormal, currentNormal);
            double[][] R = getRotationMatrix(axis, angle);
            // Apply the rotation matrix to the point (the original code
            // returned an undefined variable here).
            double[] rotated = new double[3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    rotated[i] += R[i][j] * point[j];
            return rotated;
        }

        double[][] getRotationMatrix(double[] axis, double angle) {
            axis = normalize(axis);
            double cA = Math.cos(angle);
            double sA = Math.sin(angle);
            Matrix I = Matrix.identity(3, 3);
            Matrix a = new Matrix(axis, 3);
            Matrix aT = a.transpose();
            Matrix a2 = a.times(aT);
            double[][] B = {
                {0, axis[2], -1*axis[1]},
                {-1*axis[2], 0, axis[0]},
                {axis[1], -1*axis[0], 0}
            };
            Matrix A = new Matrix(B);
            Matrix R = I.minus(a2);
            R = R.times(cA);
            R = R.plus(a2);
            R = R.plus(A.times(sA));
            return R.getArray();
        }

    This is what I get: the point set on the right side is actually part of the point set on the left side, but they are on another plane. Here's a 2-D representation of what I am trying to do: there are two lines. The line on the bottom is the line I have; the line on the top is the target line. The distances are preserved (a, b and c).

    Edit: I have tried both methods written in the answers. They both fail (I guess).
    Method of Martijn Courteaux:

        public static double[][] getRotationMatrix(double[] v0, double[] v1, double[] v2,
                double[] u0, double[] u1, double[] u2) {
            RealMatrix M1 = new Array2DRowRealMatrix(new double[][]{
                {1,0,0,-1*v0[0]},
                {0,1,0,-1*v0[1]},
                {0,0,1,0},
                {0,0,0,1}
            });
            RealMatrix M2 = new Array2DRowRealMatrix(new double[][]{
                {1,0,0,-1*u0[0]},
                {0,1,0,-1*u0[1]},
                {0,0,1,-1*u0[2]},
                {0,0,0,1}
            });
            Vector3D imX = new Vector3D(
                (v0[1] - v1[1])*(u2[0] - u0[0]) - (v0[1] - v2[1])*(u1[0] - u0[0]),
                (v0[1] - v1[1])*(u2[1] - u0[1]) - (v0[1] - v2[1])*(u1[1] - u0[1]),
                (v0[1] - v1[1])*(u2[2] - u0[2]) - (v0[1] - v2[1])*(u1[2] - u0[2])
            ).scalarMultiply(1/((v0[0]*v1[1])-(v0[0]*v2[1])-(v1[0]*v0[1])+(v1[0]*v2[1])+(v2[0]*v0[1])-(v2[0]*v1[1])));
            Vector3D imZ = new Vector3D(findEquation(u0, u1, u2));
            Vector3D imY = Vector3D.crossProduct(imZ, imX);
            double[] imXn = imX.normalize().toArray();
            double[] imYn = imY.normalize().toArray();
            double[] imZn = imZ.normalize().toArray();
            RealMatrix M = new Array2DRowRealMatrix(new double[][]{
                {imXn[0], imXn[1], imXn[2], 0},
                {imYn[0], imYn[1], imYn[2], 0},
                {imZn[0], imZn[1], imZn[2], 0},
                {0, 0, 0, 1}
            });
            RealMatrix rotationMatrix = MatrixUtils.inverse(M2).multiply(M).multiply(M1);
            return rotationMatrix.getData();
        }

    Method of Sam Hocevar:

        static double[][] makeMatrix(double[] p1, double[] p2, double[] p3) {
            double[] v1 = normalize(difference(p2,p1));
            double[] v2 = normalize(cross(difference(p3,p1), difference(p2,p1)));
            double[] v3 = cross(v1, v2);
            double[][] M = {
                { v1[0], v2[0], v3[0], p1[0] },
                { v1[1], v2[1], v3[1], p1[1] },
                { v1[2], v2[2], v3[2], p1[2] },
                { 0.0,   0.0,   0.0,   1.0  }
            };
            return M;
        }

        static double[][] createTransform(double[] A, double[] B, double[] C,
                double[] P, double[] Q, double[] R) {
            RealMatrix c = new Array2DRowRealMatrix(makeMatrix(A,B,C));
            RealMatrix t = new Array2DRowRealMatrix(makeMatrix(P,Q,R));
            return MatrixUtils.inverse(c).multiply(t).getData();
        }

    The blue points are the calculated points; the black lines indicate the offset from the real position.
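    One detail worth checking with either method: the resulting 4x4 matrix must be applied to homogeneous coordinates. A small helper like the sketch below (assuming the row-major 4x4 layout produced by the answers above) avoids the constant-offset errors that come from dropping the translation column or the w component.

        // Apply a 4x4 homogeneous transform to a 3-D point.
        static double[] applyTransform(double[][] m, double[] p) {
            double[] in  = {p[0], p[1], p[2], 1.0}; // promote to homogeneous coords
            double[] out = new double[4];
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    out[i] += m[i][j] * in[j];
            // For affine (rotation + translation) matrices out[3] is 1, but
            // divide anyway so this also works for projective transforms.
            return new double[]{out[0] / out[3], out[1] / out[3], out[2] / out[3]};
        }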

  • Man pages not finding entry

    - by Mike
    I'm not sure what is going on with my system (Ubuntu 12.04), but my man pages do not seem to be working. I try man gcc and get the following response:

        No manual entry for gcc
        See 'man 7 undocumented' for help when manual pages are not available.

    However, I see the man entry in /usr/share/man/man1/gcc.1.gz. Here is what my /etc/manpath.config file looks like:

        # manpath.config
        #
        # This file is used by the man-db package to configure the man and cat paths.
        # It is also used to provide a manpath for those without one by examining
        # their PATH environment variable. For details see the manpath(5) man page.
        #
        # Lines beginning with `#' are comments and are ignored. Any combination of
        # tabs or spaces may be used as `whitespace' separators.
        #
        # There are three mappings allowed in this file:
        # --------------------------------------------------------
        # MANDATORY_MANPATH                     manpath_element
        # MANPATH_MAP           path_element    manpath_element
        # MANDB_MAP             global_manpath  [relative_catpath]
        #---------------------------------------------------------
        # every automatically generated MANPATH includes these fields
        #
        #MANDATORY_MANPATH                      /usr/src/pvm3/man
        #
        MANDATORY_MANPATH                       /usr/man
        MANDATORY_MANPATH                       /usr/share/man
        MANDATORY_MANPATH                       /usr/local/share/man
        #---------------------------------------------------------
        # set up PATH to MANPATH mapping
        # ie. what man tree holds man pages for what binary directory.
        #
        #               *PATH*        ->        *MANPATH*
        #
        MANPATH_MAP     /bin                    /usr/share/man
        MANPATH_MAP     /usr/bin                /usr/share/man
        MANPATH_MAP     /sbin                   /usr/share/man
        MANPATH_MAP     /usr/sbin               /usr/share/man
        MANPATH_MAP     /usr/local/bin          /usr/local/man
        MANPATH_MAP     /usr/local/bin          /usr/local/share/man
        MANPATH_MAP     /usr/local/sbin         /usr/local/man
        MANPATH_MAP     /usr/local/sbin         /usr/local/share/man
        MANPATH_MAP     /usr/X11R6/bin          /usr/X11R6/man
        MANPATH_MAP     /usr/bin/X11            /usr/X11R6/man
        MANPATH_MAP     /usr/games              /usr/share/man
        MANPATH_MAP     /opt/bin                /opt/man
        MANPATH_MAP     /opt/sbin               /opt/man
        #---------------------------------------------------------
        # For a manpath element to be treated as a system manpath (as most of those
        # above should normally be), it must be mentioned below. Each line may have
        # an optional extra string indicating the catpath associated with the
        # manpath. If no catpath string is used, the catpath will default to the
        # given manpath.
        #
        # You *must* provide all system manpaths, including manpaths for alternate
        # operating systems, locale specific manpaths, and combinations of both, if
        # they exist, otherwise the permissions of the user running man/mandb will
        # be used to manipulate the manual pages. Also, mandb will not initialise
        # the database cache for any manpaths not mentioned below unless explicitly
        # requested to do so.
        #
        # In a per-user configuration file, this directive only controls the
        # location of catpaths and the creation of database caches; it has no effect
        # on privileges.
        #
        # Any manpaths that are subdirectories of other manpaths must be mentioned
        # *before* the containing manpath. E.g. /usr/man/preformat must be listed
        # before /usr/man.
        #
        #               *MANPATH*     ->        *CATPATH*
        #
        MANDB_MAP       /usr/man                /var/cache/man/fsstnd
        MANDB_MAP       /usr/share/man          /var/cache/man
        MANDB_MAP       /usr/local/man          /var/cache/man/oldlocal
        MANDB_MAP       /usr/local/share/man    /var/cache/man/local
        MANDB_MAP       /usr/X11R6/man          /var/cache/man/X11R6
        MANDB_MAP       /opt/man                /var/cache/man/opt
        #
        #---------------------------------------------------------
        # Program definitions. These are commented out by default as the value
        # of the definition is already the default. To change: uncomment a
        # definition and modify it.
        #
        #DEFINE pager   pager -s
        #DEFINE cat     cat
        #DEFINE tr      tr '\255\267\264\327' '\055\157\047\170'
        #DEFINE grep    grep
        #DEFINE troff   groff -mandoc
        #DEFINE nroff   nroff -mandoc
        #DEFINE eqn     eqn
        #DEFINE neqn    neqn
        #DEFINE tbl     tbl
        #DEFINE col     col
        #DEFINE vgrind  vgrind
        #DEFINE refer   refer
        #DEFINE grap    grap
        #DEFINE pic     pic -S
        #
        #DEFINE compressor      gzip -c7
        #---------------------------------------------------------
        # Misc definitions: same as program definitions above.
        #
        #DEFINE whatis_grep_flags               -i
        #DEFINE apropos_grep_flags              -iEw
        #DEFINE apropos_regex_grep_flags        -iE
        #---------------------------------------------------------
        # Section names. Manual sections will be searched in the order listed here;
        # the default is 1, n, l, 8, 3, 0, 2, 5, 4, 9, 6, 7. Multiple SECTION
        # directives may be given for clarity, and will be concatenated together in
        # the expected way.
        # If a particular extension is not in this list (say, 1mh), it will be
        # displayed with the rest of the section it belongs to. The effect of this
        # is that you only need to explicitly list extensions if you want to force a
        # particular order. Sections with extensions should usually be adjacent to
        # their main section (e.g. "1 1mh 8 ...").
        #
        SECTION 1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7
        #
        #---------------------------------------------------------
        # Range of terminal widths permitted when displaying cat pages. If the
        # terminal falls outside this range, cat pages will not be created (if
        # missing) or displayed.
        #
        #MINCATWIDTH    80
        #MAXCATWIDTH    80
        #
        # If CATWIDTH is set to a non-zero number, cat pages will always be
        # formatted for a terminal of the given width, regardless of the width of
        # the terminal actually being used. This should generally be within the
        # range set by MINCATWIDTH and MAXCATWIDTH.
        #
        #CATWIDTH       0
        #
        #---------------------------------------------------------
        # Flags.
        # NOCACHE keeps man from creating cat pages.
        #NOCACHE

    Thanks for any help. (P.S. even 'man man' fails.)

    Edit: When I run ls -l /usr/share/man/man1/gcc* I get the following output:

        lrwxrwxrwx 1 root root     12 May 27 15:41 /usr/share/man/man1/gcc.1.gz -> gcc-4.6.1.gz
        -rw-r--r-- 1 root root 217776 Apr 15 17:34 /usr/share/man/man1/gcc-4.6.1.gz

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn't the end of the issue. The dependencies algorithm I described works when you're querying live databases and you can get dependencies for a particular schema direct from the server, and that's all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or for comparing a schema on a computer that doesn't have direct access to the database.

    So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue, as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem: what if the live database has a dependency on an object that does not exist in the snapshot? Take a basic example schema, where you're only populating SchemaA:

        SOURCE:
            CREATE TABLE SchemaA.Table1 (
                Col1 NUMBER REFERENCES SchemaB.Table1(col1));
            CREATE TABLE SchemaB.Table1 (
                Col1 NUMBER PRIMARY KEY);

        TARGET (using snapshot):
            CREATE TABLE SchemaA.Table1 (
                Col1 VARCHAR2(100));
            CREATE TABLE SchemaB.Table1 (
                Col1 VARCHAR2(100));

    In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you're not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot. This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm, as the snapshot has no information on SchemaB!

    We've now hit quite a big problem: we're trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it's the same as the target, whether it's different... What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can't query the original database, as it may not be accessible, and we cannot assume any default state, as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizations). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI and ask the user what we should do with each one: assume it doesn't exist, assume it's the same as the target, or specify a definition for it. Unfortunately, such functionality didn't make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what's going on and edit the sync script as appropriate.

    There are some things that we do do to somewhat alleviate this rather unhappy situation: if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required, and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and it led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering!

    Next: why adding a column to a table isn't as easy as you would think...
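    A minimal sketch (illustrative Java, not Schema Compare's actual code) of the ghost-object resolution choices described above:

        // The three resolutions a user could pick for a dependent object that
        // is missing from the snapshot; until one is picked, the sync wizard
        // must treat the object as unsyncable.
        enum GhostResolution { ASSUME_ABSENT, ASSUME_SAME_AS_TARGET, USER_SUPPLIED_DEFINITION }

        final class GhostObject {
            final String schema;         // e.g. "SchemaB"
            final String objectName;     // e.g. "Table1"
            GhostResolution resolution;  // null until the user decides

            GhostObject(String schema, String objectName) {
                this.schema = schema;
                this.objectName = objectName;
            }

            boolean isSyncable() {
                return resolution != null;
            }
        }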

  • Nesting a LINQ-to-Objects query within a LINQ-to-Entities query – what is happening under the covers?

    - by carewithl
        var numbers = new int[] { 1, 2, 3, 4, 5 };
        var contacts = from c in context.Contacts
                       where c.ContactID == numbers.Max() | c.ContactID == numbers.FirstOrDefault()
                       select c;
        foreach (var item in contacts)
            Console.WriteLine(item.ContactID);

    A LINQ-to-Entities query is first translated into a LINQ expression tree, which is then converted by Object Services into a command tree. And if a LINQ-to-Entities query nests a LINQ-to-Objects query, then this nested query also gets translated into an expression tree.

    a) I assume none of the operators of the nested LINQ-to-Objects query actually get executed, but instead the data provider for the particular DB (or perhaps Object Services) knows how to transform the logic of the LINQ-to-Objects operators into appropriate SQL statements?

    b) Does the data provider know how to create equivalent SQL statements for only some of the LINQ-to-Objects operators?

    c) Similarly, does the data provider know how to create equivalent SQL statements for only some of the non-LINQ methods in the .NET Framework class library?

    EDIT: I only know some SQL, so I can't be completely sure, but reading the SQL query generated for the above code, it seems the data provider didn't actually execute the numbers.Max method, but instead somehow figured out that numbers.Max should return the maximum value and then proceeded to include a call to T-SQL's built-in MAX function in the generated SQL query. It also put all the values held by the numbers array into the SQL query.

        SELECT
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN '0X0X' ELSE '0X1X' END AS [C1],
            [Extent1].[ContactID] AS [ContactID],
            [Extent1].[FirstName] AS [FirstName],
            [Extent1].[LastName] AS [LastName],
            [Extent1].[Title] AS [Title],
            [Extent1].[AddDate] AS [AddDate],
            [Extent1].[ModifiedDate] AS [ModifiedDate],
            [Extent1].[RowVersion] AS [RowVersion],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[CustomerTypeID] END AS [C2],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[InitialDate] END AS [C3],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryDesintation] END AS [C4],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryDestination] END AS [C5],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryActivity] END AS [C6],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryActivity] END AS [C7],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[Notes] END AS [C8],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[RowVersion] END AS [C9],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[BirthDate] END AS [C10],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[HeightInches] END AS [C11],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[WeightPounds] END AS [C12],
            CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[DietaryRestrictions] END AS [C13]
        FROM [dbo].[Contact] AS [Extent1]
        LEFT OUTER JOIN (
            SELECT [Extent2].[ContactID] AS [ContactID],
                   [Extent2].[BirthDate] AS [BirthDate],
                   [Extent2].[HeightInches] AS [HeightInches],
                   [Extent2].[WeightPounds] AS [WeightPounds],
                   [Extent2].[DietaryRestrictions] AS [DietaryRestrictions],
                   [Extent3].[CustomerTypeID] AS [CustomerTypeID],
                   [Extent3].[InitialDate] AS [InitialDate],
                   [Extent3].[PrimaryDesintation] AS [PrimaryDesintation],
                   [Extent3].[SecondaryDestination] AS [SecondaryDestination],
                   [Extent3].[PrimaryActivity] AS [PrimaryActivity],
                   [Extent3].[SecondaryActivity] AS [SecondaryActivity],
                   [Extent3].[Notes] AS [Notes],
                   [Extent3].[RowVersion] AS [RowVersion],
                   cast(1 as bit) AS [C1]
            FROM [dbo].[ContactPersonalInfo] AS [Extent2]
            INNER JOIN [dbo].[Customers] AS [Extent3] ON [Extent2].[ContactID] = [Extent3].[ContactID]
        ) AS [Project1] ON [Extent1].[ContactID] = [Project1].[ContactID]
        LEFT OUTER JOIN (
            SELECT TOP (1) [c].[C1] AS [C1]
            FROM (SELECT [UnionAll3].[C1] AS [C1]
                  FROM (SELECT [UnionAll2].[C1] AS [C1]
                        FROM (SELECT [UnionAll1].[C1] AS [C1]
                              FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable1]
                                    UNION ALL
                                    SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable2]) AS [UnionAll1]
                              UNION ALL
                              SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable3]) AS [UnionAll2]
                        UNION ALL
                        SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable4]) AS [UnionAll3]
                  UNION ALL
                  SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable5]) AS [c]
        ) AS [Limit1] ON 1 = 1
        LEFT OUTER JOIN (
            SELECT TOP (1) [c].[C1] AS [C1]
            FROM (SELECT [UnionAll7].[C1] AS [C1]
                  FROM (SELECT [UnionAll6].[C1] AS [C1]
                        FROM (SELECT [UnionAll5].[C1] AS [C1]
                              FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable6]
                                    UNION ALL
                                    SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable7]) AS [UnionAll5]
                              UNION ALL
                              SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable8]) AS [UnionAll6]
                        UNION ALL
                        SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable9]) AS [UnionAll7]
                  UNION ALL
                  SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable10]) AS [c]
        ) AS [Limit2] ON 1 = 1
        CROSS JOIN (
            SELECT MAX([UnionAll12].[C1]) AS [A1]
            FROM (SELECT [UnionAll11].[C1] AS [C1]
                  FROM (SELECT [UnionAll10].[C1] AS [C1]
                        FROM (SELECT [UnionAll9].[C1] AS [C1]
                              FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable11]
                                    UNION ALL
                                    SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable12]) AS [UnionAll9]
                              UNION ALL
                              SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable13]) AS [UnionAll10]
                        UNION ALL
                        SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable14]) AS [UnionAll11]
                  UNION ALL
                  SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable15]) AS [UnionAll12]
        ) AS [GroupBy1]
        WHERE [Extent1].[ContactID] IN ([GroupBy1].[A1], (CASE WHEN ([Limit1].[C1] IS NULL) THEN 0 ELSE [Limit2].[C1] END))

    Based on this, is it possible that the LINQ-to-Entities provider indeed doesn't execute non-LINQ and LINQ-to-Objects methods, but instead creates equivalent SQL statements for some of them (and throws an exception for the others)? Thank you in advance.

  • Create Advanced Panoramas with Microsoft Image Composite Editor

    - by Matthew Guay
    Do you enjoy making panoramas with your pictures, but want more features than tools like Live Photo Gallery offer? Here's how you can create amazing panoramas for free with the Microsoft Image Composite Editor. Yesterday we took a look at creating panoramic photos in Windows Live Photo Gallery. Today we take a look at a free tool from Microsoft that will give you more advanced features to create your own masterpiece.

    Getting Started

    Download Microsoft Image Composite Editor from Microsoft Research (link below), and install as normal. Note that there are separate versions for 32- and 64-bit editions of Windows, so make sure to download the correct one for your computer. Once it's installed, you can proceed to create awesome panoramas and extremely large image combinations with it. Microsoft Image Composite Editor integrates with Live Photo Gallery, so you can create more advanced panoramic pictures directly. Select the pictures you want to combine, click Extras in the menu bar, and select Create Image Composite. You can also create a photo stitch directly from Explorer: select the pictures you want to combine, right-click, and select Stitch Images… Or, simply launch the Image Composite Editor itself and drag your pictures into its editor. Either way you start an image composition, the program will automatically analyze and combine your images. This application is optimized for multiple cores, and we found it much faster than other panorama tools such as Live Photo Gallery. Within seconds, you'll see your panorama in the top preview pane.

    From the bottom of the window, you can choose a different camera motion, which will change how the program stitches the pictures together. You can also quickly crop the picture to the size you want, or use Automatic Crop to have the program select the maximum area with a continuous picture. Here's how our panorama looked when we switched the Camera Motion to Planar Motion 2. But the real tweaking comes in when you adjust the panorama's projection and orientation. Click the box button at the top to change these settings. The panorama is now overlaid with a grid, and you can drag the corners and edges of the panorama to change its shape. Or, from the Projection button at the top, you can choose different projection modes. Here we've chosen Cylinder (Vertical), which entirely removed the warp on the walls in the image. You can pan around the image and get the part you find most important in the center. Click the Apply button at the top when you're finished making changes, or click Revert if you want to switch to the default view settings.

    Once you've finished your masterpiece, you can export it easily to common photo formats from the Export panel on the bottom. You can choose to scale the image or set it to a maximum width and height as well. Click Export to disk to save the photo to your computer, or select Publish to Photosynth to post your panorama online. Alternatively, from the File menu you can choose to save the panorama as an .spj file. This preserves all of your settings in the Image Composite Editor so you can edit it further in the future if you wish.

    Conclusion

    Whether you're trying to capture the inside of a building or a tall tree, the extra tools in Microsoft Image Composite Editor let you make nicer panoramas than you ever thought possible. We found the final results surprisingly accurate to the real buildings and objects, especially after tweaking the projection modes. This tool can be both fun and useful, so give it a try and let us know what you've found it useful for.

    Works with 32- & 64-bit versions of XP, Vista, and Windows 7
    Link: Download Microsoft Image Composite Editor

  • Why would GLCapabilities.setHardwareAccelerated(true/false) have no effect on performance?

    - by Luke
    I've got a JOGL application in which I am rendering 1 million textures (all the same texture) and 1 million lines between those textures. Basically it's a ball-and-stick graph. I am storing the vertices in a vertex array on the card and referencing them via index arrays, which are also stored on the card. Each pass through the draw loop I am basically doing this:

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0);

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glDrawElements(GL.GL_LINES, <size>, GL.GL_UNSIGNED_INT, 0);

    I noticed that the JOGL library is pegging one of my CPU cores. Every frame, the run method internal to the library takes quite long. I'm not sure why this is happening, since I have called setHardwareAccelerated(true) on the GLCapabilities used to create my canvas. What's more interesting is that I changed it to setHardwareAccelerated(false) and there was no impact on performance at all. Is it possible that my code is not using hardware rendering even when it is set to true? Is there any way to check?

    EDIT: As suggested, I have tested breaking my calls up into smaller chunks. I have tried using glDrawRangeElements and respecting the limits that it requests. All of these simply resulted in the same pegged CPU usage and worse framerates. I have also narrowed the problem down to a simpler example where I just render 4 million textures (no lines). The draw loop then just does this:

        gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
        gl.glEnableClientState(GL.GL_INDEX_ARRAY);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
        gl.glMatrixMode(GL.GL_MODELVIEW);
        gl.glLoadIdentity();
        <... Camera and transform related code ...>
        gl.glEnableVertexAttribArray(0);
        gl.glEnable(GL.GL_TEXTURE_2D);
        gl.glAlphaFunc(GL.GL_GREATER, ALPHA_TEST_LIMIT);
        gl.glEnable(GL.GL_ALPHA_TEST);
        <... Bind texture ...>
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glDrawElements(GL.GL_POINTS, <size>, GL.GL_UNSIGNED_INT, 0);
        gl.glDisable(GL.GL_TEXTURE_2D);
        gl.glDisable(GL.GL_ALPHA_TEST);
        gl.glDisableVertexAttribArray(0);
        gl.glFlush();

    where the first buffer contains 12 million floats (the x, y, z coords of the 4 million textures) and the second (element) buffer contains 4 million integers. In this simple example they are simply the integers 0 through 3999999. I really want to know what is being done in software that is pegging my CPU, and how I can make it stop (if I can). My buffers are generated by the following code:

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, <buffer id>);
        gl.glBufferData(GL.GL_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_FLOAT, <buffer>, GL.GL_STATIC_DRAW);
        gl.glVertexAttribPointer(0, 3, GL.GL_FLOAT, false, 0, 0);

    and:

        gl.glBindBuffer(GL.GL_ELEMENT_ARRAY_BUFFER, <buffer id>);
        gl.glBufferData(GL.GL_ELEMENT_ARRAY_BUFFER, <size> * BufferUtil.SIZEOF_INT, <buffer>, GL.GL_STATIC_DRAW);

    ADDITIONAL INFO: Here is my initialization code:

        gl.setSwapInterval(1); // also tried 0
        gl.glShadeModel(GL.GL_SMOOTH);
        gl.glClearDepth(1.0f);
        gl.glEnable(GL.GL_DEPTH_TEST);
        gl.glDepthFunc(GL.GL_LESS);
        gl.glHint(GL.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_FASTEST);
        gl.glPointParameterfv(GL.GL_POINT_DISTANCE_ATTENUATION, POINT_DISTANCE_ATTENUATION, 0);
        gl.glPointParameterfv(GL.GL_POINT_SIZE_MIN, MIN_POINT_SIZE, 0);
        gl.glPointParameterfv(GL.GL_POINT_SIZE_MAX, MAX_POINT_SIZE, 0);
        gl.glPointSize(POINT_SIZE);
        gl.glTexEnvf(GL.GL_POINT_SPRITE, GL.GL_COORD_REPLACE, GL.GL_TRUE);
        gl.glEnable(GL.GL_POINT_SPRITE);
        gl.glClearColor(clearColor.getX(), clearColor.getY(), clearColor.getZ(), 0.0f);

    Also, I'm not sure if this helps or not, but when I drag the entire graph off the screen, the FPS shoots back up and the CPU usage falls to 0%. This seems obvious and intuitive to me, but I thought it might give a hint to someone else.
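    A quick way to verify which renderer the context actually bound to is to log the GL identification strings once a context is current; a software fallback usually identifies itself there (e.g. "GDI Generic" on Windows, or a Mesa software rasterizer). A minimal sketch, assuming the JOGL 1.x javax.media.opengl API used above:

        import javax.media.opengl.GL;
        import javax.media.opengl.GLAutoDrawable;

        public class RendererCheck {
            // Call from GLEventListener.init() once a context is current.
            public static void logRenderer(GLAutoDrawable drawable) {
                GL gl = drawable.getGL();
                System.out.println("GL_VENDOR:   " + gl.glGetString(GL.GL_VENDOR));
                System.out.println("GL_RENDERER: " + gl.glGetString(GL.GL_RENDERER));
                System.out.println("GL_VERSION:  " + gl.glGetString(GL.GL_VERSION));
            }
        }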

  • Weird y offset when using custom frag shader (Cocos2d-x)

    - by Mister Guacamole
    I'm trying to mask a sprite, so I wrote a simple fragment shader that renders only the pixels that are not hidden under another texture (the mask). The problem is that it seems my texture has its y-coordinate offset after passing through the shader. This is the init method of the sprite (GroundZone) I want to mask:

        bool GroundZone::initWithSize(Size size) {
            // [...]
            // Setup the mask of the sprite
            m_mask = RenderTexture::create(textureWidth, textureHeight);
            m_mask->retain();
            m_mask->setKeepMatrix(true);
            Texture2D *maskTexture = m_mask->getSprite()->getTexture();
            maskTexture->setAliasTexParameters(); // Disable linear interpolation on the mask

            // Load the custom frag shader with a default vert shader as the sprite's program
            FileUtils *fileUtils = FileUtils::getInstance();
            string vertexSource = ccPositionTextureA8Color_vert;
            string fragmentSource = fileUtils->getStringFromFile(
                fileUtils->fullPathForFilename("CustomShader_AlphaMask_frag.fsh"));
            GLProgram *shader = new GLProgram;
            shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str());
            shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
            shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS);
            shader->link();
            CHECK_GL_ERROR_DEBUG();
            shader->updateUniforms();
            CHECK_GL_ERROR_DEBUG();

            int maskTexUniformLoc = shader->getUniformLocationForName("u_alphaMaskTexture");
            shader->setUniformLocationWith1i(maskTexUniformLoc, 1);

            this->setShaderProgram(shader);
            shader->release();
            // [...]
        }

    These are the custom drawing methods that actually draw the mask over the sprite. You need to know that m_mask is modified externally by another class; the onDraw() method only renders it.

        void GroundZone::draw(Renderer *renderer, const kmMat4 &transform, bool transformUpdated) {
            m_renderCommand.init(_globalZOrder);
            m_renderCommand.func = CC_CALLBACK_0(GroundZone::onDraw, this, transform, transformUpdated);
            renderer->addCommand(&m_renderCommand);
            Sprite::draw(renderer, transform, transformUpdated);
        }

        void GroundZone::onDraw(const kmMat4 &transform, bool transformUpdated) {
            GLProgram *shader = this->getShaderProgram();
            shader->use();
            glActiveTexture(GL_TEXTURE1);
            glBindTexture(GL_TEXTURE_2D, m_mask->getSprite()->getTexture()->getName());
            glActiveTexture(GL_TEXTURE0);
        }

    Below is the method (located in another class, GroundLayer) that modifies the mask by drawing a line from point start to point end. Both points are in Cocos2d coordinates (Point (0,0) is bottom-left).

        void GroundLayer::drawTunnel(Point start, Point end) {
            // To dig a line, we need first to get the texture of the zone we will be digging into. Then we get the
            // relative position of the start and end point in the zone's node space. Finally we use the custom shader to
            // draw a mask over the existing texture.
            for (auto it = _children.begin(); it != _children.end(); it++) {
                GroundZone *zone = static_cast<GroundZone *>(*it);
                Point nodeStart = zone->convertToNodeSpace(start);
                Point nodeEnd = zone->convertToNodeSpace(end);

                // Now that we have our two points converted to node space, it's easy to draw a mask that contains a line
                // going from the start point to the end point and that is then applied over the current texture.
                Size groundZoneSize = zone->getContentSize();
                RenderTexture *rt = zone->getMask();
                rt->begin();
                {
                    // Draw a line going from start to end in the texture; the line will act as a mask over the
                    // existing texture
                    DrawNode *line = DrawNode::create();
                    line->retain();
                    line->drawSegment(nodeStart, nodeEnd, 20, Color4F::RED);
                    line->visit();
                }
                rt->end();
            }
        }

    Finally, here's the custom shader I wrote:

        #ifdef GL_ES
        precision mediump float;
        #endif

        varying vec2 v_texCoord;
        uniform sampler2D u_texture;
        uniform sampler2D u_alphaMaskTexture;

        void main() {
            float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a;
            float texAlpha = texture2D(u_texture, v_texCoord).a;
            float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is invisible
            vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
            gl_FragColor = vec4(texColor, blendAlpha);
            return;
        }

    I have a problem with the y coordinates. Indeed, it seems that once it has passed through my custom shader, the sprite's texture is not in the right place.

    Without the custom shader (the sprite is the brown thing):
    With the custom shader:

    What's going on here? Thanks :)

    EDIT: It looks like, after passing through the shader, when I set the position of the sprite I set it in points, with (0,0) being in the top-right. Indeed, when I do sprite->setPosition(320, 480), the sprite is perfectly placed at the top of the screen.

  • Common use cases and techniques when integrating a 3rd party application with Oracle Sales Cloud

    - by asantaga
    Over the last year or so I've seen a lot of partners migrating and integrating their applications with Oracle Sales Cloud. Interestingly, I'd say 60% of the partners use the same set of design patterns over and over again. Most of the time I see that they want to embed their application into Oracle Sales Cloud, within a tab usually, perhaps click on a link to their application (passing some piece of data plus credentials), and then within their application update Sales Cloud again using web services. Here are some examples of the different use cases I've seen and how partners are embedding their applications into Sales Cloud. NB: the following examples use the "Desktop" user interface rather than the newer "Simplified User Interface"; I'll update the sample application soon, but the integration patterns are precisely the same.

    Use Case 1: Navigator "link out" to a third-party application

    This is an example of where the developer has added a link to the global navigator, and this links out to the third-party application. Typically one doesn't pass any contextual data, with the exception of perhaps user credentials, or better still a JWT token.

    Techniques used:
    - Adding a link to a menu item
    - Using a JWT token in Sales Cloud

    Use Case 2: Application embedded within the Sales Cloud dashboard

    Within the Oracle Sales Cloud application there is a tab called "Sales"; within this tab it's possible to embed a subtab and an iFrame pointing to your application. To do this, the developer simply needs to edit the page in customization mode, add the tab, and then add the iFrame. Simples! The developer can pass credentials/a JWT token and some other pieces of data, but not object data (i.e. the current opportunityId, etc.).

    Techniques used:
    - Adding a page to the dashboard
    - Using a JWT token in Sales Cloud

    Use Case 3: Embedding a tab and context-linking out from a Sales Cloud object to the third-party application

    In this use case the developer embeds two components into Oracle Sales Cloud. The first is a subtab showing summary data to the user (a quote in our case); the second is a hyperlink (although it could be a button) which, when clicked, navigates the user to the third-party application. In this case the developer almost always passes context-specific data (i.e. the opportunityId) and a security token (a username/password combo or a JWT token). The third-party application usually takes the data, perhaps queries more data using the Sales Cloud SOAP/web-service interface, and then displays the resulting mashup to the user for further processing. When the user has finished their work in the third-party application, they normally navigate back to Oracle Sales Cloud using what's called a "deep link", i.e. taking them back to the object [an opportunity in our case] they came from. This image visually shows a "happy path" a user may follow, combining linking out to an application, web-service calls, and deep linking back to Sales Cloud.

    Techniques used:
    - Extending a Sales Cloud application with a custom button
    - Using a JWT token in Sales Cloud
    - Extending an Oracle Sales Cloud object [Opportunity] with a custom tab exposing external content
    - Retrieving data from Oracle Sales Cloud using web services
    - Coding some Groovy script to generate the URLs required (Doc 1571200.1 on My Oracle Support)
    - Deep linking to specific Oracle Sales Cloud pages (Doc 1516151.1 on My Oracle Support)

    Use Case 4: Server-side processing/synchronization

    This use case focuses on server-side processing of data, in this case synchronizing data. Here the third-party application runs on a "timer", e.g. cron or similar, and when triggered it queries data from Oracle Sales Cloud, then queries data from the third-party application, determines the deltas, and then inserts the data where required. Specifically, here we are calling Oracle Sales Cloud using SOAP/web services, and the third-party application is communicated with using its REST API. For Oracle Sales Cloud one would use standard JAX-WS web-service calls, and for REST one would use the JAX-RS API and perhaps the Jackson API for managing JSON objects (see the sketch at the end of this post). This is a very common use case, and one which specifically lends itself to using the Oracle Java Cloud Service as the ideal application server on which to host the mediator between the two applications.

    Techniques used:
    - Using a JWT token in Sales Cloud
    - Integrating with the Oracle Java Cloud Service
    - Retrieving data from Oracle Sales Cloud using web services

    General resources

    The above is just a small set of the techniques and use cases in use today. There are plenty of other sources of documentation and resources available on the internet, but to get you started here are a few of my favourite places:
    - Sales Cloud general documentation
    - The Sales Cloud Customize tab, useful for general customization of Sales Cloud
    - The Sales Cloud Integration tab, which focuses on third-party integration techniques
    - The official Oracle Fusion Developer Relations blog
    - The official Oracle Fusion Developer Relations YouTube channel

    Enjoy integrating!
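    To make Use Case 4 concrete, below is a minimal sketch of such a timer-driven synchronizer. The endpoint URL and the delta logic are placeholders, and the Sales Cloud side would be a generated JAX-WS client stub (omitted here); only the JAX-RS 2.0 client usage is shown.

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import javax.ws.rs.client.Client;
        import javax.ws.rs.client.ClientBuilder;
        import javax.ws.rs.core.MediaType;

        public class SyncJob {
            private final Client rest = ClientBuilder.newClient();

            void runOnce() {
                // 1. Query the 3rd-party application over REST (JAX-RS 2.0 client).
                String thirdPartyJson = rest
                        .target("https://thirdparty.example.com/api/records") // placeholder URL
                        .request(MediaType.APPLICATION_JSON)
                        .get(String.class);

                // 2. Query Oracle Sales Cloud via the generated JAX-WS stub, then
                // 3. compute the deltas and insert the data where required
                //    (both omitted; they depend on the generated service classes).
            }

            public static void main(String[] args) {
                ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                timer.scheduleAtFixedRate(new SyncJob()::runOnce, 0, 15, TimeUnit.MINUTES);
            }
        }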

  • LIMBO fails on startup with Internal errors - invalid parameters received

    - by user61262
    I installed LIMBO from the Humble Bundle V and, as far as I am aware, it comes with Wine packaged with it (I also installed the latest Wine from the repos in case it was because of that). However, the game doesn't even start and fails with the message:

        Wine Program Error
        Internal errors - invalid parameters received.

    Is there a way to log the error, or does anyone know why this happens? This question was asked previously, but it seems to have disappeared. My graphics card is a GeForce GT 250. Cheers, ice.

    Edit: Wine outputs the following error:

        wine /opt/limbo/support/limbo/drive_c/Program\ Files/limbo/limbo.exe
        fixme:system:SystemParametersInfoW Unimplemented action: 59 (SPI_SETSTICKYKEYS)
        fixme:system:SystemParametersInfoW Unimplemented action: 53 (SPI_SETTOGGLEKEYS)
        fixme:system:SystemParametersInfoW Unimplemented action: 51 (SPI_SETFILTERKEYS)
        fixme:win:EnumDisplayDevicesW ((null),0,0x32f580,0x00000000), stub!
        err:x11settings:X11DRV_ChangeDisplaySettingsEx No matching mode found 1920x1080x32 @60! (XRandR)
        err:xrandr:X11DRV_XRandR_SetCurrentMode Resolution change not successful -- perhaps display has changed?
        wine: Unhandled page fault on read access to 0x00000000 at address 0x48213e (thread 0009), starting debugger...

    The debugger has the following output:

        Unhandled exception: page fault on read access to 0x00000000 in 32-bit code (0x0048213e).
        Register dump:
         CS:0073 SS:007b DS:007b ES:007b FS:0033 GS:003b
         EIP:0048213e ESP:0032f9f4 EBP:0037cdd0 EFLAGS:00010202( R- -- I - - - )
         EAX:00000000 EBX:00000000 ECX:00000000 EDX:0037cf4c
         ESI:0037cda8 EDI:0037cdcc
        Stack dump:
        0x0032f9f4: 0037cda8 0034c708 7bc35120 00000000
        0x0032fa04: 0037cda8 0032fa38 0079fc58 00000000
        0x0032fa14: 0048b7d4 00000001 0037cdcc 00000001
        0x0032fa24: 00000780 00000438 0034c620 00000000
        0x0032fa34: 0034c708 0032fa78 007a04e2 00000002
        0x0032fa44: 0048c4bc 00000780 00000438 0037cda8
        Backtrace:
        =>0 0x0048213e in limbo (+0x8213e) (0x0037cdd0)
        0x0048213e: movl 0x0(%eax),%edx
        Modules:
        Module   Address          Debug info  Name (103 modules)
        PE         400000-  926000  Export   limbo
        PE       10000000-101ff000  Deferred d3dx9_43
        ELF      79bb3000-7b800000  Deferred libnvidia-glcore.so.295.53
        ELF      7b800000-7ba15000  Deferred kernel32<elf>
          \-PE   7b810000-7ba15000  \        kernel32
        ELF      7bc00000-7bcc3000  Deferred ntdll<elf>
          \-PE   7bc10000-7bcc3000  \        ntdll
        ELF      7bf00000-7bf04000  Deferred <wine-loader>
        ELF      7d7e0000-7d7e4000  Deferred libnvidia-tls.so.295.53
        ELF      7d7e4000-7d8bc000  Deferred libgl.so.1
        ELF      7d9d0000-7d9d9000  Deferred librt.so.1
        ELF      7d9d9000-7d9de000  Deferred libgpg-error.so.0
        ELF      7d9de000-7d9f6000  Deferred libresolv.so.2
        ELF      7d9f6000-7d9fa000  Deferred libkeyutils.so.1
        ELF      7d9fa000-7da43000  Deferred libdbus-1.so.3
        ELF      7da43000-7da55000  Deferred libp11-kit.so.0
        ELF      7da55000-7dada000  Deferred libgcrypt.so.11
        ELF      7dada000-7daec000  Deferred libtasn1.so.3
        ELF      7daec000-7daf5000  Deferred libkrb5support.so.0
        ELF      7daf5000-7dafa000  Deferred libcom_err.so.2
        ELF      7dafa000-7db22000  Deferred libk5crypto.so.3
        ELF      7db22000-7dbf1000  Deferred libkrb5.so.3
        ELF      7dbf1000-7dc03000  Deferred libavahi-client.so.3
        ELF      7dc03000-7dc11000  Deferred libavahi-common.so.3
        ELF      7dc11000-7dcd5000  Deferred libgnutls.so.26
        ELF      7dcd5000-7dd13000  Deferred libgssapi_krb5.so.2
        ELF      7dd13000-7dd66000  Deferred libcups.so.2
        ELF      7dd94000-7ddc8000  Deferred uxtheme<elf>
          \-PE   7dda0000-7ddc8000  \        uxtheme
        ELF      7ddc8000-7ddd3000  Deferred libxcursor.so.1
        ELF      7ddd4000-7dde7000  Deferred gnome-keyring-pkcs11.so
        ELF      7de47000-7de4d000  Deferred libxfixes.so.3
        ELF      7deac000-7ded6000  Deferred libexpat.so.1
        ELF      7ded6000-7df0a000  Deferred libfontconfig.so.1
        ELF      7df0a000-7df1a000  Deferred libxi.so.6
        ELF      7df1a000-7df1e000  Deferred libxcomposite.so.1
        ELF      7df1e000-7df27000  Deferred libxrandr.so.2
        ELF      7df27000-7df31000  Deferred libxrender.so.1
        ELF      7df31000-7df37000  Deferred libxxf86vm.so.1
        ELF      7df37000-7df3b000  Deferred libxinerama.so.1
        ELF      7df3b000-7df5d000  Deferred imm32<elf>
          \-PE   7df40000-7df5d000  \        imm32
        ELF      7df5d000-7df64000  Deferred libxdmcp.so.6
        ELF      7df64000-7df85000  Deferred libxcb.so.1
        ELF      7df85000-7df9f000  Deferred libice.so.6
        ELF      7df9f000-7e0d3000  Deferred libx11.so.6
        ELF      7e0d3000-7e0e5000  Deferred libxext.so.6
        ELF      7e0e5000-7e178000  Deferred winex11<elf>
          \-PE   7e0f0000-7e178000  \        winex11
        ELF      7e178000-7e18e000  Deferred libz.so.1
        ELF      7e18e000-7e228000  Deferred libfreetype.so.6
        ELF      7e228000-7e247000  Deferred libtinfo.so.5
        ELF      7e247000-7e269000  Deferred libncurses.so.5
        ELF      7e27d000-7e292000  Deferred xinput1_3<elf>
          \-PE   7e280000-7e292000  \        xinput1_3
        ELF      7e292000-7e2a6000  Deferred psapi<elf>
          \-PE   7e2a0000-7e2a6000  \        psapi
        ELF      7e2a6000-7e304000  Deferred dbghelp<elf>
          \-PE   7e2b0000-7e304000  \        dbghelp
        ELF      7e304000-7e391000  Deferred msvcrt<elf>
          \-PE   7e320000-7e391000  \        msvcrt
        ELF      7e391000-7e4c5000  Deferred wined3d<elf>
          \-PE   7e3a0000-7e4c5000  \        wined3d
        ELF      7e4c5000-7e4fe000  Deferred d3d9<elf>
          \-PE   7e4d0000-7e4fe000  \        d3d9
        ELF      7e4fe000-7e573000  Deferred rpcrt4<elf>
          \-PE   7e510000-7e573000  \        rpcrt4
        ELF      7e573000-7e67b000  Deferred ole32<elf>
          \-PE   7e590000-7e67b000  \        ole32
        ELF      7e67b000-7e697000  Deferred dinput8<elf>
          \-PE   7e680000-7e697000  \        dinput8
        ELF      7e697000-7e6d1000  Deferred winspool<elf>
          \-PE   7e6a0000-7e6d1000  \        winspool
        ELF      7e6d1000-7e7c9000  Deferred comctl32<elf>
          \-PE   7e6e0000-7e7c9000  \        comctl32
        ELF      7e7c9000-7e833000  Deferred shlwapi<elf>
          \-PE   7e7e0000-7e833000  \        shlwapi
        ELF      7e833000-7ea44000  Deferred shell32<elf>
          \-PE   7e840000-7ea44000  \        shell32
        ELF      7ea44000-7eb23000  Deferred comdlg32<elf>
          \-PE   7ea50000-7eb23000  \        comdlg32
        ELF      7eb23000-7eb3c000  Deferred version<elf>
          \-PE   7eb30000-7eb3c000  \        version
        ELF      7eb3c000-7eb9c000  Deferred advapi32<elf>
          \-PE   7eb50000-7eb9c000  \        advapi32
        ELF      7eb9c000-7ec59000  Deferred gdi32<elf>
          \-PE   7ebb0000-7ec59000  \        gdi32
        ELF      7ec59000-7ed99000  Deferred user32<elf>
          \-PE   7ec70000-7ed99000  \        user32
        ELF      7ef99000-7efa6000  Deferred libnss_files.so.2
        ELF      7efa6000-7efc0000  Deferred libnsl.so.1
        ELF      7efc0000-7efec000  Deferred libm.so.6
        ELF      7efee000-7eff4000  Deferred libuuid.so.1
        ELF      7eff4000-7f000000  Deferred libnss_nis.so.2
        ELF      b7411000-b7415000  Deferred libxau.so.6
        ELF      b7415000-b741e000  Deferred libnss_compat.so.2
        ELF      b741f000-b7424000  Deferred libdl.so.2
        ELF      b7424000-b75ca000  Deferred libc.so.6
        ELF      b75cb000-b75e6000  Deferred libpthread.so.0
        ELF      b75e9000-b75f2000  Deferred libsm.so.6
        ELF      b75fa000-b773c000  Dwarf    libwine.so.1
        ELF      b773e000-b7760000  Deferred ld-linux.so.2
        ELF      b7760000-b7761000  Deferred [vdso].so
        Threads:
        process  tid      prio (all id:s are in hex)
        00000008 (D) Z:\opt\limbo\support\limbo\drive_c\Program Files\limbo\limbo.exe
            00000009    0 <==
        0000000e services.exe
            00000020    0
            0000001f    0
            00000019    0
            00000018    0
            00000017    0
            00000015    0
            00000010    0
            0000000f    0
        00000012 winedevice.exe
            0000001d    0
            0000001a    0
            00000014    0
            00000013    0
        0000001b plugplay.exe
            00000021    0
            0000001e    0
            0000001c    0
        00000022 explorer.exe
            00000023    0
        System information:
            Wine build: wine-1.4
            Platform: i386
            Host system: Linux
            Host version: 3.2.0-24-generic-pae

  • How to structure game states in an entity/component-based system

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components, as explained here. I've reached the point in my development where I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework: it seems to have each state handling its own drawing and updating, whereas my framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class:

        public class RenderingSystem extends GameSystem {
            private GameView gameView_;

            /**
             * Constructor
             * Creates a new RenderingSystem.
             * @param gameManager The game manager. Used to get the game components.
             */
            public RenderingSystem(GameManager gameManager) {
                super(gameManager);
            }

            /**
             * Method: registerGameView
             * Registers gameView into the RenderingSystem.
             * @param gameView The game view registered.
             */
            public void registerGameView(GameView gameView) {
                gameView_ = gameView;
            }

            /**
             * Method: triggerRender
             * Adds a repaint call to the event queue for the dirty rectangle.
             */
            public void triggerRender() {
                Rectangle dirtyRect = new Rectangle();
                for (GameObject object : getRenderableObjects()) {
                    GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class);
                    dirtyRect.add(graphicsComponent.getDirtyRect());
                }
                gameView_.repaint(dirtyRect);
            }

            /**
             * Method: renderGameView
             * Renders the game objects onto the game view.
             * @param g The graphics object that draws the game objects.
             */
            public void renderGameView(Graphics g) {
                for (GameObject object : getRenderableObjects()) {
                    GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class);
                    if (!graphicsComponent.isVisible()) continue;

                    GraphicsComponent.Shape shape = graphicsComponent.getShape();
                    BoundsComponent boundsComponent = object.getComponent(BoundsComponent.class);
                    Rectangle bounds = boundsComponent.getBounds();

                    g.setColor(graphicsComponent.getColor());
                    if (shape == GraphicsComponent.Shape.RECTANGULAR) {
                        g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true);
                    } else if (shape == GraphicsComponent.Shape.CIRCULAR) {
                        g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height);
                    }
                }
            }

            /**
             * Method: getRenderableObjects
             * @return The renderable game objects.
             */
            private HashSet<GameObject> getRenderableObjects() {
                return gameManager.getGameObjectManager().getRelevantObjects(getClass());
            }
        }

    Also, all the updating in my game is event-driven; I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but it doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something?

    EDIT: I might not have explained my framework well enough. My components are just data; if I were coding in C++, they'd probably be structs. Here's an example of one:

        public class BoundsComponent implements GameComponent {
            /**
             * The position of the game object.
             */
            private Point pos_;

            /**
             * The size of the game object.
             */
            private Dimension size_;

            /**
             * Constructor
             * Creates a new BoundsComponent for a game object with initial position
             * initialPos and initial size initialSize. The position and size combine
             * to make up the bounds.
             * @param initialPos The initial position of the game object.
             * @param initialSize The initial size of the game object.
             */
            public BoundsComponent(Point initialPos, Dimension initialSize) {
                pos_ = initialPos;
                size_ = initialSize;
            }

            /**
             * Method: getBounds
             * @return The bounds of the game object.
             */
            public Rectangle getBounds() {
                return new Rectangle(pos_, size_);
            }

            /**
             * Method: setPos
             * Sets the position of the game object to newPos.
             * @param newPos The value to which the position of the game object is set.
             */
            public void setPos(Point newPos) {
                pos_ = newPos;
            }
        }

    My components do not communicate with each other; systems handle inter-component communication. My systems also do not communicate with each other: they have separate functionality and can easily be kept separate. The MovementSystem doesn't need to know what the RenderingSystem is rendering to move the game objects correctly; it just needs to set the right values on the components so that, when the RenderingSystem renders the game objects, it has accurate data. The game state could not be a system, because it needs to interact with the systems rather than the components; it's not setting data, it's determining which functions need to be called. A GameStateComponent wouldn't make sense either, because all the game objects share one game state. Components are what make up objects, and each one is different for each different object; for example, two game objects cannot have the same bounds. They can have overlapping bounds, but if they share a BoundsComponent, they're really the same object. Hopefully this explanation makes my framework less confusing.
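    One common approach, sketched below (illustrative Java reusing the post's naming style, not a drop-in fix for this exact framework): keep the state out of the components entirely and give the GameManager a tiny state holder that systems consult before handling events.

        // Sketch: a state holder that event-driven systems can query before
        // reacting to events (e.g. MovementSystem ignores input while PAUSED,
        // while RenderingSystem keeps repainting).
        enum GameState { LEVEL_START, ROUND_START, PLAYING, PAUSED, GAME_OVER }

        class GameStateManager {
            private GameState current = GameState.LEVEL_START;

            GameState current() { return current; }

            boolean gameplayActive() { return current == GameState.PLAYING; }

            void transitionTo(GameState next) {
                current = next;
                // A fuller version would notify interested systems here so each
                // one can enable or disable its own event handling.
            }
        }

    Each system then guards its event handlers with a check such as if (!gameManager.getGameStateManager().gameplayActive()) return; which leaves the entity-component design untouched.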

  • Beginner Guide to User Styles for Firefox

    - by Asian Angel
    While the default styles for most websites are nice there may be times when you would love to tweak how things look. See how easy it can be to change how websites look with the Stylish Extension for Firefox. Note: Scripts from Userstyles.org can also be added to Greasemonkey if you have it installed. Getting Started After installing the extension you will be presented with a first run page. You may want to keep it open so that you can browse directly to the Userstyles.org website using the link in the upper left corner. In the lower right corner you will have a new Status Bar Icon. If you have used Greasemonkey before this icon works a little differently. It will be faded out due to no user style scripts being active at the moment. You can use either a left or right click to access the Context Menu. The user style script management section is also added into your Add-ons Management Window instead of being separate. When you reach the user style scripts homepage you can choose to either learn more about the extension & scripts or… Start hunting for lots of user style script goodness. There will be three convenient categories to get you jump-started if you wish. You could also conduct a search if you have something specific in mind. Here is some information directly from the website provided for your benefit. Notice the reference to using these scripts with Greasemonkey… This section shows you how the scripts have been categorized and can give you a better idea of how to search for something more specific. Finding & Installing Scripts For our example we decided to look at the Updated Styles Section”first. Based on the page number listing at the bottom there are a lot of scripts available to look through. Time to refine our search a little bit… Using the drop-down menu we selected site styles and entered Yahoo in the search blank. Needless to say 5 pages was a lot easier to look through than 828. We decided to install the Yahoo! Result Number Script. When you do find a script (or scripts) that you like simply click on the Install with Stylish Button. A small window will pop up giving you the opportunity to preview, proceed with the installation, edit the code, or cancel the process. Note: In our example the Preview Function did not work but it may be something particular to the script or our browser’s settings. If you decide to do some quick editing the window shown above will switch over to this one. To return to the previous window and install the user style script click on the Switch to Install Button. After installing the user style the green section in the script’s webpage will actually change to this message… Opening up the Add-ons Manager Window shows our new script ready to go. The script worked perfectly when we conducted a search at Yahoo…the Status Bar Icon also changed from faded out to full color (another indicator that everything is running nicely). Conclusion If you prefer a custom look for your favorite websites then you can have a lot of fun experimenting with different user style scripts. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to your browser. Links Download the Stylish Extension (Mozilla Add-ons) Visit the Userstyles.org Website Install the Yahoo! 
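If you would rather write a user style than install one, a minimal sketch looks like this (the domain and colors are placeholders); Stylish scopes your rules with an @-moz-document block so they only affect the sites you name:
@namespace url(http://www.w3.org/1999/xhtml);

@-moz-document domain("example.com") {
  /* force a dark background on every page of the named site */
  body {
    background-color: #222222 !important;
    color: #eeeeee !important;
  }
}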

    Read the article

  • How do I install on a UEFI Asus 1215b netbook?

    - by Tarek
    I'm trying to install Ubuntu 11.10 on a UEFI Asus 1215b netbook using a USB stick. I created a 100MB FAT32 EFI partition, a 2GB swap partition, and two ext4 partitions (for root (/) and /home, respectively). While installing, Ubuntu switches to a CLI and starts running efibootmgr. After a few commands (sadly I don't have a screen grab), it stops displaying text but it's still running, judging by the HDD LED. Then there's a weird graphics glitch and the screen turns off (HDD LED still indicating activity). Finally, it just stops, but doesn't turn off. Not even a hard reboot works (holding down the power button a few seconds). I have to unplug the netbook and remove the battery. After that, it still doesn't boot Ubuntu... Anyway, what can I do? I'm considering following the footsteps here and here. Edit: here is the syslog:
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] BUG: unable to handle kernel paging request at 00000000ffe1867c
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] IP: [<ffff880066d44c1f>] 0xffff880066d44c1e
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] PGD 14ecc067 PUD 0
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Oops: 0000 [#1] SMP
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CPU 0
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Modules linked in: cryptd aes_x86_64 ufs qnx4 hfsplus hfs minix ntfs msdos xfs reiserfs jfs bnep parport_pc rfcomm dm_crypt ppdev bluetooth lp parport joydev eeepc_wmi asus_wmi sparse_keymap uvcvideo videodev v4l2_compat_ioctl32 snd_hda_codec_realtek snd_seq_midi snd_hda_codec_hdmi snd_hda_intel snd_hda_codec arc4 snd_rawmidi snd_hwdep psmouse snd_pcm snd_seq_midi_event ath9k serio_raw sp5100_tco i2c_piix4 k10temp snd_seq mac80211 snd_timer ath9k_common ath9k_hw snd_seq_device ath snd cfg80211 soundcore snd_page_alloc binfmt_misc squashfs overlayfs nls_iso8859_1 nls_cp437 vfat fat dm_raid45 xor dm_mirror dm_region_hash dm_log btrfs zlib_deflate libcrc32c usb_storage uas radeon video ahci libahci ttm drm_kms_helper drm wmi i2c_algo_bit atl1c
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009]
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Pid: 28432, comm: efibootmgr Not tainted 3.0.0-12-generic #20-Ubuntu ASUSTeK Computer INC. 1215B/1215B
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RIP: 0010:[<ffff880066d44c1f>] [<ffff880066d44c1f>] 0xffff880066d44c1e
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RSP: 0018:ffff88005e2cbab0 EFLAGS: 00010082
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RAX: 00000000ffe1867c RBX: 0000000000000009 RCX: 00000000ffe1867c
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RDX: 0000000000000000 RSI: ffff88005e2cbbea RDI: ffff88005e2cbb40
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RBP: 00000000ffe1867c R08: 0000000000000000 R09: 0000000000000084
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] R10: ffffc9001101ff83 R11: ffffc90011018685 R12: 0000000000000001
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] R13: 0000000000000000 R14: ffffc9001101867c R15: ffff88005e2cbbe1
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] FS: 00007f9cdde13720(0000) GS:ffff880066a00000(0000) knlGS:0000000000000000
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CR2: 00000000ffe1867c CR3: 000000002dace000 CR4: 00000000000006f0
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Process efibootmgr (pid: 28432, threadinfo ffff88005e2ca000, task ffff880014f0dc80)
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Stack:
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] ffffc90011010000 ffff88005e2cbac8 0000000000010000 ffff880066d4401d
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] 000000000000007c ffff880009e84400 0000000000000090 ffff880066d45738
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] ffffc9001101867c ffff880066d4331c 0000000000000009 ffffc9001101867b
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Call Trace:
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff815e9efe>] ? _raw_spin_lock+0xe/0x20
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9c2d>] ? open+0x10d/0x1b0
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff8116554b>] ? __dentry_open+0x2bb/0x320
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9b20>] ? bin_vma_open+0x70/0x70
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff815e9efe>] ? _raw_spin_lock+0xe/0x20
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811849ee>] ? vfsmount_lock_local_unlock+0x1e/0x30
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff8104303b>] ? efi_call5+0x4b/0x80
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff81042a7f>] ? virt_efi_set_variable+0x2f/0x40
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff814bb125>] ? efivar_create+0x1e5/0x280
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9d63>] ? write+0x93/0x190
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff811d9de4>] ? write+0x114/0x190
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff81167813>] ? vfs_write+0xb3/0x180
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff81167b3a>] ? sys_write+0x4a/0x90
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] [<ffffffff815f22c2>] ? system_call_fastpath+0x16/0x1b
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] Code: ec 01 75 f0 41 bc 01 00 00 00 e8 e5 fb ff ff e8 e4 fc ff ff 33 c0 44 0f b7 c0 66 3b c3 73 20 41 0f b7 c0 41 0f b7 d0 03 c5 8b c8 <8a> 00 42 38 04 3a 75 0a 66 45 03 c4 66 44 3b c3 72 e2 33 c0 66
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RIP [<ffff880066d44c1f>] 0xffff880066d44c1e
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] RSP <ffff88005e2cbab0>
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] CR2: 00000000ffe1867c
Oct 21 01:05:17 ubuntu kernel: [ 1220.544009] ---[ end trace 493844b002da4787 ]---

    Read the article

  • Customizing UPK outputs (Part 2 - Player)

    - by [email protected]
    There are a few things that can be done to give the Player output a personalized look to match your corporate branding. In my previous post, I talked about changing the logo. In addition to the logo, you can change the graphic in the heading, button colors, border colors and many other items. Prior to making any customizations, I strongly recommend making a copy of the existing Player style. This will give you a backup in case things go wrong. I'd also recommend that you create your own brand. This way, when you install the newest updates from us, your brand will remain intact. Creating your own brand is pretty easy. Make sure you have modify permissions on the publishing styles directory, if you are using a multi-user installation. Under the Publishing/Styles folder, create a new folder with your company name. Copy all the publishing styles from the UPK folder to your newly created folder. Now, when you go through the Publishing wizard, you will have two categories to choose from: the UPK category or your custom category. Now, for updating the Player output. First, the graphic that appears on the right hand side of the Player. If you're using a multi-user installation, check out the player style from your custom brand. Open the player style. Open the img folder. The file named "banner_image.png" represents the graphic that appears on the right hand side of the player. It is currently sized at 425 x 54. Try to keep your graphic about the same size. Rename your graphic file to be "banner_image.png", and drag it into the img folder. Save the package. Check in the package if you are in a multi-user installation. You've just updated the banner heading! Next, let's work on updating some of the other colors in the player. All the customizable areas are located in the skin.css file which is in the root of the Player style. Many of our customers update the colors to match their own theme. You don't have to be a programmer to make these changes, honest. :) To change the colors in the player: Make a copy of the original skin.css file. (This is to make sure you have a working version to revert to, in case something goes wrong.) Open the skin.css file from the Player package. You can edit it using Notepad. Make the desired changes. Save the file. Save the package. Publish to view your new changes. When you open the skin.css, you will see groupings like this: .headerDivbar { height: 21px; background-color: #CDE2FD; } Change the value of the background-color to the color of your choice. Note that you cannot use "red" as a color, but rather you should enter the hexadecimal color code. If you don't know the color code, search the web for "hexadecimal colors" and you'll find many sites to provide the information. Here are a few of the variables that you can update. Heading: .headerDivbar -this changes the color of the banner that appears under the graphic Button colors: .navCellOn - changes the color of the mode buttons when your mouse is hovering on them. .navCellOff - changes the color of the mode buttons when the mouse is not over them Lines: .thorizontal - this is the color of the horizontal lines surrounding the outline .tvertical - this is the color of the vertical lines on the left and right margin in the outline. .tsep - this is the color of the line that separates the outline from the content area Search frame: .tocSearchColor - this is the color of the search area .tocFrameText - this is the background color of the TOC tree. 
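As an illustration, a recolored sketch of a few of those selectors might look like this; the hex values are placeholders, not Oracle's shipped defaults:
/* banner that appears under the heading graphic */
.headerDivbar { height: 21px; background-color: #336699; }

/* mode buttons: mouse hovering / not hovering */
.navCellOn  { background-color: #6699CC; }
.navCellOff { background-color: #DDDDDD; }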
Hint: If you want to try out the changes prior to updating the style, you can update the skin.css in some content you've already published for the Player (it's located in the css folder of the Player package). This way, you can immediately see the changes without going through publishing. Once you're happy with the changes, update the skin.css in the Player style. Want to customize more? Refer to the "Customizing the Player" section of the Content Development manual for more details on all the options in the skin.css that can be changed, and pictures of what each variable controls. I'd love to see how you've customized the player for your corporate needs. Also, if there are other areas of the player you'd like to modify but have not been able to, let us know. Feel free to share your thoughts in the comments. --Maria Cozzolino, Manager of Requirements & UI Design for UPK

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature which cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a Web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provide the service that uses settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service. Key Benefits of SharePoint Server Access Services Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage™ view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location. Helps build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. It helps you find new pre-built templates that you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. You can build your databases with new modular components: new Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified: you can create navigation forms that make your frequently used forms and reports more accessible without writing any code or logic. Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services makes it easy to spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database. Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive. Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data. Key features of Access Services 2010 - Access database content through a Web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases.
Users without an Access client can open Web forms and reports via a browser, and changes are automatically synchronized. - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks. - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic. - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases. - Simplified formatting: By using Office themes you can create coordinating, professional forms and reports across your database. Simply select a familiar and great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.
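Since the post calls out the service's Windows PowerShell functionality, here is a hedged sketch of inspecting it from the SharePoint 2010 Management Shell; Get-SPAccessServiceApplication ships with SharePoint Server 2010, but the Set parameter shown is an assumption and may differ in your build:
# list the Access Services service applications and their current settings
Get-SPAccessServiceApplication | Format-List

# sketch: tighten the per-user session limit on a named service application
# (parameter name assumed; check Get-Help Set-SPAccessServiceApplication)
Get-SPAccessServiceApplication -Identity "Access Services" |
    Set-SPAccessServiceApplication -SessionsPerUserMax 5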

    Read the article

  • Auto-Configuring SSIS Packages

    - by Davide Mauri
    SSIS Package Configurations are very useful to make packages flexible, so that you can change object properties at run-time and thus make the package configurable without having to open and edit it. In a complex scenario where you have dozens of packages (even in the smallest BI project I worked on I had 50 packages), each package may have its own configuration needs. This means that each time you have to run the package you have to pass the correct Package Configuration. I usually use XML configuration files, and I also force everyone that works with me to make sure that an object that is used in several packages has the same name in all packages where it is used, in order to simplify configuration usage. Connection Managers are a good example of one of those objects. For example, all the packages that need to access the Data Warehouse database must have a Connection Manager named DWH. Basically we define a set of "global" objects so that we can have a configuration file for them that can be used by all packages. If a package has some specific configuration needs, we create a specific – or "local" – XML configuration file, or we set the value that needs to be configured at runtime using DTLoggedExec's Package Parameters: http://dtloggedexec.davidemauri.it/Package%20Parameters.ashx Now, how can we improve this even more? I'd like to have a package that, when it's run, automatically goes "somewhere", searches for global or local configuration, loads it and applies it to itself. That's the basic idea of Auto-Configuring Packages. The "somewhere" is a SQL Server table, defined in this way: [Screenshot: the configuration table; a hedged sketch appears at the end of this post] In this table you'll put the values that you want to be used at runtime by your package. The ConfigurationFilter column specifies to which package that configuration line has to be applied. A package will use that line only if the value specified in the ConfigurationFilter column is equal to its name. In the above sample, only the package named "simple-package" will use line number two. There is an exception here: the $$Global value indicates a configuration row that has to be applied to any package. With this simple behavior it's possible to replicate the "global" and the "local" configuration approach I've described before. The ConfigurationValue column contains the value you want to be applied at runtime, and the PackagePath column contains the object to which that value will be applied. The ConfiguredValueType column defines the data type of the value, and the Checksum column contains a calculated value that is simply the hash of ConfigurationFilter plus PackagePath, so that it can be used as a Primary Key to guarantee uniqueness of configuration rows. As you may have noticed, the table is very similar to the table originally used by SSIS in order to put DTS Configuration into SQL Server tables: SQL Server SSIS Configuration Type: http://msdn.microsoft.com/en-us/library/ms141682.aspx Now, how does it work? It's very easy: you just have to call DTLoggedExec with the /AC option: DTLoggedExec.exe /FILE:"mypackage.dtsx" /AC:"localhost;ssis_auto_configuration;ssiscfg.configuration" The /AC option expects a string with the following format: <database_server>;<database_name>;<table_name>. Only Windows Authentication is supported. When DTLoggedExec finds an Auto-Configuration request, it injects a new connection manager into the loaded package.
The injected connection manager is named $$DTLoggedExec_AutoConfigure and is used by the two SQL Server DTS Configurations ($$DTLoggedExec_Global and $$DTLoggedExec_Local), also injected by DTLoggedExec, which load the "local" and "global" configurations. Now, you may start to wonder why this approach is needed at all, instead of always passing two XML DTS Configuration files to a package (one for the "local" and one for the "global" configuration), doing something like this: DTLoggedExec.exe /FILE:"mypackage.dtsx" /CONF:"global.dtsConfig" /CONF:"mypackage.dtsConfig" The problem is that this approach doesn't work if you have, in one of the two configuration files, a value that has to be applied to an object that doesn't exist in the loaded package. This situation will raise an error that will halt package execution. To solve this problem, you may want to create a configuration file for each package. Unfortunately this will make deployment and management harder, since you'll have to deal with a great number of configuration files. The Auto-Configuration approach solves all these problems at once! We're using it in a project where we have hundreds of packages, and I can tell you that deployment of packages and their configuration for the pre-production and production environments has never been so easy! To use the Auto-Configuration option you have to download the latest DTLoggedExec release: http://dtloggedexec.codeplex.com/releases/view/62218 Feedback, as usual, is very welcome!
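As promised above, here is a hedged sketch of the auto-configuration table as its columns are described in this post; the column types and the computed-column trick are assumptions, not DTLoggedExec's actual DDL:
CREATE TABLE ssiscfg.configuration (
    ConfigurationFilter  NVARCHAR(255) NOT NULL, -- package name the row applies to, or $$Global for all packages
    ConfigurationValue   NVARCHAR(255) NOT NULL, -- the value applied at runtime
    PackagePath          NVARCHAR(255) NOT NULL, -- the object/property the value is applied to
    ConfiguredValueType  NVARCHAR(50)  NOT NULL, -- the data type of the value
    -- hash of ConfigurationFilter plus PackagePath, guaranteeing row uniqueness
    Checksum AS CHECKSUM(ConfigurationFilter, PackagePath) PERSISTED PRIMARY KEY
);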

    Read the article

  • Grid Layouts in ADF Faces using Trinidad

    - by frank.nimphius
ADF Faces does provide a data table component, but none to define grid layouts. Grids are common in web design, and developers often try HTML table markup wrapped in an f:verbatim tag or added directly to the page to build a desired layout. Usually these attempts fail, showing unpredictable results. However, while ADF Faces does not provide a table layout component, Apache MyFaces Trinidad does. The Trinidad trh:tableLayout component is a thin wrapper around the HTML table element and contains a series of row layout elements, trh:rowLayout. Each trh:rowLayout component may contain one or many trh:cellFormat components to format cell content.
<trh:tableLayout id="tl1" halign="left">
  <trh:rowLayout id="rl1" valign="top" halign="left">
    <trh:cellFormat id="cf1" width="100" header="true">
      <af:outputLabel value="Label 1" id="ol1"/>
    </trh:cellFormat>
    <trh:cellFormat id="cf2" header="true" width="300">
      <af:outputLabel value="Label 2" id="outputLabel1"/>
    </trh:cellFormat>
  </trh:rowLayout>
  <trh:rowLayout id="rowLayout1" valign="top" halign="left">
    <trh:cellFormat id="cellFormat1" width="100" header="false">
      <af:outputLabel value="Label 3" id="outputLabel2"/>
    </trh:cellFormat>
  </trh:rowLayout>
  ...
</trh:tableLayout>
To add the Trinidad tag library to your ADF Faces projects: Open the Component Palette and right-click into it. Choose "Edit Tag Libraries" and select the Trinidad components. Move them to the "Selected Libraries" section and OK the dialog. The first time you drag a Trinidad component to a page, the web.xml file is updated with the required filters (a sketch of these entries appears at the end of this post). Note: The Trinidad tags don't participate in the ADF Faces RC geometry management. However, they are JSF components that are part of the JSF request lifecycle. ADF Faces RC components work well with Trinidad layout components that don't use PPR. The PPR implementation of Trinidad is different from the one in ADF Faces. However, when you mix ADF Faces components with Trinidad components, avoid Trinidad components that have integrated PPR behavior.
Only use passive Trinidad components. See:
http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_tableLayout.html
http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_rowLayout.html
http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_cellFormat.html
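For reference, here is the rough shape of the filter entries that end up in web.xml; the exact filter name and servlet name depend on your project setup:
<filter>
  <filter-name>trinidad</filter-name>
  <filter-class>org.apache.myfaces.trinidad.webapp.TrinidadFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>trinidad</filter-name>
  <servlet-name>Faces Servlet</servlet-name>
</filter-mapping>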

    Read the article

  • Detecting Duplicates Using Oracle Business Rules

    - by joeywong-Oracle
    Recently I was involved with a Business Process Management Proof of Concept (BPM PoC) where we wanted to show how customers could use Oracle Business Rules (OBR) to easily define some rules to detect certain conditions, such as duplicate account numbers, duplicate names, high transaction amounts, etc., in a set of transactions. Traditionally you would have to loop through the transactions and compare each transaction with each other to find matching conditions. This is not particularly nice, as it relies on more traditional approaches (coding) and is not the most efficient way. OBR is a great place to house these types of rules, as it allows users/developers to externalise the rules from the message flows in a simpler manner and allows users to change them when required. So I went ahead looking for some examples. After quite a bit of time spent Googling, I did not find much out in the blogosphere. In fact the best example was actually from...... wait for it...... Oracle Documentation! (http://docs.oracle.com/cd/E28271_01/user.1111/e10228/rules_start.htm#ASRUG228) However, if you followed the link, there was not much explanation provided with the example. So the aim of this article is to provide a little more explanation so that it can be better understood. Note: I won't be covering the BPM parts in great detail. Use case: A payment instruction file is required to be processed. Before the instruction file can be processed, it needs to be approved by a business user. Before the approval process, it would be useful to run the payment instruction file through OBR to look for transactions of interest. The output of the OBR can then be used to flag the transactions for the approvers to investigate.
[Screenshot: Example BPM Process]
So let's start defining the Business Rules Dictionary. For the input into our rules, we will be passing in an array of payments which contain some basic information for our demo purposes.
[Screenshot: Input to Business Rules]
And for our output we want to have an array of rule output messages. Note that the element I am using for the output is only for one rule message element and not an array. We will configure the Business Rules component later to return an array instead.
[Screenshot: Output from Business Rules]
[Screenshot: Business Rule – Create Dictionary]
Fill in all the details and click OK. Open the Business Rules component and select Decision Functions from the side.
[Screenshot: Modify the Decision Function Configuration]
Select the decision function and click on the edit button (the pencil); don't worry that JDeveloper indicates that there is an error with the decision function. Then click the Outputs tab and make sure the checkbox under the List column is checked; this is to tell the Business Rules component that it should return an array of rule message elements.
[Screenshot: Updating the Decision Service]
Next we will define the actual rules. Click on Ruleset1 on the side and then Create Rule in the IF/THEN Rule section.
[Screenshot: Creating new rule in ruleset]
Ok, this is where some detailed explanation is required. Remember that the input to this Business Rules dictionary is a list of payments; each of those payments is of the complex type PaymentType. Each of those payments in the Oracle Business Rules engine is treated as a fact in its working memory.
[Screenshot: Implemented rule]
So in the IF/THEN rule, the first task is to grab two PaymentType facts from the working memory and assign them to temporary variable names (payment1 and payment2 in our example).
[Screenshot: Matching facts]
Once we have them in the temporary variables, we can then start comparing them to each other. For our demonstration we want to find payments where the account numbers are the same but the account names are different.
[Screenshot: Suspicious payment instruction]
And to stop the rule from comparing the same facts to each other, over and over again, we have to include the last test.
[Screenshot: Stop rule from comparing endlessly]
And that's it! No for loops, no need to keep track of what you have or have not compared; OBR handles all that for you because everything is done in its working memory. And once all the tests have been satisfied, we need to assert a new fact for the output.
[Screenshot: Assert the output fact]
Save your Business Rules. The next step is to complete the data association in the BPM process. Pay extra care to use Copy List instead of the default Copy when doing data association at an array level.
[Screenshot: Input and output data association]
Deploy and test.
[Screenshots: Test data / Rule matched]
Parting words: Ideally you would then use the output of the Business Rules component to display/flag the transactions which triggered the rule so that the approver can investigate. Link: SOA Project Archive [Download]
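To see the shape of the logic the rule expresses, here it is as a plain Java sketch rather than OBR rule syntax; Payment, its getters, and the messages list are illustrative assumptions:
// pairwise comparison equivalent to the rule's IF conditions
for (Payment p1 : payments) {
    for (Payment p2 : payments) {
        if (p1.getAccountNumber().equals(p2.getAccountNumber())     // same account number
                && !p1.getAccountName().equals(p2.getAccountName()) // but a different account name
                && p1.getId() < p2.getId()) {                       // the "last test": never compare a pair twice, or a fact to itself
            messages.add("Suspicious pair: " + p1.getId() + " / " + p2.getId());
        }
    }
}
The engine's working memory does exactly this bookkeeping for you, which is why the dictionary needs no explicit loops.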

    Read the article

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants

    Read the article

  • PHP Login/Register system [migrated]

    - by Marian
    I found this good tutorial on creating a login/register system using PHP and MySQL. The forum thread is around 5 years old (edited last year) but it can still be useful. Beginner Simple Register-Login system There seems to be an issue with both the login and register pages.
<?php
function register_form(){
    $date = date('D, M, Y');
    echo "<form action='?act=register' method='post'>"
        ."Username: <input type='text' name='username' size='30'><br>"
        ."Password: <input type='password' name='password' size='30'><br>"
        ."Confirm your password: <input type='password' name='password_conf' size='30'><br>"
        ."Email: <input type='text' name='email' size='30'><br>"
        ."<input type='hidden' name='date' value='$date'>"
        ."<input type='submit' value='Register'>"
        ."</form>";
}

function register(){
    $connect = mysql_connect("host", "username", "password");
    if(!$connect){
        die(mysql_error());
    }
    $select_db = mysql_select_db("database", $connect);
    if(!$select_db){
        die(mysql_error());
    }
    $username = $_REQUEST['username'];
    $password = $_REQUEST['password'];
    $pass_conf = $_REQUEST['password_conf'];
    $email = $_REQUEST['email'];
    $date = $_REQUEST['date'];
    if(empty($username)){
        die("Please enter your username!<br>");
    }
    if(empty($password)){
        die("Please enter your password!<br>");
    }
    if(empty($pass_conf)){
        die("Please confirm your password!<br>");
    }
    if(empty($email)){
        die("Please enter your email!");
    }
    $user_check = mysql_query("SELECT username FROM users WHERE username='$username'");
    $do_user_check = mysql_num_rows($user_check);
    $email_check = mysql_query("SELECT email FROM users WHERE email='$email'");
    $do_email_check = mysql_num_rows($email_check);
    if($do_user_check > 0){
        die("Username is already in use!<br>");
    }
    if($do_email_check > 0){
        die("Email is already in use!");
    }
    if($password != $pass_conf){
        die("Passwords don't match!");
    }
    $insert = mysql_query("INSERT INTO users (username, password, email) VALUES ('$username', '$password', '$email')");
    if(!$insert){
        die("There's little problem: ".mysql_error());
    }
    echo $username.", you are now registered. Thank you!<br><a href=login.php>Login</a> | <a href=index.php>Index</a>";
}

switch($act){
    default;
        register_form();
        break;
    case "register";
        register();
        break;
}
?>
Once the Register button is pressed, the page does nothing: the fields are cleared, and no data is added to the database and no error is given. I thought that the problem might be the switch($act){ part, so I removed it and changed the page to use a require:
require('connect.php');
where connect.php is:
<?php
mysql_connect("localhost","host","password");
mysql_select_db("database");
?>
I removed the function register_form(){ and the echo part, turning it into HTML:
<form action='register' method='post'>
Username: <input type='text' name='username' size='30'><br>
Password: <input type='password' name='password' size='30'><br>
Confirm your password: <input type='password' name='password_conf' size='30'><br>
Email: <input type='text' name='email' size='30'><br>
<input type='hidden' name='date' value='$date'>
<input type='submit' name="register" value='Register'>
</form>
And instead of having a function register(){ I replaced it with an if($register){, so when the Register button is pressed it runs the PHP code, but this edit doesn't seem to work either. So what can the problem be? If needed I can re-add this code on my domain. The login page has the same issue: nothing happens when the button is pressed besides emptying the fields.
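A hedged observation for anyone debugging along: the tutorial's switch($act) relies on PHP's long-deprecated register_globals setting to populate $act from the query string; with it off (the default on any modern host), $act is always empty, so register() never runs. A minimal sketch of an explicit dispatch, keeping the tutorial's names:
<?php
// populate $act explicitly instead of relying on register_globals
$act = isset($_GET['act']) ? $_GET['act'] : '';

switch($act){
    case "register":
        register();
        break;
    default:
        register_form();
        break;
}
?>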

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use. Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include: Oracle CRM On Demand Disconnected Mobile Sales (DMS) – a disconnected native iPad solution, DMS further streamlines the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing (including message response tracking), and improvements in administration and configuration, such as more field management options for read-only fields, role management and enhanced logging. Oracle CRM On Demand Connected Mobile Sales – this add-on mobile service provides a configurable mobile solution on iOS, BlackBerry and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry and Android users. New features also include Single Sign-On to enhance security for mobile users. Oracle CRM On Demand Desktop – this application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup. Mobile Browser Support: The following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 Tablets, and Google Android. Leverage the Social Enterprise: Engaging customers via social channels is rapidly becoming a significant key to enhanced customer experience, as it provides proactive customer service, targeted messaging and greater intimacy throughout the entire customer lifecycle. Listening to customers on the social channels can identify a customer's sphere of influence and the real value they bring to their organization, or the impact they can have on the opportunity. Servicing the customer's need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagements, thus increasing revenue. Oracle CRM On Demand is leveraging the Social Enterprise through its integration with Oracle's Social Relationship Management (SRM) product suite by providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM) and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM and have leads entered on an SM lead form flow automatically into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process.
Get Smarter with Actionable Insight: The difference between making good decisions and great decisions depends heavily upon the quality, structure, and availability of information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics reports folders, allowing for read-only access to reports, and additional field and object coverage. Get More Productive with Powerful Tools: Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related-information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, along with improved Custom Object management. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to. Get the Best Value: Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry's lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration, and extensibility features, along with enhanced industry editions to save you time and money. New features include: Business Process Administration: A new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied. Expanded Integration: Oracle continues to improve Web Services each release, by adding more object coverage, enabling customers and partners to easily integrate with CRM On Demand. Bottom Line: Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • XNA: Huge Tile Map, long load times

    - by Zach
    Recently I built a tile map generator for a game project. What I am very proud of is that I finally got it to the point where I can have a GIANT 2D map build perfectly on my PC. About 120000 pixels by 40000 pixels. I can actually go larger, but I have a couple of drawbacks. #1: RAM; the map currently uses about 320MB of RAM, and I know the Xbox allows 512MB, I think. #2: It takes 20 minutes for the map to build and then display on the Xbox; on my PC it takes less than a few seconds. I need to bring that 20 minutes of generating down as far as I can, and lower the amount of RAM usage while still being able to generate my map. Right now everything is stored in jagged arrays, each piece generating at a size of 1280x720 (the mother piece), up to the amount that I need. Every block is exactly 40x40 pixels; however, the blocks get removed from a List or regenerated in a List depending on how close the mother piece is to the player, saving A LOT of CPU, so at all times it's no more than looping through some 5184 blocks. Well, at least I'm sure of this. But how can I lower my RAM usage without hurting the size of the map, and how can I lower these INSANE loading times? EDIT: Let me explain myself better. Also, I'd like to let everyone know now that I'm inexperienced with many of these things. So here is an example of the arrays I'm using, shortened:
int[][] array = new int[30][];
array[0] = new int[] { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };
array[1] = new int[] { 1, 3, 3, 3, 3, 1, 0, 0, 0, 0, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };
That goes on for around 30 arrays downward. Every time it hits a 1, it generates a 1280x720 tile map, exactly the way it does above. This is how I loop through those arrays:
for (int i = 0; i < array.Length; i += 1)
{
    for (int h = 0; h < array[i].Length; h += 1)
    {
    }
}
Now, how the tiles are drawn and removed is something like this:
public void Draw(SpriteBatch spriteBatch, Vector2 cam)
{
    if (cam.X >= this.Position.X - 1280)
    {
        if (cam.X <= this.Position.X + 2560)
        {
            if (cam.Y >= this.Position.Y - 720)
            {
                if (cam.Y <= this.Position.Y + 1440)
                {
                    if (visible)
                    {
                        if (once == 0)
                        {
                            once = 1;
                            visible = false;
                            regen();
                        }
                    }
                    for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles[i].Draw(spriteBatch, cam); }
                    for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles[i].Draw(spriteBatch, cam); }
                }
                else
                {
                    once = 0;
                    for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                    for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
                }
            }
            else
            {
                once = 0;
                for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
                for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
            }
        }
        else
        {
            once = 0;
            for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
            for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
        }
    }
    else
    {
        once = 0;
        for (int i = Tiles.Count - 1; i >= 0; i--) { Tiles.RemoveAt(i); }
        for (int i = unWalkTiles.Count - 1; i >= 0; i--) { unWalkTiles.RemoveAt(i); }
    }
}
If you guys still need more information just ask in the comments.

    Read the article

  • Easy Made Easier

    - by dragonfly
    How easy is it to deploy a 2-node, fully redundant Oracle RAC cluster? Not very. Unless you use an Oracle Database Appliance. The focus of this member of Oracle's Engineered Systems family is to simplify the configuration, management and maintenance throughout the life of the system, while offering pay-as-you-grow scaling. Getting a 2-node RAC cluster up and running in under 2 hours has been made possible by the Oracle Database Appliance. Don't take my word for it, just check out these blog posts from partners and end users. The Oracle Database Appliance Experience - Zip Zoom Zoom http://www.fuadarshad.com/2012/02/oracle-database-appliance-experience.html Off-the-shelf Oracle database servers http://normanweaver.wordpress.com/2011/10/10/off-the-shelf-oracle-database-servers/ Oracle Database Appliance – Deployment Steps http://marcel.vandewaters.nl/oracle/database-appliance/oracle-database-appliance-deployment-steps See how easy it is to deploy an Oracle Database Appliance for high availability with RAC? Now for the meat of this post, which is the first in a series of posts describing tips for making the deployment of an ODA even easier. The key to the easy deployment of an Oracle Database Appliance is the Appliance Manager software, which does the actual software deployment and configuration, based on best practices. But in order for it to do that, it needs some basic information first, including system name, IP addresses, etc. That's where the Appliance Manager GUI comes into play, taking a wizard approach to specifying the information needed. Using the Appliance Manager GUI is pretty straightforward, stepping through several screens of information to enter data in typical wizard style. Like most configuration tasks, it helps to gather the required information beforehand. But before you rush out to a committee meeting on what to use for host names, and rely on whatever IP addresses might be hanging around, make sure you are familiar with some of the auto-fill defaults for the Appliance Manager. I'll step through the key screens below to highlight the results of the auto-fill capability of the Appliance Manager GUI. Depending on which of the 2 Configuration Types (Config Type screen) you choose, you will get a slightly different set of screens. The Typical configuration assumes certain default configuration choices and has the fewest screens, whereas the Custom configuration gives you the most flexibility in what you configure from the start. In the examples below, I have used the Custom config type. One of the first items you are asked for is the System Name (System Info screen). This is used to identify the system, but also as the base for the default hostnames on following screens. In this screen shot, the System Name is "oda". When you get to the next screen (Generic Network screen), you enter your domain name, DNS IP address(es), and NTP IP address(es). Next up is the Public Network screen, seen below, where you will see the host name fields are automatically filled in with default host names based on the System Name, in this case "oda". The System Name is also the basis for default host names for the extra ethernet ports available for configuration as part of a Custom configuration, as seen in the 2nd screen shot below (Other Network). There is no requirement to use these host names, as you can easily edit any of the host names.
This does make filling in the configuration details easier and less prone to "fat fingers", if you are OK with these host names. Here is the full list of the automatically filled-in host names, using our System Name "oda" as the base: oda1, oda2, oda1-vip, oda2-vip, oda-scan, oda1-ilom, oda2-ilom, oda1-net1, oda2-net1, oda1-net2, oda2-net2, oda1-net3, oda2-net3. Another auto-fill feature of the Appliance Manager GUI follows a common practice of deploying IP addresses for a RAC cluster in sequential order. In the screen shot below, I entered the first IP address (Node1-IP), then hit Tab to move to the next field. As a result, the next 5 IP address fields were automatically filled in with the next 5 IP addresses, sequentially from the first one I entered. As with the host names, these are not required, and can be changed to whatever your IP address values are. One note of caution though: if the first IP Address field (Node1-IP) is filled out and you click in that field and back out, the following 5 IP addresses will be set to the sequential default. If you don't use the sequential IP addresses, pay attention to where you click that mouse. :-) In the screen shot below, by entering the netmask value in the Netmask field, in this case 255.255.255.0, the gateway value was auto-filled into the Gateway field, based on the IP addresses and netmask previously entered. As always, you can change this value. My last 2 screen shots illustrate that the same sequential IP address autofill and netmask-to-gateway autofill work when entering the IP configuration details for the Integrated Lights Out Manager (ILOM) for both nodes. The time these auto-fill capabilities save in entering data is nice, but from my perspective not as important as the opportunity to avoid data entry errors. In my next post in this series, I will touch on the benefit of using the network validation capability of the Appliance Manager GUI prior to deploying an Oracle Database Appliance.

    Read the article

  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement, Bryan for the proposed solution, and me for testing the solution and proving it works :0) A customer's implementation of Sarbanes-Oxley requires them to change all database account passwords every 90 days. This is scripted today, leveraging shell scripts for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using WebLogic on the middle tier, which is the first clue to Bryan's proposed solution. To paraphrase and embellish Bryan's solution a little: why not use a JNDI connection from BIP to the database, then employ the WebLogic scripting engine to make updates to the JNDI as needed? BIP is completely uninvolved, and with a little 'timing' users will be completely unaware of the password updates, i.e. change the password when reports are not being executed. Perfect! James immediately tracked down the WLST script that could be used here, http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish). Now it was just a case of testing the theory. Some steps:
1. Create the JNDI connection in WLS.
2. Create the JNDI connection in BI Publisher, pointing to the WLS connection.
3. Build new data models using, or re-point data sources to use, the JNDI connection.
4. Create the WLST script to update the WLS JNDI password as needed.
5. Test!
Some details. Creating the JNDI connection in WebLogic is pretty straightforward. Log into the console and look for Data Sources under the Services section of the home page and click it. Click New >> Generic Data Source. Give the connection a name. For the JNDI name, prefix it with 'jdbc/' so I have 'jdbc/localdb' - this name is important, you'll need it on the BIP side. Select your db type - this will influence the drivers and information needed on the next page. Being a company man, I'm using an Oracle db. Click Next. Select the driver of choice; there's lots, I know, and you can read about them. I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later'. Click Next >> Next. Fill out the db name (SID), server, port, username to connect and password >> Next. Test the config to ensure you can connect >> Next. Now you need to deploy the connection to your BI server; select it and click Next. You're done with the JNDI config. Creating the JNDI connection on the Publisher side is covered here. Just remember to use the connection name you created in WLS, e.g. 'jdbc/localdb'. Not gonna tell you how to do this, go read the user guide :0) Suffice to say, it works. This requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However, a bit of googlin' and I found an even easier way of running the script: ${ServerHome}/common/bin/wlst.sh updatepwd.py where updatepwd.py is my script file; it can be in another directory. As part of the wlst.sh script your environment is set up for you, so it's very simple to execute.
I also found another issue with the script: WLS 10.x does not allow updates to passwords using cleartext, i.e. un-encrypted text, while the server is in production mode. It's a bit much to switch the server back to development mode, bounce it, change the passwords, bounce again, then switch back to production mode and bounce once more. After lots of messing about I finally came up with the following:

#############################################################################
#
# Update password for JNDI connections
#
#############################################################################

print("*** Trying to Connect.... *****")
connect('weblogic','welcome1','t3://localhost:7001')
print("*** Connected *****")

# open an edit session against the domain
edit()
startEdit()

# encrypt the new password so it can be set while in production mode
print ("*** Encrypt the password ***")
en = encrypt('hr')
print "Encrypted pwd: ", en

# navigate to the data source's driver params and set the encrypted password
print ("*** Changing pwd for LocalDB ***")
dsName = 'LocalDB'
print 'Changing Password for DataSource ', dsName
cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName)
set('PasswordEncrypted',en)

save()
activate()

It's pretty simple and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file, but you can pass it in as a parameter using the properties file method. I'm not going to get into the detail of that here, but it's covered with an example here. A couple of points to note:

1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above.
2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly.

My command to run the whole script was:

d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py

where wlst.cmd is the scripting command line and updatepwd.py is my update password script above. I have not quite spoon-fed everything you need to make it a robust script, but at least you know you can do it and you can work out the rest, I think :0)
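For what it's worth, here's a minimal sketch of the properties-file approach mentioned above, so the password never has to live in the script itself. The file name (dspwd.properties) and the property keys (dsName and dsPassword) are my own inventions for illustration; WLST's loadProperties() command turns each key in the file into a script variable of the same name:

# Contents of dspwd.properties (keep this file locked down):
#   dsName=LocalDB
#   dsPassword=hr

# load each property as a WLST variable of the same name
loadProperties('dspwd.properties')

print("*** Trying to Connect.... *****")
connect('weblogic','welcome1','t3://localhost:7001')

edit()
startEdit()

# encrypt the value read from the properties file before setting it
en = encrypt(dsPassword)

cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName)
set('PasswordEncrypted',en)

save()
activate()
disconnect()

Run it exactly as before - wlst.sh updatepwd.py - and remember the server bounce afterwards so the data source picks up the new credentials.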

    Read the article

  • Building a plug-in for Windows Live Writer

    - by mbcrump
This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins, and in this tutorial I will show you how easy it is to build one. Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0 and the application type to Class Library, and give it a name. In this tutorial, we are going to create a plug-in that generates a Twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post.

Once we have a new project created, we need to set up the references. Add a reference to WindowsLive.Writer.Api.dll, located in the C:\Program Files (x86)\Windows Live\Writer\ folder if you are using the x64 version of Windows. You will also need to add references to System.Windows.Forms and System.Web from the .NET tab. Once that is complete, add your "using" statements so that it looks like what's shown below:

using System;
using System.Collections.Generic;
using System.Text;
using WindowsLive.Writer.Api;
using System.Web;

Now, we are going to set up some build events to make it easier to test our custom class. Go into the properties of your project, select Build Events, click Edit Post-build and copy/paste the following line:

XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\"

Next, we are going to launch an external program on debug. Click the Debug tab and enter:

C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe

Now we have a blank project and we need to add some code. We start by adding the attributes for the Live Writer plug-in. Before we get started creating the attributes, we need to create a GUID. This GUID will uniquely identify our plug-in. To create a GUID, click Tools from the VS menu -> Create GUID. It will generate a GUID like the one listed below:

<Guid("56ED8A2C-F216-420D-91A1-F7541495DBDA")>

We only want what's inside the quotes, so our final product is "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snippet into your class, just above the public class:

[WriterPlugin("56ED8A2C-F216-420D-91A1-F7541495DBDA",
   "Generate Twitter Message",
   Description = "After your new post has been published, this plug-in will attempt to generate a Twitter status message with the Title and TinyUrl link.",
   HasEditableOptions = false,
   Name = "Generate Twitter Message",
   PublisherUrl = "http://michaelcrump.net")]
[InsertableContentSource("Generate Twitter Message")]

Next, we need to inherit from the PublishNotificationHook class and override OnPostPublish. I'm not going to dive into what the code is doing, as you should be able to follow pretty easily. The code below is the entire code used in the project.
public class Class1 : PublishNotificationHook
{
    public override void OnPostPublish(System.Windows.Forms.IWin32Window dialogOwner, IProperties properties, IPublishingContext publishingContext, bool publish)
    {
        if (!publish) return;

        if (string.IsNullOrEmpty(publishingContext.PostInfo.Permalink))
        {
            PluginDiagnostics.LogError("Live Tweet didn't execute, due to blank permalink");
        }
        else
        {
            // URL-encode the status text: "#blogged : " plus the blog post title
            var strBlogName = HttpUtility.UrlEncode("#blogged : " + publishingContext.PostInfo.Title);
            // Convert the post's permalink into a TinyURL link
            var strUrlFinal = getTinyUrl(publishingContext.PostInfo.Permalink);
            // Open Twitter with the status text pre-filled
            System.Diagnostics.Process.Start("http://twitter.com/home?status=" + strBlogName + strUrlFinal);
        }
    }

We are going to go ahead and create a method that builds the short URL (TinyURL). Note the closing brace at the end, which finishes off the class:

    private static string getTinyUrl(string url)
    {
        var cmpUrl = System.Globalization.CultureInfo.InvariantCulture.CompareInfo;
        // Only shorten the URL if it isn't already a TinyURL link
        if (!cmpUrl.IsPrefix(url, "http://tinyurl.com"))
        {
            var address = "http://tinyurl.com/api-create.php?url=" + url;
            var client = new System.Net.WebClient();
            return (client.DownloadString(address));
        }
        return (url);
    }
}

Go ahead and build your project; it should have copied the .DLL into the Windows Live Writer plug-in directory. If it did not, you will want to check your configuration. Once that is complete, open Windows Live Writer, select Tools -> Options -> Plug-ins and enable the plug-in that you just created. Click OK and publish your blog post. You should get a confirmation pop-up; hit OK and it should open Twitter and either ask for a login or fill in your status. That should do it. You can do so many other things with the API, and I suggest that if you want to build something really useful you consult the MSDN pages. This plug-in was perfect for what I needed and I hope someone finds it useful.
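As a quick aside, you can sanity-check the TinyURL call outside of Live Writer with a tiny console app. The sketch below is my own - the class name and the sample URL are made up for illustration; only the tinyurl.com api-create endpoint comes from the plug-in code above:

using System;
using System.Net;

class TinyUrlSmokeTest
{
    static void Main()
    {
        // Hypothetical long URL, standing in for a real post permalink
        var longUrl = "http://michaelcrump.net/archive/some-long-post-title.aspx";

        // Same TinyURL API call the plug-in's getTinyUrl helper makes
        var address = "http://tinyurl.com/api-create.php?url=" + longUrl;
        var client = new WebClient();

        // Prints the shortened link, e.g. http://tinyurl.com/xxxxxxx
        Console.WriteLine(client.DownloadString(address));
    }
}

If that prints a shortened link, the plug-in's helper will behave the same way when a post is published.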

    Read the article
