Search Results

Search found 3227 results on 130 pages for 'jason smith'.


  • WebCenter Content (WCC) Trace Sections

    - by Kevin Smith
    Kyle has a good post on how to modify the size and number of WebCenter Content (WCC) trace files. His post reminded me that I have been meaning to write a post on WCC trace sections for a while.
    searchcache - Tells you if your query was found in the WCC search cache.
    searchquery - Shows the processing of the query as it is converted from what the user submitted to the final query that will be sent to the database. Shows the conversion from the universal query syntax to the syntax specific to the search solution WCC is configured to use.
    services (verbose) - Lists the filters that are called for each service. This will let you know what filters are available for each service and will also tell you which filters are used by WCC add-on components and any custom components you have installed. The How To Component Sample has a list of filters, but it has not been updated since 7.5, so it is a little outdated now. With each new release WCC adds more filters. If you have a filter that has no code attached to it you will see output like this:
    services/6    09.25 06:40:26.270    IdcServer-423    Called filter event computeDocName with no filter plugins registered
    When a WCC add-on or custom component uses a filter you will see trace output like this:
    services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionValidateCheckinData with parameter postValidateCheckinData
    services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionFilters with parameter postValidateCheckinData
    As you can see from this sample output, it is possible to have multiple code points using the same filter.
    systemdatabase - Dumps the database call AFTER it executes. This can be somewhat troublesome if you are trying to track down some weird database problems. We had a problem where WCC was getting into a deadlock situation. We turned on the systemdatabase trace section and thought we had captured the problem database call, but since the call is printed only after it executes, we were actually looking at the database call BEFORE the one causing the deadlock. We ended up having to turn on tracing at the database level to see the database call WCC was making that was causing the deadlock.
    socketrequests (verbose) - Dumps the actual messages received and sent over the socket connection by WCC for a service. If you have gzip enabled you will see junk on the response coming back from WCC; for debugging, disable the gzip of the WCC response. Here is an example of the dump of the request for a GET_SEARCH_RESULTS service call:
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: REMOTE_USER=sysadmin.USER-AGENT=Java;.Stel
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: lent.CIS.11g.CONTENT_TYPE=text/html.HEADER
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: _ENCODING=UTF-8.REQUEST_METHOD=POST.CONTEN
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: T_LENGTH=270.HTTP_HOST=CIS.$$$$.NoHttpHead
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: ers=0.IsJava=1.IdcService=GET_SEARCH_RESUL
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: [email protected]
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: calData.SortField=dDocName.ClientEncoding=
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: UTF-8.IdcService=GET_SEARCH_RESULTS.UserTi
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: meZone=UTC.UserDateFormat=iso8601.SortDesc
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: =ASC.QueryText=dDocType..matches..`Documen
    socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: t`.@end.
    userstorage, jps - Provides trace details for user authentication and authorization, including information on the determination of what roles and accounts a user has access to. In 11g a new trace section, jps, was added with the addition of the JpsUserProvider to communicate with WebLogic Server.
    The WCC developers decide when to use the verbose option for their trace output, so sometimes you need to try verbose to see what different information you get. One of the things I would always have liked to see is the ability to turn on verbose output selectively for individual trace sections. When you turn on verbose output you get it for all trace sections you have enabled, which can quickly fill up your trace files with a lot of information if you have the socket trace section turned on.

    Read the article

  • How best do you represent a bi-directional sync in a REST api?

    - by Edward M Smith
    Assuming a system where there's a web application with a resource, and a reference to a remote application with another, similar resource, how do you represent a bi-directional sync action which synchronizes the 'local' resource with the 'remote' resource? Example: I have an API that represents a todo list (GET/POST/PUT/DELETE /todos/, etc.). That API can reference remote TODO services (GET/POST/PUT/DELETE /todo_services/, etc.), and I can manipulate todos from the remote service through my API as a proxy via GET/POST/PUT/DELETE /todo_services/abc123/, etc. I want the ability to do a bi-directional sync between a local set of todos and the remote set of todos. In an RPC sort of way, one could do POST /todo_services/abc123/sync/, but in the "verbs are bad" idea, is there a better way to represent this action?

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post", I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data, and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go; you wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea?
    EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications
    EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:
    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about other than admin functions is someone only being allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app
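
    The "validation on the CouchDB side" mentioned above is normally done with a validate_doc_update function stored in a design document; CouchDB runs it on every write, including replicated ones. Below is a minimal, hedged sketch of that idea. The document fields (type, author, body) and the role check are illustrative assumptions, not details from the question, and the function is written here in TypeScript purely for readability (CouchDB stores it as plain JavaScript).
    function validateDocUpdate(newDoc: any, oldDoc: any, userCtx: any, secObj: any): void {
      // Basic "garbage data" check: a post must at least carry an author and a body.
      if (newDoc.type === "post" && (!newDoc.author || !newDoc.body)) {
        throw { forbidden: "Posts require an author and a body." };
      }
      // "Only the author can edit their post": reject updates to an existing post
      // unless the logged-in user wrote it or is a server admin.
      const isAdmin = userCtx.roles.indexOf("_admin") !== -1;
      if (oldDoc && oldDoc.type === "post" && oldDoc.author !== userCtx.name && !isAdmin) {
        throw { unauthorized: "Only the author can edit their post." };
      }
    }
    With a rule like this living in the database itself, the "no middle layer" setup described in the question at least keeps its single piece of business logic enforced server-side.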

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you've answered "Yes" to any of the above questions then you've suffered from 'drift'.
    Database drift is where the state of a database (schema, particularly) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly-understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it's time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update.
    A big issue with drift is that it can be hard to spot and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you'll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process?
    Red Gate's Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We've started to get a really good idea of how big a problem it can be and what database professionals need to know and do in order to deal with it. It's fair to say, we're pretty excited at the prospect of creating a tool that will really help, and we've had some great feedback on our initial ideas.
    We're now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you'd like help solving and are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list

    Read the article

  • Wacom Bamboo CTH460L issues in Ubuntu 10.04

    - by Robert Smith
    I recently bought a Wacom Bamboo Pen & Touch CTH460L. I installed the doctormo PPA; however, the pen functionality didn't work and the touch was very glitchy (when I touched it, it immediately double-clicked and began to drag elements on the screen). I tried to configure it using the wacom-utility package in the Synaptic Package Manager (version 1.21-1) but that didn't work either. Then I followed this post (#621, written by aaaalex), and after some problems trying to restart Ubuntu (graphics-related problems), the pen works fine (it could be better, though) but the touch functionality doesn't work anymore. Currently I have installed xserver-xorg-input-wacom (1:0.10.11-0ubuntu7), wacom-dkms (0.8.10.2-1ubuntu1) and wacom-utility. The Wacom Utility only displays an "options" field under "Wacom BambooPT 2FG 4X5" but no other option to configure it. What is the correct way to get this tablet working on Ubuntu 10.04? By the way, currently I can't start Ubuntu properly when the tablet is connected (in that case, Ubuntu starts in low-graphics mode); I need to connect it later.

    Read the article

  • update-java-alternatives vs update-alternatives --config java

    - by Stan Smith
    Thanks in advance from this Ubuntu noob... On Ubuntu 12.04 LTS I have installed Sun's JDK7, Eclipse, and the Arduino IDE. I want the Arduino IDE to use OpenJDK 6 and want Eclipse to use Sun's JDK 7. From my understanding I need to manually choose which Java to use before running each application. This led me to the 'update-java-alternatives -l' command... when I run this I only see the following:
    java-1.6.0-openjdk-amd64 1061 /usr/lib/jvm/java-1.6.0-openjdk-amd64
    ...but when I run 'update-alternatives --config java' I see the following:
    *0 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   auto mode
     1 /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   manual mode
     2 /usr/lib/jvm/jdk1.7.0/bin/java                   manual mode
     3 /usr/lib/jvm/jre1.7.0/bin/java                   manual mode
    I don't understand why update-java-alternatives doesn't display the same options. I also don't understand how to switch between OpenJDK6 and JDK7. Can someone please explain how I can go about using OpenJDK6 for Arduino development and Sun JDK7 for Eclipse/Android development? Thank you in advance for any assistance or feedback you can offer. Stan

    Read the article

  • Is there a snippets program that allows for tab entry of variables across the mac?

    - by Jeremy Smith
    I love the Sublime Text editor and the ability to create code snippets like so:
    <snippet>
      <content><![CDATA[\$("${1:selector}").${2:method}]]></content>
      <tabTrigger>jq</tabTrigger>
    </snippet>
    This allows me to type jq[tab] and then have it expand to $("selector").method where I am able to tab through the strings 'selector' and 'method' in order to change them. But what I'd really like to do is use this same snippet when working in Chrome Dev Tools, so I was looking for a Mac snippets program that could support this. However, the three programs that I looked at (Keyboard Maestro, Snippets, CodeBox) don't support the ability to tab through to highlight predetermined strings and change them. Does anyone know of an app that will do this?

    Read the article

  • PrairieDevCon – Slide Decks

    - by Dylan Smith
    PrairieDevCon 2010 was an awesome time.  Learned a lot, and had some amazing conversations.  You guys even managed to convince me that NoSQL databases might actually be useful after all.  For those interested, here are my slide decks from my two sessions:
    - Agile In Action
    - Database Change Management With Visual Studio

    Read the article

  • How do developers verify that software requirement changes in one system do not violate a requirement of downstream software systems?

    - by Peter Smith
    In my work, I do requirements gathering, analysis and design of business solutions in addition to coding. There are multiple software systems and packages, and developers are expected to work on any of them, instead of being assigned to make changes to only one system or just a few systems. How do developers ensure they have captured all of the necessary requirements and resolved any conflicting requirements?
    An example of this type of scenario: Bob the developer is asked to modify the problem ticket system for a hypothetical utility repair business. They contract with a local utility company to provide this service. The old system provides a mechanism for an external customer to create a ticket indicating a problem with utility service at a particular address. There is a scheduling system and an invoicing system that are dependent on this data. Bob's new project is to modify the ticket placement system to allow multiple addresses to be entered by a landlord or other end customer with multiple properties. The invoicing system bills per ticket, but should be modified to bill per address. What practices would help Bob discover that the invoicing system needs to be changed as well? How might Bob discover what other systems in his company might need to be changed in order to support the new changes/business model? Let's say there is a documented specification for each system involved, but there are many systems and Bob is not familiar with all of them. End of example.
    We're often in this scenario, and we do have design reviews, but management places ultimate responsibility for any defects (business process or software process) on the developer who is doing the design and the work. Some organizations seem to be better at this than others. How do they manage to detect and solve conflicting or incomplete requirements across software systems? We currently have a lot of tribal knowledge and just a few developers who understand the entire business and software chain. This seems highly ineffective and leads to problems at the requirements level.

    Read the article

  • Does the Huawei E398 LTE Modem work with a Netgear Router?

    - by Paul Smith
    I've been using a Netgear router, sharing the signal through my Huawei E160 dongle, but following the speed updates and the device limit, it no longer satisfies the demand. Through research, I've found that the Huawei E398 is the latest functional dongle so far. I'm going to upgrade, but the confusion comes: is it compatible with my Netgear router? Will it work? Any information will be appreciated. I found the unlocked E398 here.

    Read the article

  • Does the type of prior employers matter when applying for a new job?

    - by Peter Smith
    Is there a bias in industry regarding the kind of previous employers an applicant has had (Government contractors, researchers, small business, large corporations)? I'm currently working for a University as a generalist programmer and I like my job here. But I'm worried that if I had to switch jobs down the road and apply for a corporate job that my resume would be dismissed based on the fact that I'm working in academia.

    Read the article

  • GDL Presents: Women Techmakers with Diane Greene

    Megan Smith co-hosts with Cloud Platform PM Lead Jessie Jiang. They will be exploring former VMware CEO and current Google, Inc. board member Diane Greene's high-level strategic thoughts about the Cloud, as well as the direction in which she sees the tech industry heading for women.
    Hosts: Megan Smith - Vice President, Google [x] | Jessie Jiang - Product Management Lead, Google Cloud Platform
    Guest: Diane Greene - Board of Directors, Google, Inc.

    Read the article

  • Is it customary for software companies to forbid code authors from taking credit for their work? Do code authors have a say?

    - by J Smith
    The company I work for has decided that the source code for a set of tools they make available to customers is also going to be made available to those customers. Since I am the author of that source code, and since many source code files have my name written in them as part of class declaration documentation comments, I've been asked to remove author information from the source code files, even though the license headers at the beginning of each source file make it clear that the company is the owner of the code. Since I'm relatively new to this industry I was wondering whether it's considered typical for companies that decide to make their source code available to third parties to not allow the code authors to take some amount of credit for their work, even when it's clear that the code author is not the owner of the code. Am I right in assuming that I don't have a say on the matter?

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices, rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need to have eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices via the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normal, however I would like to render a cube with flat faces. Also, averaging still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
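
    For what it's worth, the "duplicate each vertex for each face" option described above is commonly sketched like this: the cube becomes 24 vertices (4 per face) so each copy can carry its own face normal, while the index buffer still shares vertices between the two triangles of a face. The snippet below is a hedged illustration in WebGL-flavored TypeScript, not code from the question, and only two of the six faces are filled in.
    function buildCubeBuffers(gl: WebGLRenderingContext) {
      const positions = [
        // Front face (z = +1): 4 vertices shared only by this face's triangles.
        -1, -1,  1,   1, -1,  1,   1,  1,  1,  -1,  1,  1,
        // Top face (y = +1): repeats two corner positions, but as new vertices
        // so they can carry a different normal (and texture coordinates).
        -1,  1, -1,  -1,  1,  1,   1,  1,  1,   1,  1, -1,
        // ... the remaining four faces follow the same pattern
      ];
      const normals = [
        0, 0, 1,   0, 0, 1,   0, 0, 1,   0, 0, 1,   // front face normal
        0, 1, 0,   0, 1, 0,   0, 1, 0,   0, 1, 0,   // top face normal
        // ...
      ];
      // Two triangles per face, indices local to that face's 4 vertices.
      const indices = [
        0, 1, 2,   0, 2, 3,   // front
        4, 5, 6,   4, 6, 7,   // top
        // ...
      ];

      const positionBuffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
      gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);

      const normalBuffer = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
      gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normals), gl.STATIC_DRAW);

      const indexBuffer = gl.createBuffer();
      gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
      gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);

      return { positionBuffer, normalBuffer, indexBuffer, indexCount: indices.length };
    }
    Even after the duplication, 24 indexed vertices with per-face normals is still well short of the 36 completely un-indexed vertices mentioned in the question, so indexing keeps paying off.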

    Read the article

  • Should I add parameters to instance methods that use those instance fields as parameters?

    - by john smith optional
    I have an instance method that uses instance fields in its work. I can leave the method without those parameters, since the fields are available to me, or I can add them to the parameter list, thus making my method more "generic" and not reliant on the class. On the other hand, that means extra parameters in the parameter list. Which approach is preferable and why?
    Edit: at the moment I don't know if my method will be public or private.
    Edit 2: clarification: both the method and the fields are instance level.
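
    Purely as an illustration of the two signatures being weighed (the class and field names below are hypothetical, and TypeScript is just the language used for the sketch):
    class ReportPrinter {
      private title = "Monthly report";
      private pageCount = 12;

      // Option 1: read the instance fields directly; shortest call site,
      // but the method only works against this object's state.
      printSummary(): string {
        return `${this.title} (${this.pageCount} pages)`;
      }

      // Option 2: take the same values as parameters; the method no longer
      // depends on the fields, at the cost of a longer parameter list.
      printSummaryOf(title: string, pageCount: number): string {
        return `${title} (${pageCount} pages)`;
      }
    }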

    Read the article

  • Isometric drawing order with larger-than-single-tile images - drawing order algorithm?

    - by Roger Smith
    I have an isometric map over which I place various images. Most images will fit over a single tile, but some images are slightly larger. For example, I have a bed of size 2x3 tiles. This creates a problem when drawing my objects to the screen, as I get some tiles erroneously overlapping other tiles. The two solutions that I know of are either splitting the image into 1x1 tile segments or implementing my own draw order algorithm, for example by assigning each image a number. The image with number 1 is drawn first, then 2, 3, etc. Does anyone have advice on what I should do? It seems to me like splitting an isometric image is very non-obvious. How do you decide which parts of the image are 'in' a particular tile? I can't afford to split up all of my images manually either. The draw order algorithm seems like a nicer choice, but I am not sure if it's going to be easy to implement. I can't work out, in my head, how to deal with situations whereby you change the index of one image, which causes a knock-on effect on many other images. If anyone has any resources/tutorials on this I would be most grateful.
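
    One way the "assign each image a draw order number" idea above is often sketched is as a painter's-algorithm sort, where the number is recomputed from tile coordinates every frame instead of being maintained by hand. The snippet below is a hedged illustration in TypeScript: the depth key (the far corner's tileX + tileY) and the diamond tile size are assumptions, one common heuristic rather than a complete solution, and it can still mis-sort some pathological multi-tile overlaps.
    const TILE_W = 64, TILE_H = 32; // assumed diamond tile size

    interface Drawable {
      tileX: number;      // tile of the object's origin corner
      tileY: number;
      tileWidth: number;  // footprint in tiles, e.g. 2x3 for the bed
      tileHeight: number;
      image: HTMLImageElement;
    }

    // Convert a tile coordinate to screen space (standard diamond projection,
    // origin offset omitted for brevity).
    function tileToScreen(tx: number, ty: number): { x: number; y: number } {
      return { x: (tx - ty) * (TILE_W / 2), y: (tx + ty) * (TILE_H / 2) };
    }

    // Objects whose footprint reaches further "down-screen" should draw later.
    function depthKey(d: Drawable): number {
      return (d.tileX + d.tileWidth - 1) + (d.tileY + d.tileHeight - 1);
    }

    function drawScene(ctx: CanvasRenderingContext2D, drawables: Drawable[]): void {
      // Painter's algorithm: sort back-to-front each frame, then draw in order,
      // so nothing ever has to be renumbered when a single object moves.
      const ordered = [...drawables].sort((a, b) => depthKey(a) - depthKey(b));
      for (const d of ordered) {
        const p = tileToScreen(d.tileX, d.tileY);
        ctx.drawImage(d.image, p.x, p.y);
      }
    }
    Recomputing the order from positions each frame sidesteps the worry in the question about changing one image's index and knocking on to all the others, since no stored numbers need updating.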

    Read the article

  • How to run RCU from the command line

    - by Kevin Smith
    When I was trying to figure out how to run RCU on 64-bit Linux I found this post. It shows how to run RCU from the command line. It didn't actually work for me, so you can see my post on how to run RCU on 64-bit Linux. But seeing how to run RCU from the command line got me started thinking about running RCU from the command line to create the schema for WebCenter Content. That post got me part of the way there, since it shows how to run RCU silently from the command line, but to do this you need to know the name of the RCU component for WebCenter Content. I poked around in the RCU files and found the component name for WCC is CONTENTSERVER11. There is a contentserver11 directory in rcuHome/rcu/integration, and when you look at the contentserver11.xml file you will see:
    <RepositoryConfig COMP_ID="CONTENTSERVER11">
    With the component name for WCC in hand I was able to use this command line to run RCU and create the schema for WCC:
    .../rcuHome/bin/rcu -silent -createRepository -databaseType ORACLE -connectString localhost:1521:orcl1 -dbUser sys -dbRole sysdba -schemaPrefix TEST -component CONTENTSERVER11 -f <rcu_passwords.txt
    To make the silent part work and not have it prompt you for the passwords needed (the sys password and a password for each schema) you use the -f option and specify a file containing the passwords, one per line, in the order the components are listed on the -component argument. Here is the output from RCU when I ran the above command:
    Processing command line ....
    Repository Creation Utility - Checking Prerequisites
    Checking Global Prerequisites
    Repository Creation Utility - Checking Prerequisites
    Checking Component Prerequisites
    Repository Creation Utility - Creating Tablespaces
    Validating and Creating Tablespaces
    Repository Creation Utility - Create
    Repository Create in progress.
    Percent Complete: 0
    ...
    Percent Complete: 100
    Repository Creation Utility: Create - Completion Summary
    Database details:
    Host Name              : localhost
    Port                   : 1521
    Service Name           : ORCL1
    Connected As           : sys
    Prefix for (prefixable) Schema Owners : TEST
    RCU Logfile            : /u01/app/oracle/logdir.2012-09-26_07-53/rcu.log
    Component schemas created:
    Component                            Status  Logfile
    Oracle Content Server 11g - Complete Success /u01/app/oracle/logdir.2012-09-26_07-53/contentserver11.log
    Repository Creation Utility - Create : Operation Completed
    This works fine if you want to use the default tablespace sizes and options, but there does not seem to be a way to specify the tablespace options on the command line. You can specify the name of the tablespace and temp tablespace, but they must already exist in the database before running RCU. I guess you can always create the tablespaces first using your desired sizes and options and then run RCU and specify the tablespaces you created. When looking up the command line options in the RCU doc I found it has the list of components for each product that it supports. See Appendix B in the RCU User's Guide.

    Read the article

  • Solving Inbound Refinery PDF Conversion Issues, Part 1

    - by Kevin Smith
    Working with Inbound Refinery (IBR) and PDF Conversion can be very frustrating. When everything is working smoothly you kind of forget it is even there. Documents are checked into WebCenter Content (WCC), sent to IBR for conversion, converted to PDF, returned to WCC, and voila, your Office documents have a nice PDF rendition available for viewing. Then a user checks in a bunch of password-protected Word files, the conversions fail, your IBR queue starts backing up, users start calling asking why their documents have not been released yet, and you spend a frustrating afternoon trying to recover and get things running properly again. Password-protected documents are one cause of PDF conversion failures, and I will cover those in a future blog post, but there are many other problems that can cause conversions to fail, especially when working with the WinNativeConverter and using the native applications, e.g. Word, to convert a document to PDF. There are other conversion options like PDFExportConverter, which uses Oracle OutsideIn to convert documents directly to PDF without the need for the native applications. However, to get the best fidelity to the original document the native applications must be used. Many customers have tried PDFExportConverter, but have stayed with the native applications for conversion since the conversion results from PDFExportConverter were not as good as when the native applications are used.
    One problem I ran into recently, that at least has an easy solution, is Word documents that display a Show Repairs dialog when the document is opened. If you open the problem document yourself you will see this dialog. This will cause the conversion to time out; any time the native application displays a dialog that requires user input, the conversion will time out. The solution is to add a setting for BulletProofOnCorruption to the registry for the user running Word on the IBR server. See this support note from Microsoft for details. The support note says to set the registry key under HKEY_CURRENT_USER, but since we are running IBR as a service the correct location is under HKEY_USERS\.DEFAULT. Also, since in our environment we were using Office 2007, the correct registry key to use was:
    HKEY_USERS\.DEFAULT\Software\Microsoft\Office\11.0\Word\Options
    Once you have done this, restart the IBR managed server and resubmit your problem document. It should now be converted successfully. For more details on IBR see the Oracle® WebCenter Content Administrator's Guide for Conversion.

    Read the article

  • Computing a normal matrix in conjunction with gluLookAt

    - by Chris Smith
    I have a hand-rolled camera class that converts yaw, pitch, and roll angles into forward, side, and up vectors suitable for calling gluLookAt. Using this camera class I can modify the model-view matrix to move about the 3D world just fine. However, I am having trouble when using this camera class (and the associated model-view matrix) when trying to perform directional lighting in my vertex shader. The problem is that the light direction, (0, 1, 0) for example, is relative to where the 'camera is looking' and not the actual world coordinates. (Or is this eye coordinates vs. model coordinates?) I would like the light direction to be unaffected by the camera's viewing direction. For example, when the camera is looking down the Z axis the ground is lit correctly. However, if I point the camera straight at the ground, then it goes dark. This is (I think) because the light direction is parallel with the camera's 'up' vector, which is perpendicular to the ground's normal vector. I tried computing the normal matrix without taking the camera's model-view into account, but then none of my objects were rotated correctly. Sorry if this sounds vague. I suspect there is a straightforward answer, but I'm not 100% clear on how the normal matrix should be used for transforming vertex normals in my vertex shader. For reference, here is pseudo code for my rendering loop:
    pMatrix = new Matrix();
    pMatrix = makePerspective(...);
    mvMatrix = new Matrix();
    camera.apply(mvMatrix); // Calls gluLookAt
    // Move the object into position.
    mvMatrix.translatev(position);
    mvMatrix.rotatef(rotation.x, 1, 0, 0);
    mvMatrix.rotatef(rotation.y, 0, 1, 0);
    mvMatrix.rotatef(rotation.z, 0, 0, 1);
    var nMatrix = new Matrix();
    nMatrix.set(mvMatrix.get().getInverse().getTranspose());
    // Set vertex shader uniforms.
    gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, new Float32Array(pMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, new Float32Array(mvMatrix.getFlattened()));
    gl.uniformMatrix4fv(shaderProgram.nMatrixUniform, false, new Float32Array(nMatrix.getFlattened()));
    // ...
    gl.drawElements(gl.TRIANGLES, this.vertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);
    And the corresponding vertex shader:
    // Attributes
    attribute vec3 aVertexPosition;
    attribute vec4 aVertexColor;
    attribute vec3 aVertexNormal;
    // Uniforms
    uniform mat4 uMVMatrix;
    uniform mat4 uNMatrix;
    uniform mat4 uPMatrix;
    // Varyings
    varying vec4 vColor;
    // Constants
    const vec3 LIGHT_DIRECTION = vec3(0, 1, 0); // Opposite direction of photons.
    const vec4 AMBIENT_COLOR = vec4(0.2, 0.2, 0.2, 1.0);
    float ComputeLighting() {
      vec4 transformedNormal = vec4(aVertexNormal.xyz, 1.0);
      transformedNormal = uNMatrix * transformedNormal;
      float base = dot(normalize(transformedNormal.xyz), normalize(LIGHT_DIRECTION));
      return max(base, 0.0);
    }
    void main(void) {
      gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
      float lightWeight = ComputeLighting();
      vColor = vec4(aVertexColor.xyz * lightWeight, 1.0) + AMBIENT_COLOR;
    }
    Note that I am using WebGL, so if the answer is "use glFixThisProblem(...)", any pointers on how to re-implement that on WebGL if it's missing would be appreciated.

    Read the article

  • How to Stress Test the Hard Drives in Your PC or Server

    - by Tim Smith
    You have the latest drives for your server. You stacked the top-of-the-line RAM in the system. You run effective code for your system. However, what throughput is your system capable of handling, and can you really trust the capabilities listed by hardware companies?

    Read the article

  • What are the 'must know' GDB commands?

    - by Chris Smith
    I'm starting to get the hang of GDB, but everything still feels much slower than when debugging in Eclipse or Visual Studio. Are there any GDB commands you find particularly useful/productive? My life became dramatically better when I discovered:
    list - Display source code near the current instruction
    But that is still pretty basic. (And unnecessary when running GDB from Emacs.) Is there any way to do things like set up a watch window? (Print and update the result of an expression every time execution stops.)

    Read the article

  • Rawr Code Clone Analysis – Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext).  It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication.  The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them, it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are.   People that know me know that I'm anal enthusiastic about both writing clean code, and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited if it works well - and thats a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World Of Warcraft gear calculator engine program), run Code Clone Analysis against it, then go through the results one-by-one and refactor where appropriate blogging along the way.  My goals with this blog series are twofold: Evaluate and demonstrate Code Clone Analysis Provide some concrete examples of refactoring code to eliminate duplication and improve the code-base Here are the initial results:   Code Clone Analysis has found: 129 Exact Matches 201 Strong Matches 300 Medium Matches 193 Weak Matches Also indicated is that there was a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring.  Considering the entire solution only has 109,763 lines of code, if true, the duplicates lines of code number is pretty significant. In the next post we’ll start examining some of the individual results and determine if they really do indicate a potential refactoring.

    Read the article

  • Site migration and SEO impact

    - by John Smith
    I'd greatly appreciate a response on the following question relating to site migration and SEO impact. Here's some background on how my domain name and site is currently configured. My domain name provider has the following settings:
    - host name @ is an A NAME record and points to IP address x.x.x.x
    - host name www is an A NAME record and points to IP address x.x.x.x
    - sub-domain host name new.example.com is an A NAME record and points to IP address x.x.x.x
    My hosting provider has the following settings:
    - host record @ is an A NAME record and points to IP address x.x.x.x, folder home/public_html/old
    - host record www is a C NAME record and points to example.com
    - sub-domain host record new.example.com points to home/public_html/new
    I want to:
    - point the domain (example.com AND www.example.com) to the content hosted under folder home/public_html/new, which is currently the content directory for new.example.com
    - retire the content hosted under folder home/public_html/old
    - retire the sub-domain host record new.example.com
    I believe the easiest method of doing this is removing the sub-domain host record new.example.com, and changing the following line in the .htaccess file in home/public_html from
    # Change 'subdirectory' to be the directory you will use for your main domain.
    RewriteCond %{REQUEST_URI} !^/old/
    to
    # Change 'subdirectory' to be the directory you will use for your main domain.
    RewriteCond %{REQUEST_URI} !^/new/
    But I don't understand how this will impact my SERP - ideally, I'd like it to remain the same. Research on this topic resulted in the following Google page, which was no help, and this related StackExchange question, which suggests that this should not affect my SERP (at least, not permanently). But I wanted to make certain with a more specific example, and hopefully contribute to the community at the same time. I'd appreciate any feedback on this. Is there a better/recommended method to migrate sites this way? Is there an SEO impact?

    Read the article

  • UML Class Diagram: Abstract or Interface?

    - by J Smith
    I am modeling a class diagram and have spotted an opportunity to simplify it slightly. What I want to know is: would it be better to implement an abstract class or an interface? The scenario is this. I have the classes:
    - Artist
    - Genre
    - Album
    - Song
    All of which share the methods getName, setName, and getCount (playcount, that is). Would it be best to create an abstract 'Music' class with the aforementioned abstract methods, or should I create an interface, since the classes that implement the interface have to include all of the interface's methods (I think, correct me if I'm wrong). I hope I've given enough detail, please ask questions if I haven't. Thanks!
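
    Purely as an illustration of the two options being weighed (sketched here in TypeScript, which happens to support both constructs; the question itself is language-agnostic, and the member bodies below are assumptions):
    // Option 1: an interface only promises the members; Artist, Genre, Album
    // and Song would each have to implement all three themselves.
    interface MusicItem {
      getName(): string;
      setName(name: string): void;
      getCount(): number;
    }

    // Option 2: an abstract class can hold the shared state and method bodies
    // once, so subclasses inherit them instead of repeating them.
    abstract class AbstractMusicItem {
      protected name = "";
      protected playCount = 0;

      getName(): string { return this.name; }
      setName(name: string): void { this.name = name; }
      getCount(): number { return this.playCount; }
    }

    class Song extends AbstractMusicItem {}   // inherits all three members
    The usual rule of thumb applies: if the four classes genuinely share implementation (the same fields and method bodies), an abstract base class removes the duplication; if they only share a contract, an interface is the lighter-weight choice.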

    Read the article

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?
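
    For readers unfamiliar with the distinction the post draws between addslashes() and a prepared query: a parameterized (prepared) query keeps user input out of the SQL text entirely, unlike the string-concatenated mysql_query call quoted above. The sketch below only illustrates that idea. It is written in TypeScript with the node mysql2 client, which is an assumption made for the example and not the audited application's PHP stack, and the connection settings are placeholders.
    import mysql from "mysql2/promise";

    async function loadProfile(profileId: string) {
      const conn = await mysql.createConnection({
        host: "localhost", user: "app", password: "secret", database: "app",
      });
      // The '?' placeholder is sent separately from the SQL text, so a value
      // like "1' OR '1'='1" is treated as data, not as part of the query.
      const [rows] = await conn.execute(
        "SELECT * FROM profile WHERE profile_id = ?",
        [profileId]
      );
      await conn.end();
      return rows;
    }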

    Read the article
