Search Results

Search found 1770 results on 71 pages for 'steve kehlet'.


  • Inventory management system design problem

    - by Steve F
    What are the conventions for item batch identifiers in inventory management systems? For example: a retail supermarket can order 'Item X' from either 'Supplier A' or 'Supplier B'. When it completes an order for the item from either supplier, it needs to store the record of the receipt. Inventory quantity for the item is increased upon receipt of the order. However, it is also required to store some record of the supplier, so some sort of batch identifier is needed. This batch identifier uniquely identifies the item received and the supplier from whom it was received. A new batch is created each time items are received into stock (for example, after an order). Hence, for purposes of accounting/auditing, the information available to identify an item after it was sold comprises ITEM_CODE, ITEM_NAME, and BATCH_CODE. The BATCH_CODE is unique and is associated with DATE_RECEIVED, SUPPLIER_CODE, and QTY_RECEIVED. Is this a complete system specification for the above scenario, or has anything significant been left out?
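    To make the pieces concrete, here is a minimal sketch (all names are illustrative, not from any standard) of the records described above:

        import java.time.LocalDate;

        // One row per receipt: the batch ties the item to the supplier,
        // receipt date, and quantity it arrived with.
        record Batch(String batchCode,      // unique per receipt
                     String itemCode,
                     String supplierCode,
                     LocalDate dateReceived,
                     int qtyReceived) { }

        // A sale then only needs (ITEM_CODE, ITEM_NAME, BATCH_CODE);
        // the supplier and receipt details are reachable through the batch.
        record SoldItem(String itemCode, String itemName, String batchCode) { }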

    Read the article

  • Xlib: extension "GLX" missing on display ":0". Asus k53s

    - by Steve
    I had everything working fine. I use a number of OpenGL graphics programs, for example PyMOL, MGLTools, VMD, BALLView, RasMol, etc. All of these now give the error: Xlib: extension "GLX" missing on display ":0". and fail to initialize. I have an i7 Asus K53S with an NVIDIA GeForce, and I need these to do my work. I tried the Bumblebee fix, and also just removing all NVIDIA drivers, or rolling back. I do not know why these were working and then stopped, but I did notice a new NVIDIA console in the menu, which I assume was from enabling the automatic NVIDIA feeds. I also played with the xorg config, but have no clue what settings are valid. In addition, the display is not recognized 50% of the time now: it just gives a generic 640x480 and I have to log in and out 10 times to get it to return to a normal setting. When I try to set it manually, there is no other setting allowed from the settings menus, and the terminal changes just get reset every time I log out.

    Read the article

  • Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a Perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map renders the map's data structure rotated by 180 degrees. I noticed this problem when I was creating my rivers: I would set the river's height to flat, and the tiles that came out flat in the graphical representation were flipped and mirrored. Here are 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used. This constructor creates a new array of Tile objects, each holding a tile's type, coordinate, height, and its 4 vertices:

        public DTileMap (int size_x, int size_y) {
            this.size_x = size_x;
            this.size_y = size_y;
            // Initialize map_data array of Tile objects
            map_data = new Tile[size_x, size_y];
            for (int j = 0; j < size_y; j++) {
                for (int i = 0; i < size_x; i++) {
                    map_data[i, j] = new Tile ();
                    map_data[i, j].coordinate.x = (int)i;
                    map_data[i, j].coordinate.y = (int)j;
                    map_data[i, j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[1] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -j * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                    map_data[i, j].vertices[3] = new Vector3 ((i + 1) * GTileMap.TileMap.tileSize, map_data[i, j].Height, -(j - 1) * GTileMap.TileMap.tileSize);
                }
            }
        }

    This code sets the river tiles to height 0:

        foreach (Tile t in map_data) {
            if (t.realType == "Water") {
                t.vertices[0].y = 0f;
                t.vertices[1].y = 0f;
                t.vertices[2].y = 0f;
                t.vertices[3].y = 0f;
            }
        }

    And below is the code that generates the actual graphics from the data:

        public void BuildMesh () {
            DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z);
            int numTiles = size_x * size_z;
            int numTris = numTiles * 2;
            int vsize_x = size_x + 1;
            int vsize_z = size_z + 1;
            int numVerts = vsize_x * vsize_z;

            // Generate the mesh data
            Vector3[] vertices = new Vector3[numVerts];
            Vector3[] normals = new Vector3[numVerts];
            Vector2[] uv = new Vector2[numVerts];
            int[] triangles = new int[numTris * 3];

            int x, z;
            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    normals[z * vsize_x + x] = Vector3.up;
                    uv[z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z);
                }
            }

            for (z = 0; z < vsize_z; z++) {
                for (x = 0; x < vsize_x; x++) {
                    if (x == vsize_x - 1 && z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z - 1].vertices[3];
                    } else if (z == vsize_z - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z - 1].vertices[2];
                    } else if (x == vsize_x - 1) {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x - 1, z].vertices[1];
                    } else {
                        vertices[z * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[0];
                        vertices[z * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[1];
                        vertices[(z + 1) * vsize_x + x] = DTileMap.DTileMap.map_data[x, z].vertices[2];
                        vertices[(z + 1) * vsize_x + x + 1] = DTileMap.DTileMap.map_data[x, z].vertices[3];
                    }
                }
            }

            for (z = 0; z < size_z; z++) {
                for (x = 0; x < size_x; x++) {
                    int squareIndex = z * size_x + x;
                    int triOffset = squareIndex * 6;
                    triangles[triOffset + 0] = z * vsize_x + x + 0;
                    triangles[triOffset + 2] = z * vsize_x + x + vsize_x + 0;
                    triangles[triOffset + 1] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 3] = z * vsize_x + x + 0;
                    triangles[triOffset + 5] = z * vsize_x + x + vsize_x + 1;
                    triangles[triOffset + 4] = z * vsize_x + x + 1;
                }
            }

            // Create a new Mesh and populate it with the data
            Mesh mesh = new Mesh ();
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.normals = normals;
            mesh.uv = uv;

            // Assign our mesh to our filter/renderer/collider
            MeshFilter mesh_filter = GetComponent<MeshFilter> ();
            MeshCollider mesh_collider = GetComponent<MeshCollider> ();
            mesh_filter.mesh = mesh;
            mesh_collider.sharedMesh = mesh;
            calculateMeshTangents (mesh);
            BuildTexture (map);
        }

    If this looks familiar to you, it's because I got most of it from Quill18; I've been slowly adapting it for my own use. Please include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • PrestaShop install SQL error

    - by Steve
    I am trying to install PrestaShop 1.4.0.17, and reach Step 3. I enter the database information, which tests okay, and I choose the second option: Full mode: includes 100+ additional modules and demo products (FREE too!). I choose Next, and receive the error: Error while inserting data in the database: ‘CREATE TABLE `shop_county_zip_code` ( `id_county` INT NOT NULL , `from_zip_code` INT NOT NULL , `to_zip_code` INT NOT NULL , PRIMARY KEY ( `id_county` , `from_zip_code` , `to_zip_code` ) ) ENGINE=’ You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near \’\’ at line 6 (Error: 1064). This happens whether I use MyISAM or InnoDB, and also if I drop all database tables and try again in simple mode. Why is this happening? Is there a manual installation method?

    Read the article

  • .htaccess 301 Redirect for wildcard subdomains

    - by Steve
    I run WordPress in Network mode, which means I can have multiple websites running off one installation of WordPress. Each website runs as a subdomain. WordPress handles this using .htaccess and a wildcard subdomain pointing to the location of WordPress, so there are no actual subdomains created in cPanel; just a wildcard subdomain in cPanel, and WordPress handles the rest. I want to 301 redirect http://one.example.com/portfolio to http://two.example.com/portfolio. If I only have one .htaccess file in the web root of example.com, how do I achieve this?
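    One common way to do this (a sketch, assuming mod_rewrite is enabled and that these lines go above the WordPress rules in the root .htaccess; the host names and /portfolio path are taken from the question) is:

        RewriteEngine On
        # Only rewrite requests arriving on the one.example.com subdomain
        RewriteCond %{HTTP_HOST} ^one\.example\.com$ [NC]
        # Permanently redirect /portfolio (and anything under it) to two.example.com
        RewriteRule ^portfolio(/.*)?$ http://two.example.com/portfolio$1 [R=301,L]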

    Read the article

  • G5 quad core will not boot from Ubuntu CD

    - by Steve Howard
    I have a PPC G5 Quad Core with Leopard on one hard drive, and I want to install Ubuntu on a second hard drive. The second drive is installed and formatted as a Mac OS Extended (Journaled) drive. I have had no success booting from a CD or DVD with various PPC versions of Ubuntu using any of the suggested keys such as "C", Option, or anything else. Booting into Open Firmware doesn't work, as the system can't find the \install\yaboot file. I am using various CDs burned from ISO disk images, but none will boot. I have reset the PRAM, etc., to no avail. Beginning to get very frustrated. Can someone shed some light and provide me with a command line in Open Firmware that will work, or else direct me to a confirmed PPC-bootable version of Ubuntu please? I'd appreciate any help you can provide.
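    For reference, the Open Firmware command commonly suggested in Ubuntu's PowerPC documentation (assuming the disc burned correctly and the optical drive answers to the default cd alias) is:

        boot cd:,\install\yaboot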

    Read the article

  • What happens between sprints?

    - by Steve Bennett
    I'm working on a project loosely following the Scrum model. We're doing two-week sprints. Something I'm not clear on (and don't have a book to consult) is exactly what is supposed to happen between sprints. There should be some "wrap" process, where the product gets built and delivered, but:

    - how long does this typically take?
    - should the whole team be involved?
    - does it strictly have to finish before developers start working on the next sprint items?
    - is this when code review and testing take place?

    There are three developers, adding up to about 1 FTE, so the sprints are indeed very short.

    Read the article

  • Data Source Use of Oracle Edition Based Redefinition (EBR)

    - by Steve Felts
    Edition-based redefinition is a new feature in the 11gR2 release of the Oracle database. It enables you to upgrade the database component of an application while it is in use, thereby minimizing or eliminating down time. It works by allowing a pre-upgrade and a post-upgrade view of the data to exist at the same time, providing a hot upgrade capability. You can then specify which view you want for a particular session. See the Oracle Database Advanced Application Developer's Guide for further information. There is also a good white paper on Edition-Based Redefinition.

    Using this feature of the Oracle database does not require any new WebLogic Server functionality. It is set for each connection in the pool automatically by simply specifying SQL ALTER SESSION SET EDITION = edition_name in the Init SQL parameter in the data source configuration. This can be configured either via the console or via WLST (setInitSQL on the JDBCConnectionPoolParams). This SQL statement is executed for each newly created physical database connection. Note that we are assuming that a data source references only one edition of the database. To make use of this feature, you would have an earlier version of the application with a data source that references the earlier EDITION, and a later version of the application with a data source that references the later EDITION.

    Once you start talking about multiple versions of a WLS application, you should be using the WLS "side-by-side" or "versioned" deployment feature. See Developing Applications for Production Redeployment for more information. By combining Oracle database EBR and WLS versioned deployment, the application can be failed over with no downtime, making the combination of features more powerful than either independently. There is a catch - you need to be running with a versioned database and a versioned application initially so that you can then switch versions. The recommended way to version a WLS application is to simply add the "Weblogic-Application-Version" property in the MANIFEST.MF file (you can also specify it at deployment time). The recommended way to configure the data source is to use a packaged data source descriptor that's stored in the ear or war so that everything is self-contained.

    There are some restrictions. You can't use a packaged data source with Logging Last Resource (LLR) - you need to use a system resource. You can't use an application-scoped packaged data source with EmulateTwoPhaseCommit for the global-transactions-protocol with a versioned application - use a global scope. See Configuring JDBC Application Modules for Deployment for more details.

    There's one known problem - it doesn't work correctly with an XA data source (patch available with bug 14075837).
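    As a quick illustration, here is a hedged sketch (the JNDI name is hypothetical; sys_context is a standard Oracle function) that checks which edition a pooled connection ended up in once the Init SQL above is configured:

        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class EditionCheck {
            // With Init SQL set to "SQL ALTER SESSION SET EDITION = edition_name",
            // every pooled connection should report that edition here.
            public static void printEdition() throws NamingException, SQLException {
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myEbrDataSource");
                try (Connection c = ds.getConnection();
                     Statement s = c.createStatement();
                     ResultSet r = s.executeQuery(
                         "SELECT sys_context('userenv', 'current_edition_name') FROM dual")) {
                    if (r.next()) {
                        System.out.println("Connection is using edition: " + r.getString(1));
                    }
                }
            }
        }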

    Read the article

  • What is the best book for the preparation of MCPD Exam 70-564 (Designing and Developing ASP.NET 3.5 Applications)?

    - by Steve Johnson
    Hi all, I have seen a couple of questions like this one and scanned through the answers, but somehow the replies were not satisfactory or practical. So I wondered whether people who have gone through it might suggest a better approach for the preparation of this exam. Goal: my goal is actually NOT merely to pass the exam; I intend to actually master the skill. I have been doing ASP.NET web development for approximately 1.5 years, and I want to study something that really improves "Design and Development Skills" in web development in general and ASP.NET specifically, which I can put to use and build upon. Please suggest a book that teaches professional ASP.NET design and development skills and approaches to quality development, by working through practice design scenarios and their solutions and through case studies that involve design problems and their implemented solutions. Edit: I have found the Microsoft training kits to be fairly interesting and helpful, as they tend to increase knowledge, and I have made use of a lot of things after getting a good explanation from them. However, as far as the Microsoft Training Kit for 70-564 is concerned, there are not a lot of good reviews about it. What I have read and searched on the net (the reviews on Amazon and various forums, Stack Exchange, and Experts Exchange) was mostly inclined to the conclusion that the Microsoft Training Kit for Exam 70-564 is not good, at least not as good as other kits from Microsoft, such as the training kit for Exam 70-562. So I am looking for a proper book containing examples from practical real-world scenarios and case studies, from which I can not only learn but also master the skills, before wasting money on the Microsoft Training Kit for Exam 70-564. Waiting for experts to provide suitable advice.

    Read the article

  • How can I get SLI working with 295.40?

    - by Steve
    I've been doing a lot of googling these last few hours and I'm not having much luck; perhaps I don't know exactly what I am looking for. I just recently installed Ubuntu 12.04 LTS x86_64. Looks beautiful! I have two GTX 470s in SLI, and I am finally migrating my desktop over given the hopeful gaming support as of late. My laptop has been enjoying multiple distros of Ubuntu for a couple of years now. However, new problems come with unexplored territory. At first, only one of my two monitors worked. I fixed that with nvidia-xconfig, but the only solution that actually worked was TwinView. Just recently I read here that TwinView is not compatible with SLI. Sweet. When I try to tell it to use a separate X screen instead, configure it the way I want it, click save to configuration file, enter my password, then sudo restart lightdm, it's broken: one screen blacks or whites out (couldn't tell you the specific conditions for each; I'm dubious at this point), and I get a huge error dialogue box upon login, something about incompatible resolutions if I remember right, though I am sure I set the resolutions for each screen correctly. Anyway, when I try to enable SLI (sudo nvidia-xconfig --sli=On), despite the fact it hates TwinView, Unity breaks: the sidebar is there, but only one screen works, the mouse is trapped running along the left edge of it, and the background of the sidebar is a solid blue. This ended up being entirely too verbose, I'm sorry, but could anyone impart some wisdom, please? It would be appreciated!

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I've written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6. The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection because it's all available from the data source descriptor (or wallet).

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following.

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, it's supposed to take a database user and associated password, but different vendors implement this differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    The following list describes the features available for WebLogic data sources to configure database security credentials, and captures which features are compatible with one another.

    User authentication (default)
      Description: Default getConnection(user, password) behavior - validate the input and use the user/password in the descriptor.
      Can be used with: Set Client Identifier
      Can't be used with: Proxy Session, Identity pooling, Use database credentials

    Use database credentials
      Description: Instead of using the credential mapper, use the supplied user and password directly.
      Can be used with: Set Client Identifier, Proxy Session, Identity pooling
      Can't be used with: User authentication, Multi Data Source

    Set Client Identifier
      Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
      Can be used with: Everything

    Proxy Session
      Description: Set a light-weight proxy user associated with the connection (Oracle only).
      Can be used with: Set Client Identifier, Use database credentials
      Can't be used with: Identity pooling, User authentication

    Identity pooling
      Description: Heterogeneous pool of connections owned by specified users.
      Can be used with: Set Client Identifier, Use database credentials
      Can't be used with: Proxy Session, User authentication, Labeling, Multi Data Source, Active GridLink

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future).

    The subsequent articles will describe these features in more detail. Keep referring back to this list to see the big picture.
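    To make the default behavior concrete, here is a minimal sketch (the JNDI name and credentials are illustrative): the pair passed to getConnection is validated as a WLS user and then run through the data source's credential map to select the database user.

        import java.sql.Connection;
        import java.sql.SQLException;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class DefaultSecurityExample {
            public static void useConnection() throws NamingException, SQLException {
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource");
                // "weblogic_user" is a WLS user here, not a database user; by default
                // the credential map translates it to a DBMS user/password pair.
                try (Connection conn = ds.getConnection("weblogic_user", "weblogic_password")) {
                    // ... use the connection; close() returns it to the pool
                }
            }
        }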

    Read the article

  • Isometric tile range acquisition

    - by Steve
    I'm putting together an isometric engine and need to cull the tiles that aren't in the camera's current view. My tile coordinates go from left to right on the X and top to bottom on the Y, with (0,0) being the top-left corner. If I have access to, say, the top-left, top-right, bottom-left, and bottom-right corner coordinates, is there a formula or something I could use to determine which tiles fall in range? I've linked a picture of the layout of the tiles for reference. If there isn't one, or there's a better way to determine which tiles are on screen and which to cull, I'm all ears and am grateful for any ideas. I've got a few other methods I may be able to try, such as checking the position of the tile against a rectangle. I pretty much just need something quick. Thanks for giving this a read =)
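    Building on the rectangle idea mentioned above, here is a hedged sketch (the tile dimensions and camera bounds are assumed names, and the projection is the usual diamond layout implied by the coordinate description):

        public final class TileCulling {
            // True if the tile at map coordinate (x, y) can appear inside the
            // camera rectangle; tileW/tileH are the on-screen diamond dimensions.
            public static boolean tileOnScreen(int x, int y,
                                               int camLeft, int camTop,
                                               int camRight, int camBottom,
                                               int tileW, int tileH) {
                int screenX = (x - y) * (tileW / 2); // standard isometric projection
                int screenY = (x + y) * (tileH / 2);
                return screenX >= camLeft - tileW && screenX <= camRight
                    && screenY >= camTop - tileH && screenY <= camBottom;
            }
        }

    Testing every tile this way is quick to write; inverting the projection for the four screen corners instead gives the visible tile bounds directly, without scanning the whole map.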

    Read the article

  • Trying to use stencils in 2D while retaining layer depth

    - by Steve
    This is a screen of what's going on, just so you can get a frame of reference: http://i935.photobucket.com/albums/ad199/fobwashed/tilefloors.png The problem I'm running into is that my game is slowing down due to the amount of texture swapping I'm doing during my draw call. Since walls, characters, floors, and all objects are on their respective sprite sheets containing those types, per tile drawn, the loaded texture is swapping no less than 3 to 5+ times as it cycles and draws the sprites in order to layer properly. Now, I've tried throwing all common objects together into their respective lists and then drawing them using layerDepth, which makes things a lot better, but the new problem I'm running into has to do with the way my doors/windows are drawn on walls. Namely, I was using stencils to clear out a block on the walls in the shape of the door/window, so that when the wall would draw, it would have a door/window-sized hole in it. This is how my draw was set up for walls when I was going tile by tile rather than with grouped-up common objects. First it would check to see if a door/window was on this wall. If not, it would skip all the steps and just draw normally. Otherwise:

    1. end the current spriteBatch
    2. clear the buffers with a transparent color to preserve what was already drawn
    3. start a new spriteBatch with stencil settings
    4. draw the door area
    5. end the spriteBatch
    6. start a new spriteBatch that takes the previously set stencil into account
    7. draw the wall, which will now be drawn with a hole in it
    8. end that spriteBatch
    9. start a new spriteBatch with the normal settings to continue drawing tiles

    In the tile-by-tile draw, clearing the depth/stencil buffers didn't matter, since I wasn't using any layerDepth to organize what draws on top of what. Now that I'm drawing from lists of common objects rather than tile by tile, it has sped up my draw call considerably, but I can't seem to figure out a way to keep the stencil system to mask out the area a door or window will be drawn into a wall. The root of the problem is that when I end a spriteBatch to change the DepthStencilState, it flattens the current RenderTarget and there is no longer any depth sorting for anything drawn further down the line. This means walls always get drawn on top of everything regardless of depth or positioning in the game world, and even on top of each other, as the stencil has to happen once for every wall that has a door or window. Does anyone know of a way to get around this? To boil it down: I need a way to draw with things sorted by layer depth while also being able to stencil/mask out portions of specific sprites.

    Read the article

  • Will my traffic come back after my site redesign?

    - by Steve
    I screwed up. I launched my site after rebuilding it without setting up the proper 301s, and traffic immediately dropped about 60% (it's not really something I thought about). After about a week and a half, I set the 301s back up yesterday and resubmitted my sitemap to Google. Google has yet to index the whole thing, but traffic isn't getting any better. Is it likely to come back? If so, how long will it take? Has this happened to you? Any info is appreciated. I am really anxious!

    Read the article

  • Maintenance Wizard

    - by Steve He
    [The original post is in Chinese and has been largely destroyed by an encoding error. What remains indicates that it introduces the Maintenance Wizard, a tool that guides Oracle E-Business Suite upgrade and maintenance projects, and refers readers to Doc ID 215527.1 and Doc ID 430732.1 for details.]

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot, and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the work load is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The application should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users.

    If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough so that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision, since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
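    As a small illustration of the "reserve once, close promptly" advice above (the JNDI name and SQL statements are hypothetical):

        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Statement;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class PoolUsageExample {
            public static void doWork() throws NamingException, SQLException {
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource");
                // One reservation covers multiple statements; don't reserve and
                // close around each statement individually.
                try (Connection conn = ds.getConnection();
                     Statement stmt = conn.createStatement()) {
                    stmt.executeUpdate("UPDATE orders SET status = 'SHIPPED' WHERE id = 1");
                    stmt.executeUpdate("INSERT INTO audit_log (msg) VALUES ('order shipped')");
                } // close() returns the connection to the pool promptly
            }
        }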

    Read the article

  • Faster Memory Allocation Using vmtasks

    - by Steve Sistare
    You may have noticed a new system process called "vmtasks" on Solaris 11 systems:

        % pgrep vmtasks
        8
        % prstat -p 8
           PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
             8 root        0K    0K sleep   99  -20   9:10:59 0.0% vmtasks/32

    What is vmtasks, and why should you care? In a nutshell, vmtasks accelerates creation, locking, and destruction of pages in shared memory segments. This is particularly helpful for locked memory, as creating a page of physical memory is much more expensive than creating a page of virtual memory. For example, an ISM segment (shmflag & SHM_SHARE_MMU) is locked in memory on the first shmat() call, and a DISM segment (shmflg & SHM_PAGEABLE) is locked using mlock() or memcntl().

    Segment operations such as creation and locking are typically single threaded, performed by the thread making the system call. In many applications, the size of a shared memory segment is a large fraction of total physical memory, and the single-threaded initialization is a scalability bottleneck which increases application startup time.

    To break the bottleneck, we apply parallel processing, harnessing the power of the additional CPUs that are always present on modern platforms. For sufficiently large segments, as many as 16 threads of vmtasks are employed to assist an application thread during creation, locking, and destruction operations. The segment is implicitly divided at page boundaries, and each thread is given a chunk of pages to process. The per-page processing time can vary, so for dynamic load balancing, the number of chunks is greater than the number of threads, and threads grab chunks dynamically as they finish their work. Because the threads modify a single application address space in a compressed time interval, contention on locks protecting VM data structures was a problem, and we had to re-scale a number of VM locks to get good parallel efficiency. The vmtasks process has 1 thread per CPU and may accelerate multiple segment operations simultaneously, but each operation gets at most 16 helper threads to avoid monopolizing CPU resources. We may reconsider this limit in the future.

    Acceleration using vmtasks is enabled out of the box, with no tuning required, and works for all Solaris platform architectures (SPARC sun4u, SPARC sun4v, x86). The following tables show the time to create + lock + destroy a large segment, normalized as milliseconds per gigabyte, before and after the introduction of vmtasks:

        ISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1386    245       6X
        X7560     64    1016    153       7X
        M9000    512    1196    206       6X
        T5240    128    2506    234      11X
        T4-2     128    1197    107      11X

        DISM
        system  ncpu  before  after  speedup
        ------  ----  ------  -----  -------
        x4600     32    1582    265       6X
        X7560     64    1116    158       7X
        M9000    512    1165    152       8X
        T5240    128    2796    198      14X

    (I am missing the data for T4 DISM, for no good reason; it works fine.)

    The following table separates the creation and destruction times:

        ISM, T4-2
                 before  after
        -------  ------  -----
        create      702     64
        destroy     495     43

    To put this in perspective, consider creating a 512 GB ISM segment on T4-2. Creating the segment would take 6 minutes with the old code, and only 33 seconds with the new. If this is your Oracle SGA, you save over 5 minutes when starting the database, and you also save when shutting it down prior to a restart. Those minutes go directly to your bottom line for service availability.

    Read the article

  • Making WatiN Wait for JQuery document.Ready() Functions to Complete

    - by Steve Wilkes
    WatiN's DomContainer.WaitForComplete() method pauses test execution until the DOM has finished loading, but if your page has functions registered with JQuery's ready() function, you'll probably want to wait for those to finish executing before testing it. Here's a WatiN extension method which pauses test execution until that happens.

    JQuery (as far as I can see) doesn't provide an event or other way of being notified of when it's finished running your ready() functions, so you have to get around it another way. Luckily, because ready() executes the functions it's given in the order they're registered, you can simply register another one to add a 'marker' div to the page, and tell WatiN to wait for that div to exist.

    Here's the code; I added the extension method to Browser rather than DomContainer (Browser derives from DomContainer) because it's the sort of thing you only execute once for each of the pages your test loads, so Browser seemed like a good place to put it.

        public static void WaitForJQueryDocumentReadyFunctionsToComplete(this Browser browser)
        {
            // Don't try this if JQuery isn't defined on the page:
            if (bool.Parse(browser.Eval("typeof $ == 'function'")))
            {
                const string jqueryCompleteId = "jquery-document-ready-functions-complete";

                // Register a ready() function which adds a marker div to the body:
                browser.Eval(
                    @"$(document).ready(function() { " +
                    @"$('body').append('<div id=""" + jqueryCompleteId + @""" />'); " +
                    @"});");

                // Wait for the marker div to exist or make the test fail:
                browser.Div(Find.ById(jqueryCompleteId))
                    .WaitUntilExistsOrFail(10, "JQuery document ready functions did not complete.");
            }
        }

    The code uses the Eval() method to send JavaScript to the browser to be executed; first to check that JQuery actually exists on the page, then to add the new ready() method. WaitUntilExistsOrFail() is another WatiN extension method I've written (I've ended up writing really quite a lot of them) which waits for the element on which it is invoked to exist, and uses Assert.Fail() to fail the test with the given message if it doesn't exist within the specified number of seconds. Here it is:

        public static void WaitUntilExistsOrFail(this Element element, int timeoutInSeconds, string failureMessage)
        {
            try
            {
                element.WaitUntilExists(timeoutInSeconds);
            }
            catch (WatinTimeoutException)
            {
                Assert.Fail(failureMessage);
            }
        }

    Read the article

  • Visual Web Part as a Sandboxed solution

    - by Steve Clements
    You want the RAD wonderfulness of a Visual Web Part, but it needs to be deployed as a sandboxed solution. Problem? No - SharePoint Power Tools for Visual Studio to the rescue! http://goo.gl/pQ9ct There are a couple of limitations (read the above page), but nothing major, e.g.:

    1. JavaScript debugging is not supported
    2. Debugging ASP.NET code is not supported
    3. Use of <% Assembly Src= is not supported

    I understand it does this by adding the markup as an embedded resource, but I haven't actually tried it yet! More to come!

    Read the article

  • 12.10 Grub-customizer error

    - by SteveK
    I am trying to use grub-customizer in 12.10, which ran in 12.04. I now get the error: grub-mkconfig couldn't be executed successfully. Error message:

        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.5.0-18-generic
        Found initrd image: /boot/initrd.img-3.5.0-18-generic
        Found linux image: /boot/vmlinuz-3.2.0-32-generic-pae
        Found initrd image: /boot/initrd.img-3.2.0-32-generic-pae
        Found linux image: /boot/vmlinuz-3.5.0-18-generic
        Found initrd image: /boot/initrd.img-3.5.0-18-generic
        Found linux image: /boot/vmlinuz-3.2.0-32-generic-pae
        Found initrd image: /boot/initrd.img-3.2.0-32-generic-pae
        Found memtest86+ image: /boot/memtest86+.bin
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows Recovery Environment (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2

    I have removed and reinstalled it to no avail.

        steve@steve-Ubuntu:~$ grub-mkconfig --version
        grub-mkconfig (GRUB) 2.00-7ubuntu11

    I also noticed that the file device.map does not exist, but read in other forums that it is not used in 12.10. Help please!

    Read the article

  • Facebook Like button warning, but prevents sharing

    - by Steve
    I added a Facebook Like button to my WordPress template, and when I click Like, I receive an error, which pops up and says: "There was an error liking the page. If you are the page owner, please try running your page through the linter on the Facebook devsite (https://developers.facebook.com/tools/lint/) and fixing any errors." The Lint Checker now gives no errors or warnings, after I removed a duplicate og:description tag. Why is it not working?

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
    So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, Identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. The following steps describe how heterogeneous connections are created:

    1. At connection pool initialization, the physical JDBC connections based on the configured or default "initial capacity" are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from a data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential is used from the data source descriptor.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
       - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
       - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and destroying a connection and getting a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential, even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling. See "Enable identity-based connection pooling for a JDBC data source" (http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html) in Oracle WebLogic Server Administration Console Help.

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:

    - You must configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not allow access to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection with a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the values of username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
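    A minimal sketch of what this looks like from the application side (assuming "use database credentials" is enabled; the JNDI name and database users are illustrative):

        import java.sql.Connection;
        import java.sql.SQLException;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class IdentityPoolingExample {
            public static void run() throws NamingException, SQLException {
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/identityDS");
                // Each pair below reserves (or creates) a physical connection owned
                // by that database user, so the pool becomes heterogeneous.
                try (Connection asScott = ds.getConnection("scott", "tiger")) {
                    // work here runs as database user "scott"
                }
                try (Connection asHr = ds.getConnection("hr", "hr_password")) {
                    // work here runs as database user "hr"
                }
            }
        }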

    Read the article

  • What is a squeeze page?

    - by Steve
    I've been reading a marketing book which suggests building a squeeze page to build an email list. Does this mean one of those long sales-letter-type pages with crummy styling? I'm assuming the styling does not have to be generic, or does it? Or, if the sales letter is not a squeeze page, what is a squeeze page? Is there an easy way to build one, and what considerations should be taken into account when building one?

    Read the article

  • Data Source Security Part 2

    - by Steve Felts
    In Part 1, I introduced the default security behavior and listed the various options available to change that behavior. One of the key topics to understand is the difference between directly using database user and password values versus mapping from a WLS user and password to the associated database values. The direct use of database credentials is relatively new to WLS, based on customer feedback. Some of the trade-offs are covered in this article.

    Credential Mapping vs. Database Credentials

    Each WLS data source has a credential map that is a mechanism used to map a key, in this case a WLS user, to security credentials (user and password). By default, when a user and password are specified when getting a connection, they are treated as credentials for a WLS user, validated, and converted to a database user and password using a credential map associated with the data source. If a matching entry is not found in the credential map for the data source, then the user and password associated with the data source definition are used. Because of this defaulting mechanism, you should be careful what permissions are granted to the default user. Alternatively, you can define an invalid default user to ensure that no one can accidentally get through (in this case, you would need to set the initial capacity for the pool to zero so that the pool is populated only by valid users).

    To create an entry in the credential map:

    1. First create a WLS user. In the administration console, go to Security realms, select your realm (e.g., myrealm), select Users, and select New.
    2. Second, create the mapping. In the administration console, go to Services, select Data sources, select your data source name, select Security, select Credentials, and select New. See http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureCredentialMappingForADataSource.html for more information.

    The advantages of using the credential mapping are that: 1) you don't hard-code the database user/password into a program or need to prompt for it in addition to the WLS user/password, and 2) it provides a layer of abstraction between WLS security and database settings such that many WLS identities can be mapped to a smaller set of DB identities, thereby only requiring middle-tier configuration updates when WLS users are added/removed.

    You can cut down the number of users that have access to a data source to reduce the user maintenance overhead. For example, suppose that a servlet has the one pre-defined, special WLS user/password for data source access, hard-wired in its code in a getConnection(user, password) call. Every WebLogic user can reap the specific DBMS access coded into the servlet, but none has to have general access to the data source. For instance, there may be a 'Sales' DBMS which needs to be protected from unauthorized eyes, but it contains some day-to-day data that everyone needs. The Sales data source is configured with restricted access, and a servlet is built that hard-wires the specific data source access credentials in its connection request. It uses that connection to deliver only the generally needed day-to-day information to any caller. The servlet cannot reveal any other data, and no WebLogic user can get any other access to the data source. This is the approach that many large applications take and is the reasoning behind the default mapping behavior in WLS.

    The disadvantages of using the credential map are that: 1) it is difficult to manage (create, update, delete) with a large number of users, though it is possible to use WLST scripts or a custom JMX client utility to manage credential map entries, and 2) you can't share a credential map between data sources, so they must be duplicated.

    Some applications prefer not to use the credential map. Instead, the credentials passed to getConnection(user, password) should be treated as database credentials and used to authenticate with the database for the connection, avoiding going through the credential map. This is enabled by setting "use-database-credentials" to true. See "Configure Oracle parameters" (http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/ConfigureOracleParameters.html) in Oracle WebLogic Server Administration Console Help.

    Use Database Credentials is not currently supported for Multi Data Source configurations. When enabled, it turns off credential mapping on Generic and Active GridLink data sources for the following attributes:

    1. identity-based-connection-pooling-enabled (this interaction is available by patch in 10.3.6.0)
    2. oracle-proxy-session (this interaction is first available in 10.3.6.0)
    3. set client identifier (this interaction is available by patch in 10.3.6.0). Note that in the data source schema, the set client identifier feature is poorly named "credential-mapping-enabled". The documentation and the console refer to it as Set Client Identifier.

    To review the behavior of credential mapping and using database credentials:

    - If using the credential map, there needs to be a mapping for each WLS user to a database user for those users that will have access to the database; otherwise the default user for the data source will be used. If you always specify a user/password when getting a connection, you only need credential map entries for those specific users.
    - If using database credentials without specifying a user/password, the default user and password in the data source descriptor are always used. If you specify a user/password when getting a connection, that user will be used for the credentials. WLS users are not involved at all in the data source connection process.

    Read the article

  • Can Ubuntu 13.04 be loaded from a flash drive (USB)?

    - by Steve Shaw
    I want to do a split PC: half Windows XP, half Ubuntu 13.04. I want to use the Linux side for internet surfing and for watching YouTube, Crackle, and Hulu videos. My PC is an older Dell C521: 1.87 GHz, 1.5 GB RAM, 32-bit, 80 GB HD. Will this be better than my present slow, slow, slow Windows XP? I need it for internet mostly. I would consider dumping Windows XP later on if I get the hang of the Linux distro... any help appreciated. Thanks!

    Read the article
