Search Results

Search found 296 results on 12 pages for 'jeffrey knight'.

Page 3/12 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Contract Lifecycle Management for Public Sector

    - by jeffrey.waterman
    One thing Oracle never seems to get enough credit for is its consistent quest to improve its products, even ones as established as its back-office solutions. Here is another example of one of the latest improvements: Contract Lifecycle Management for Public Sector, or CLM - the latest EBS module geared specifically for the Federal acquisition community. Our existing customers have been asking Oracle for years to upgrade its Advanced Procurement Suite to meet the complex procurement processes of the Federal Government. You asked; we listened. Oracle, with direct input from Federal agencies, subject matter experts, integration partners, and the Federal acquisition community, has expanded and deepened its procurement suite to meet the unique demands of the Federal acquisition community. New benefits/features include:
    - Contract Line Item/Sub-Line Item (CLIN/SLIN) structures
    - Configurable Document Numbering
    - Complex Pricing
    - Contract Types (as per FAR Part 16)
    - Option lines and exercising of options
    - Incremental Funding capability
    - Support for multiple document types (delivery orders, BPA call orders, awards, agreements, IDIQ contracts)
    - Requisition lines to fund modifications
    - Workload assignment and milestones
    - Contract Action Reporting to FPDS-NG
    I've been conducting many tests over the past few months and have been quite impressed with the depth of features and the seamless integration with Federal Financials, specifically the funds control within the financials. Again, thank you for reading. If you have suggestions for future posts, please leave them in the comments section and I'll take it from there.

    Read the article

  • Sprites, Primitives and logic entity as structs

    - by Jeffrey
    I'm wondering whether this would be considered acceptable: the Window class is responsible for drawing data, so it will have methods
        Window::draw(const Sprite&);
        Window::draw(const Rect&);
        Window::draw(const Triangle&);
        Window::draw(const Circle&);
    and all those primitives + sprites would be just public structs. For example, Sprite:
        struct Sprite {
            float x, y;               // center
            float origin_x, origin_y;
            float width, height;
            float rotation;
            float scaling;
            GLuint texture;

            Sprite(float w, float h);
            Sprite(float w, float h, float a, float b);
            void useTexture(std::string file);
            void setOrigin(float a, float b);
            void move(float a, float b);    // relative move
            void moveTo(float a, float b);  // absolute move
            void rotate(float a);           // relative rotation
            void rotateTo(float a);         // absolute rotation
            void rotationReset();
            void scale(float a);            // relative scaling
            void scaleTo(float a);          // absolute scaling
            void scaleReset();
        };
    So instead of having each primitive call its own draw() function, which is a little bit off topic for that object, I let the Window class handle all the OpenGL stuff and manipulate the primitives as simple objects that will be drawn later on. Is this pattern used? Does it have any cons compared to the primitives-draw-themselves pattern? Are there any other related patterns?
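    Purely as an illustration of the pattern being asked about here (a renderer that draws passive data objects, rather than objects that draw themselves), the following compilable toy sketch may help frame the question; the Window internals are stubbed with console output and are not taken from the original code.
        // Toy sketch of the "renderer draws passive data" pattern.
        // The draw() bodies are stubs; only the overload structure matters here.
        #include <iostream>

        struct Rect   { float x, y, w, h; };
        struct Circle { float x, y, radius; };

        class Window {
        public:
            // All rendering knowledge lives in the Window; shapes stay plain data.
            void draw(const Rect& r)   { std::cout << "quad at "   << r.x << ',' << r.y << '\n'; }
            void draw(const Circle& c) { std::cout << "circle at " << c.x << ',' << c.y << '\n'; }
        };

        int main() {
            Window window;
            Rect   ground{0.0f, 500.0f, 800.0f, 100.0f};
            Circle ball{400.0f, 300.0f, 16.0f};

            window.draw(ground);   // callers never touch the graphics API directly
            window.draw(ball);
        }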

    Read the article

  • Why Does Adding a UDF or Code Truncate the # of Resources in the List?

    - by Jeffrey McDaniel
    Go to the Primavera - Resource Assignment History subject area. Go under Resources, General and add the fields Resource Id, Resource Name and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for all resources. Current flag = 1 means this is the most up to date row for the resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension. (Query from the nqquery log)
        select distinct 0 as c1,
             D1.c1 as c2,
             D1.c2 as c3,
             D1.c3 as c4
        from
             (select distinct T10745.CURRENT_FLAG as c1,
                      T10745.RESOURCE_ID as c2,
                      T10745.RESOURCE_NAME as c3
                 from
                      W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
                 where ( T10745.LAST_RUN_PER_DAY_FLAG = 1 )
             ) D1
    If you add a resource code to the query, it now forces the OBI server to include data from W_RESOURCE_HD, W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because the Resource and Resource Codes are in different dimensions they must be joined through a common fact table. So any time you are pulling data from different dimensions it will ALWAYS pass through the fact table in that subject area. One rule is that if there is no fact value related to that dimensional data then nothing will show. In this case, if you have a list of 100 resources when you query just Resource Id, Resource Name and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts.
    Here is a look at the query returned from the OBI server when trying to query Resource Id, Resource Name, Current Flag and a Resource Code. You'll see in the query there is an actual fact included (AT_COMPLETION_UNITS) even though it is never returned when viewing the data through the Analysis.
        select distinct 0 as c1,
             D1.c2 as c2,
             D1.c3 as c3,
             D1.c4 as c4,
             D1.c5 as c5,
             D1.c1 as c6
        from
             (select sum(T10754.AT_COMPLETION_UNITS) as c1,
                      T10706.CODE_VALUE_02 as c2,
                      T10745.CURRENT_FLAG as c3,
                      T10745.RESOURCE_ID as c4,
                      T10745.RESOURCE_NAME as c5
                 from
                      W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
                      W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
                      W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
                 where ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID and T10706.LAST_RUN_PER_DAY_FLAG = 1 and T10745.ROW_WID = T10754.RESOURCE_WID and T10745.LAST_RUN_PER_DAY_FLAG = 1 and T10754.LAST_RUN_PER_DAY_FLAG = 1 )
                 group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
             ) D1
        order by c4, c5, c3, c2
    When querying any subject area and you cross different dimensions, especially Type II slowly changing dimensions, if the result set appears to be short the first place to look is whether that object has associated facts.

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0 the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs or those deletes would be lost. After the ETL process is run the refrdel can be cleared out. It is important to keep any purging of the refrdel on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup against ETL runs no longer applies as of this release. In the Extended Schema tables (for example, TaskX) there can still be deleted data present. The Extended Schema views join on the primary PMDB tables (for example, Task) and filter out any deleted data. Any data that was deleted but remains in the Extended Schema tables can be cleaned out at a designated time by running the cleanup procedure documented in the P6 Extended Schema white paper. This can be run occasionally but does not need to run often unless large amounts of data have been deleted.

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine if the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process. The log file can be used as a validation of process completion. To automate the monitoring of these log files, perform the following steps (a minimal sketch of such a check appears after the query below):
    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases a major [ERROR] could cause the ETL process to fail, so finding this value in the log is worthy of an alert.
    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be a good cause for an alert.
    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could be different based on Reporting Database version):
        [INFO] (Message) Finished Writing Report
    4. You could write an Ant script to execute the ETL process with failonerror="true" set, and from there send the results to an external tool that monitors the jobs, to email, or to a database.
    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.
    Another way to determine if the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records entries for the ETL run along with a start and finish time. If the ETL process has failed the finish date will be null, so this table can be queried at a time when the ETL process is expected to be finished and an alert sent if the value is null. These are just some options; there are additional ways this can be accomplished based around these two areas - log files or the database.
    Here is an additional query to gather more information about your ETL run (connect as Staruser):
        SELECT SYSDATE,
               test_script,
               decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc+1))) PROCESSNAME,
               duration duration
        from (
            select (e.endtime - b.starttime) * 1440 duration,
                   to_char(b.starttime, 'hh24:mi:ss') starttime,
                   to_char(e.endtime, 'hh24:mi:ss') endtime,
                   b.PROCESSNAME,
                   instr(b.PROCESSNAME, ']') loc,
                   b.infotype test_script
            from (
                select processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                from etl_processinfo
                where processid = (select max(PROCESSID) from etl_processinfo)
                  and infotype = 'BEGIN'
            ) b
            inner join (
                select processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                from etl_processinfo
                where processid = (select max(PROCESSID) from etl_processinfo)
                  and infotype = 'END'
            ) e
              on b.processid = e.processid
             and b.PROCESSNAME = e.PROCESSNAME
            order by b.starttime)
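    As a rough, self-contained sketch of steps 1 through 3 above, here is one way such a log check could look in C++; the log file name, the "Step 39/39" marker and the success string are taken from the examples in this post and would need to be adjusted for your installation and Reporting Database version.
        // Minimal ETL log check: flag [ERROR] lines, a missing final step,
        // and a last line that does not report a successful finish.
        #include <fstream>
        #include <iostream>
        #include <string>

        int main() {
            const std::string logPath   = "staretlprocess.log";      // assumed name/location
            const std::string finalStep = "Step 39/39";              // version dependent
            const std::string success   = "Finished Writing Report";

            std::ifstream log(logPath);
            if (!log) {
                std::cerr << "ALERT: cannot open " << logPath << '\n';
                return 1;
            }

            bool sawError = false, sawFinalStep = false;
            std::string line, lastLine;
            while (std::getline(log, line)) {
                if (line.find("[ERROR]") != std::string::npos) sawError = true;       // step 1
                if (line.find(finalStep) != std::string::npos) sawFinalStep = true;   // step 2
                if (!line.empty()) lastLine = line;
            }

            const bool finishedOk = lastLine.find(success) != std::string::npos;      // step 3
            if (sawError || !sawFinalStep || !finishedOk) {
                std::cerr << "ALERT: ETL run looks incomplete or failed\n";
                return 1;   // a non-zero exit code can drive an external alerting tool
            }
            std::cout << "ETL run completed successfully\n";
            return 0;
        }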

    Read the article

  • Unable to download Microsoft Excel files from an IIS SSL site

    - by Jeffrey
    The web master at my corporation added SSL to the web site and now none of my users can download the Microsoft Word and Excel files the site generates. According to Microsoft, the following must be done: web sites that want to allow this type of operation should remove the no-cache header or headers. Typical of MS, they don't tell you what to do, how to do it, or what the best practice is. The web master says it's a web.config setting, but all I can find is
        <configuration>
          <appSettings/>
          <connectionStrings/>
          <system.web>
            <httpRuntime sendCacheControlHeader="false"/>
    and I don't know if this is the best way to achieve the result. I would greatly appreciate some advice on this subject.

    Read the article

  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for example for the world basement and walls, we need to create the actual rendered walls (shown to the player) and then duplicate them, place the duplicates at the same coordinates as the rendered walls, and call them collision (by setting their material to collision). The Object loader function then treats those objects with material == collision as collision planes: they are not rendered but only used to check collisions. Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision), which would:
    - create only the walls object (by loading the .obj file given in plane_object)
    - also define it as a collision plane whenever check_collision is set to true
    In this case we have lowered the complexity of his method and made it more flexible and faster to develop (faster because we don't always have to make a copy of each plane, and flexible because we don't hardcode the Object loader). The only case in which this method could not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false), which would:
    - create only the walls object (by loading the .obj file given in plane_object)
    - also define it as a collision plane whenever check_collision is set to true
    - and add the ability to actually show the object or hide it based on hide_object
    The final question is: am I right? What possible problems would be encountered with my solution versus his?
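    To make the comparison concrete, here is a minimal, self-contained sketch of the single-load approach proposed above; the LoadedObject structure, the stubbed .obj parsing and the render/collision passes are hypothetical and only illustrate how the two flags could replace duplicated collision geometry.
        // Hypothetical sketch of loadObject() with collision/visibility flags.
        // The .obj parsing is stubbed out; only the flag handling is shown.
        #include <string>
        #include <vector>

        struct LoadedObject {
            std::vector<float> vertices;   // triangle soup, 3 floats per vertex
            bool collidable;               // used by the collision pass
            bool visible;                  // used by the render pass
        };

        std::vector<LoadedObject> g_objects;

        void loadObject(const std::string& plane_object,
                        bool check_collision = true,
                        bool hide_object     = false)
        {
            LoadedObject obj;
            // ... parse the .obj file named by plane_object into obj.vertices ...
            obj.collidable = check_collision;
            obj.visible    = !hide_object;
            g_objects.push_back(obj);
        }

        void renderPass() {
            for (const LoadedObject& obj : g_objects)
                if (obj.visible) { /* submit obj.vertices to the renderer */ }
        }

        void collisionPass() {
            for (const LoadedObject& obj : g_objects)
                if (obj.collidable) { /* test the player against obj.vertices */ }
        }

        int main() {
            loadObject("walls.obj");                 // rendered and collidable
            loadObject("trigger.obj", true, true);   // hidden collision plane
            renderPass();
            collisionPass();
        }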

    Read the article

  • Removing mdadm array and converting to regular disks while preserving data

    - by Jeffrey Kevin Pry
    I have a 6 disk (2TB each) mdadm RAID 5 volume created in Ubuntu 12.04 Server. However, I'm moving to a different solution and want to "unraid" my disks but keep the data. Only 50% is in use. From what I can surmise I basically have to do this repeatedly for each physical disk:
    1. Fail the disk
    2. Format the failed disk
    3. Move a portion of files to the new disk
    4. Reshape the array
    5. Shrink the logical volume md0
    This seems like a very time consuming process. Is there an easier way to do this (automatically perhaps) without buying new disks to temporarily hold the data? I am also aware that during this process my RAID volume will be degraded and vulnerable the entire time. I am not too concerned about this and will be using battery backup and moving the most important files off first. Thank you for your help!

    Read the article

  • What functionality should I use in OpenGL 2.0?

    - by Jeffrey
    Considering OpenGL 2.1, we all know that glBegin and glEnd are the devil. Should I use only VBOs to render 3D primitives (I can't find VAOs in that version - weren't they there already?)? Should I still use the matrix stack (why not?)? Should I still use glFrustum? Can I take advantage of shaders in GLSL 1.20? Where can I find a tutorial for VBOs in OpenGL 2.1 and the "correct" way of programming with it? Also, how am I supposed to animate something, like a cube moving around an object or a player moving in the scene (static VBO data + shader?)? Note: take your time to answer this question; I'll accept an answer tomorrow.
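    For reference, here is a minimal VBO sketch restricted to calls that exist in the OpenGL 1.5/2.1 core (VAOs only arrived in OpenGL 3.0, which matches the observation above); it assumes a context has already been created with something like SDL or GLUT, and on some platforms these entry points must be loaded through GLEW or a similar loader. Animating a static VBO is then typically a matter of leaving the vertex data alone and changing the modelview matrix, or a uniform in a GLSL 1.20 shader, each frame.
        // Minimal OpenGL 2.1-era VBO usage: create the buffer once, draw it every frame.
        // Assumes an existing GL context; header names and entry-point loading vary by platform.
        #include <GL/gl.h>

        GLuint vbo = 0;

        void createTriangle() {
            const GLfloat vertices[] = {   // x, y pairs
                -0.5f, -0.5f,
                 0.5f, -0.5f,
                 0.0f,  0.5f,
            };
            glGenBuffers(1, &vbo);                       // core since OpenGL 1.5
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }

        void drawTriangle() {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glEnableClientState(GL_VERTEX_ARRAY);        // pre-3.0 attribute setup
            glVertexPointer(2, GL_FLOAT, 0, 0);          // 2 floats per vertex, tightly packed
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glDisableClientState(GL_VERTEX_ARRAY);
            glBindBuffer(GL_ARRAY_BUFFER, 0);
        }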

    Read the article

  • Partitioning Strategies for P6 Reporting Database

    - by Jeffrey McDaniel
    Prior to P6 Reporting Database version 3.2 sp1, range partitioning was used. This was applied only to the history tables. The ranges were defined during installation, and additional ranges would need to be added once your date range entered the final defined range. As of P6 Reporting Database version 3.2 sp1, interval partitioning was implemented. Interval partitioning was applied to the existing History table as well as the Slowly Changing Dimension tables. One of the major advantages of interval partitioning is that there is no more manual addition of ranges. Interval partitioning automatically creates partitions for the defined interval when data is inserted into the table and it exceeds the existing partitions. In 3.2 sp1 there are steps on how to update your partitioning. For all versions after 3.2 sp1, interval partitioning is the only partitioning option used. When upgrading it is important to be aware of these changes. Here is a link with more information on partitioning - the types and the advantages: http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm

    Read the article

  • Example of DOD design

    - by Jeffrey
    I can't seem to find a nice explanation of Data Oriented Design for a generic zombie game (it's just an example, a pretty common one). Could you give an example of Data Oriented Design applied to creating a generic zombie class? Is the following good? Zombie list class:
        class ZombieList {
            GLuint vbo;                        // generic zombie vertex model
            std::vector<color> colors;         // object default colors
            std::vector<texture> textures;     // object textures
            std::vector<vector3D> positions;   // object positions
        public:
            unsigned int create();             // returns object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D p);
            void setTexture(unsigned int, unsigned int);
            ...
            void update(Player*);              // move towards player, attack if near
        }
    Example:
        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        ...
        while (running) { // main loop
            ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }
    Or would creating a generic World container be a good example - one that contains every action from bite(zombieId, playerId) to moveTo(playerId, vector) to createPlayer() to shoot(playerId, vector) to face(radians)/face(vector), and contains:
        std::vector<zombie>
        std::vector<player>
        ...
        std::vector<mapchunk>
        ...
        std::vector<vbobufferid> player_run_animation;
        ...
    What's the proper way to organize a game with DOD?
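    For contrast with the ZombieList class above, here is a small, self-contained structure-of-arrays sketch of just the update step; the field names and chase speed are made up, and it only illustrates the contiguous-data iteration that data-oriented design aims for.
        // Structure-of-arrays sketch: positions sit contiguously in memory, so the
        // per-frame update walks cache-friendly data. All names are hypothetical.
        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct ZombieData {
            std::vector<float> x, y;        // positions, one entry per zombie
            std::vector<float> health;

            std::size_t create(float px, float py) {
                x.push_back(px);
                y.push_back(py);
                health.push_back(100.0f);
                return x.size() - 1;        // the index doubles as the object id
            }

            void update(float playerX, float playerY, float dt) {
                const float speed = 1.5f;   // made-up chase speed
                for (std::size_t i = 0; i < x.size(); ++i) {
                    float dx = playerX - x[i];
                    float dy = playerY - y[i];
                    float len = std::sqrt(dx * dx + dy * dy);
                    if (len > 0.001f) {     // move each zombie toward the player
                        x[i] += dx / len * speed * dt;
                        y[i] += dy / len * speed * dt;
                    }
                }
            }
        };

        int main() {
            ZombieData zombies;
            zombies.create(50.0f, 50.0f);
            zombies.create(10.0f, 80.0f);
            zombies.update(0.0f, 0.0f, 1.0f / 60.0f);   // one simulation step
        }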

    Read the article

  • Demo of Contract Lifecycle Management at OpenWorld 2012

    - by jeffrey.waterman
    Here is information for the demo station around CLM at OpenWorld 2012. Be sure to check the main OpenWorld page for updates.
    Demo Stations - Located in Moscone West 72
    Oracle E-Business Suite Advanced Procurement - Booth W-122:
    - Purchasing and Services Procurement
    - iProcurement
    - Contract Lifecycle Management for Public Sector

    Read the article

  • Announcement: Federal Financial Briefing

    - by jeffrey.waterman
    Dear Oracle/PeopleSoft Federal Financial Management Customers: Oracle is pleased to announce that we will conduct the next Federal Financial Management Briefing on Tuesday, April 17th from 8:30 am until 2:00 pm at the Oracle Campus in Reston, Virginia. The Registration Link and Agenda can be found at the web site below: Federal Financial Briefing: Register Here Directions to Oracle Reston: From the Beltway take the Dulles Toll Road (Route 267 West). Do not get on the Dulles Access Road or you will not be able to exit until you get to the airport. Take the Reston Parkway Exit (Exit 12). At the end of the exit ramp, turn right onto Reston Parkway. Make your first right onto Sunset Hills Road. Take a Right turn onto Oracle Way and park in Visitor Parking. The receptionist will direct you to the CAB.

    Read the article

  • Plan for your OpenWorld experience

    - by jeffrey.waterman
    Here is a partial list of the events which will take place at Oracle OpenWorld 2012. Please take time out of your conference activities to get to these important, and informative, events.
    Attend the sessions:
    - General Session: Public Sector - Wednesday, 3 October, 10:15 a.m. - 11:15 a.m., Westin San Francisco Market Street - Metropolitan III
    - Oracle Exadata, Oracle Exalogic, Oracle Exalytics, and Big Data Solutions in the Public Sector - Wednesday, 3 October, 11:45 a.m. - 12:45 p.m., Westin San Francisco Market Street - City Room
    - Best Practices in the Use of Middleware for Information Sharing Across Agencies - Wednesday, 3 October, 1:15 p.m. - 2:15 p.m., Westin San Francisco Market Street - City Room
    - Upgrading PeopleSoft Applications in the Public Sector - Wednesday, 3 October, 1:15 p.m. - 2:15 p.m., Westin San Francisco Market Street - Franciscan I
    - Shared Services in Public Sector Organizations - Wednesday, 3 October, 3:30 p.m. - 4:30 p.m., Westin San Francisco Market Street - City Room
    - Achieving Agility Through Closed-Loop Oracle Policy Automation - Wednesday, 3 October, 5:00 p.m. - 6:00 p.m., Westin San Francisco Market Street - Franciscan I
    - The Value of Oracle E-Business Suite in the Public Sector - Wednesday, 3 October, 5:00 p.m. - 6:00 p.m., Westin San Francisco Market Street - City Ballroom
    Public Sector Reception: Monday, 1 October, 6:30 p.m. - 9:30 p.m., Jillian's, 101 Fourth Street

    Read the article

  • Use of list-unsubscribe to improve inbox delivery

    - by Jeffrey Simon
    To overcome email being classified as spam by Gmail, Google recommends a number of steps, which we have implemented (namely SPF, DKIM, and Precedence: bulk). One additional measure they recommend at https://support.google.com/mail/bin/answer.py?hl=en&answer=81126#authentication reads as follows: "Because Gmail can help users automatically unsubscribe from your email, we strongly recommend the following: Provide a 'List-Unsubscribe' header which points to an email address where the user can unsubscribe easily from future mailings (Note: This is not a substitute method for unsubscribing)." Documentation for List-Unsubscribe is found at http://www.list-unsubscribe.com/. From this documentation I expect a button to be provided by a supporting mail client. I have tested the 'List-Unsubscribe' header and it does not appear to provide the button. I have tested in both Gmail and OS X Mail, with an http address alone and with both an email address and an http address. The format of the header is as follows:
        List-Unsubscribe: <mailto:[email protected]>, <http://domain.com/member/unsubscribe/[email protected]?id=12345N>
    No button appears in any test. My questions: How widely is List-Unsubscribe supported? Should a button be appearing somewhere, or does something else have to be present? I have seen a comment that even if the button is not present, services like Gmail, Yahoo, and Hotmail/Windows Live would give higher regard to email carrying the header, so it might be worthwhile for this aspect alone. Please note that our standard email footer already contains instructions and a link to allow unsubscribing from our email. Finally, is it worthwhile to implement this header? (That is, are there any downsides?)

    Read the article

  • How does activity id affect calculations such as schedule % complete when using a baseline?

    - by Jeffrey McDaniel
    Fields such as schedule % complete, planned value costs, etc. that use a baseline to help determine their value depend on the activity ids matching between the baseline project and the current project. If the activity id is changed, the link is broken. In the P6 power client there is an internal guid that allows you to change the activity id in either the baseline or the current project and still have these values related. In the P6 Reporting Database the activity id is used as the joining characteristic to match activities between a baseline project and a current project.

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse, preventing future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source.
    Considerations:
    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.
    2. Number of data source ids for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. This setting requires some evaluation prior to installation as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but along came a 3rd data source. The necessary steps to accommodate this change are as follows:
    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for partition. You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and how to adjust partition ranges.)
    b) Run starETL -r. This recreates each table with the new partition key. The effect of this step is that all table data will be lost except for history related tables.
    c) Run starETL for each of the 3 data sources, with the data source # (starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).
    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.
    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings apply to a sub-partitioning of larger tables with regard to time-related data, and they are dataset specific for optimization. The number of months per partition is self explanatory; optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created per the number of months setting.
    For example, if you kept the default of 3 partitions and selected 2 months for each partition, you would end up with:
    - 1st partition: 2 months
    - 2nd partition: 2 months
    - 3rd partition: all the remaining records
    Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time - W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day such as calendar year, month, week, quarter, etc., allowing it to be used in the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is also based on the same set date range.
    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are related to the defined date range, which is a start date and a rolling interval plus a certain range - generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near real time reporting in P6. This same date range is validated and used for the P6 Reporting Database. The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 will be the min date because we always back fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run the Reporting Database picks up this change and also adjusts the max date on W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limit may not be adjusted until a change occurs to cause a recalculation, but based on general system usage these dates in these tables will progress forward with the rolling intervals.
    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data after that 3 year date range will be bucketed into the last day in the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now. This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally this amount of granularity years in the future is not needed. Remember, all those values 5, 10, 15, 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.

    Read the article

  • How are vertex shader outs sent as inputs to the fragment shader?

    - by Jeffrey
    I'm learning the OpenGL 3.2 way of doing things and I think it's quite great; I'm actually understanding more about shaders and the non-fixed pipeline in 1 week than in the 2 years I spent trying to learn the OpenGL fixed pipeline functions. But here's my question: from what I think I've understood, the vertex shader is run for each vertex in the VBO. But the fragment shader is run for each pixel (is that right?), which is a huge number compared to, let's say, the 3 vertices of a triangle. Now it seems that in the vertex shader the out variables (like colors and stuff) are passed 1 to 1 to the fragment shader. But let's say that I pass the position of the vertex from the vertex shader to the fragment shader. How is all of this executed? Which vertex (A, B or C of the hypothetical triangle) is passed to each fragment, and why?
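    As background for discussing this question, here is a minimal GLSL 1.50 (OpenGL 3.2) shader pair, kept as C++ string literals; the attribute names are made up. The key point it illustrates is that no single vertex A, B or C is handed to a fragment: each per-vertex out value is interpolated across the triangle (perspective-correct barycentric interpolation by default), so every fragment receives its own blended value.
        // Minimal GLSL 1.50 pair stored as C++11 raw string literals.
        // The vertex shader writes "color" once per vertex; the rasterizer then
        // interpolates it across the triangle before the fragment shader runs.
        const char* kVertexShader = R"(
            #version 150
            in vec3 position;
            in vec3 vertexColor;
            out vec3 color;                 // written once per vertex
            void main() {
                color = vertexColor;
                gl_Position = vec4(position, 1.0);
            }
        )";

        const char* kFragmentShader = R"(
            #version 150
            in vec3 color;                  // interpolated value, not a copy of one vertex
            out vec4 fragColor;
            void main() {
                fragColor = vec4(color, 1.0);
            }
        )";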

    Read the article

  • my mouse blinks!

    - by Jeffrey
    Hello, I'm new to Linux systems and I have the same problem in every distro... the thing is that my mouse cursor blinks every time I move it or do anything with it! I think it's an xorg problem or a driver problem, and the only time it does not do that is when I start with the low-resolution error using a xorg.conf from another graphics chipset, and I think that isn't the best choice. Please help me if you can - I have an ATI Rage 128. Thanks a lot for your help, people.

    Read the article

  • Nvidia Powermizer Performance Levels

    - by jeffrey
    Is there any way to configure Nvidia PowerMizer performance levels? My current setup has 3 power levels, with the lowest one being 50 MHz. The problem with this is that it lags Compiz when it goes to the lowest performance level 0. Minimizing, maximizing, dragging windows, etc. are all sluggish when it's at the lowest level. Once PowerMizer leaves level 0 everything is very smooth and runs great. Is there any way for me to remove level 0 and just run the two higher levels, 1 and 2? I don't want to completely disable PowerMizer, but I can't stand the lagging once PowerMizer goes into performance level 0. Setting the option "prefer maximum performance" fixes the problem as it disables PowerMizer, but at the stock speed of 850 MHz the GPU is overkill for most desktop use.
    intel i5 2500k
    asus gene-z z68
    evga 560ti fpb (driver 295.40)
    ubuntu 12.04 LTS x64

    Read the article

  • "Unable to associated Elastic IP with cluster" in Eclipse Plugin Tutorial

    - by Jeffrey Chee
    Hi all, I am currently trying to evaluate AWS for my company and was trying to follow the tutorials on the web: http://developer.amazonwebservices.com/connect/entry.jspa?externalID=2241 However, I get the below error during startup of the server instance:
        Unable to associated Elastic IP with cluster: Unable to detect that the Elastic IP was orrectly associated.
        java.lang.Exception: Unable to detect that the Elastic IP was correctly associated
            at com.amazonaws.ec2.cluster.Cluster.associateElasticIp(Cluster.java:802)
            at com.amazonaws.ec2.cluster.Cluster.start(Cluster.java:311)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.launch(ElasticClusterBehavior.java:611)
            at com.amazonaws.eclipse.wtp.Ec2LaunchConfigurationDelegate.launch(Ec2LaunchConfigurationDelegate.java:47)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:853)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:703)
            at org.eclipse.debug.internal.core.LaunchConfiguration.launch(LaunchConfiguration.java:696)
            at org.eclipse.wst.server.core.internal.Server.startImpl2(Server.java:3051)
            at org.eclipse.wst.server.core.internal.Server.startImpl(Server.java:3001)
            at org.eclipse.wst.server.core.internal.Server$StartJob.run(Server.java:300)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    Then after a while, another error occurs:
        Unable to publish server configuration files: Unable to copy remote file after trying 4 times
        local file: 'XXXXXXXX/XXX.zip'
        Results from first attempt: Unexpected exception: java.net.ConnectException: Connection timed out: connect
        root cause: java.net.ConnectException: Connection timed out: connect
            at com.amazonaws.eclipse.ec2.RemoteCommandUtils.copyRemoteFile(RemoteCommandUtils.java:128)
            at com.amazonaws.eclipse.wtp.tomcat.Ec2TomcatServer.publishServerConfiguration(Ec2TomcatServer.java:172)
            at com.amazonaws.ec2.cluster.Cluster.publishServerConfiguration(Cluster.java:369)
            at com.amazonaws.eclipse.wtp.ElasticClusterBehavior.publishServer(ElasticClusterBehavior.java:538)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:866)
            at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
            at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
            at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    Can anyone point me to what I'm doing wrong? I followed the tutorials and the video tutorials on YouTube exactly.
    Best Regards
    ~Jeffrey

    Read the article

  • likewise-open and samba as pdc

    - by Knight Samar
    Hi, we have successfully implemented a Samba Primary Domain Controller for a hybrid Windows-Linux environment, so now I am setting up dual-boot clients with Windows XP and Ubuntu 9.10. Windows XP can be easily added to the Samba domain. Everything is manageable, no worries. But when I try using likewise-open 4.1 to add Ubuntu 9.10 to the Samba domain, it cannot locate the domain controller.
        domainjoin-cli --loglevel verbose join MYDOMAIN root
        Error: Unable to resolve DC name [code 0x00080026]
        Resolving 'MYDOMAIN' failed. Check that the domain name is correctly entered. Also check that your DNS server is reachable, and that your system is configured to use DNS in nsswitch.
    I even tried mydomain.com variations but to no avail. What am I missing? I read a document on MSDN which says that the Domain Controller creates some SRV records in the DNS server. I guess I don't have them in my BIND. Do you think that is the problem? If yes, can anyone please point out how and what SRV records need to be added. Thanks.

    Read the article

  • Uninstall Dell Wave

    - by Onion-Knight
    The image we put on our company laptops includes the Dell Wave interface for biometric login. The Wave UI increases boot time by about 5 minutes (because it loads the fingerprint database, a feature I don't use), so I'm trying to uninstall it, but with little success. There is no line item in the Add/Remove Programs menu to formally delete it, nor is there a service I can stop/remove to disable the Wave UI. I've tried looking online, but all I find instead are hits for Google Wave and virus-removal forums with HijackThis dumps that include Dell Wave records. Any ideas?

    Read the article
