Search Results

Search found 429 results on 18 pages for 'accounting'.

Page 16 of 18

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
    Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include:
    - Boxed cosmetics sets are more popular during Christmas. They sell at other times of the year, but they rise higher than other products during the holiday season.
    - Interest in a promotion rises around the time its TV advertising airs.
    - Interest in the Sports section of a newspaper rises when there is a big football match.
    There are several ways of dealing with seasonality in predictions.
    Time Windows. If the length of the model time window is short enough relative to the seasonality effect, then the models will only see data from the current season, and will therefore be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that the time window is significantly shorter than the season changes, and that there is enough volume of data in the short time window to produce an accurate model. An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas.
    Input Data. If available, it is possible to include the seasonality effect in the input data for the model. For example, the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion - and, by the way, learn about inter-promotion cross feeding - by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates whether the promotion is being advertised. For inputs to properly represent the effect in the model it is necessary that the model sees enough events with the input present, for example by virtue of the model lifetime (or time window) being long enough to see several "seasons", or by having enough volume for the model to learn seasonality quickly.
    Proportional Frequency. If we create a model that ignores seasonality, it is possible to use that model to predict how a specific person's likelihood differs from average. If we have a divergence from average, then we can transfer that divergence proportionally to the observed frequency at the time of the prediction. Definitions:
    - Ft = trailing average frequency of the event at time "t". The average is taken over a period long enough to achieve a statistically significant estimate.
    - F = average frequency as seen by the model.
    - L = likelihood predicted by the model for a specific person.
    - Lt = predicted likelihood proportionally scaled for time "t".
    If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as: Lt = L * (Ft / F). Considering that L = (L - F) + F, substituting gives Lt = [(L - F) + F] * (Ft / F), which expands to:
    (i) Lt = (L - F) * (Ft / F) + Ft
    This expression can be understood as: "the adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average times the ratio of frequencies". The formula above assumes a linear translation of the proportion.
    It is possible to generalize the formula using a factor, which we will call "a", as follows:
    (ii) Lt = (L - F) * (Ft / F) * a + Ft
    It is also possible to use a formula that does not scale the difference:
    (iii) Lt = (L - F) * a + Ft
    While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights:
    - The Cumulative Gains Chart (lift) should stay the same, as at any given time the ordering of the likelihoods for different customers is preserved.
    - If F is equal to Ft, then the formula reverts to L.
    - If Ft = 0, then Lt in (i) and (ii) is 0.
    - It is possible for Lt to be above 1.
    If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as:
    - If X is 3%, then Y is 6%.
    - If X is 11%, then Y is 22%.
    - If X is 70%, then Y is 85% - in this case we interpret "twice as likely" as "half as likely not to happen".
    Applying this reasoning to (i), for example, we would get:
    If (L < F) or (Ft < 1 / ((L/F) + 1)) then Lt = L * (Ft / F), else Lt = 1 - (F / L) + (Ft * F / L)
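    As a concrete illustration of formula (i) and the capped variant above, here is a minimal C# sketch (not part of the original article; the method names and sample numbers are illustrative only):

        using System;

        static class SeasonalScaling
        {
            // Formula (i): Lt = (L - F) * (Ft / F) + Ft, algebraically the same as L * (Ft / F).
            static double Scale(double l, double f, double ft)
            {
                return (l - f) * (ft / f) + ft;
            }

            // Capped variant: switch to the "half as likely not to happen" interpretation
            // so that the adjusted likelihood never exceeds 1.
            static double ScaleCapped(double l, double f, double ft)
            {
                if (l < f || ft < 1.0 / ((l / f) + 1.0))
                    return l * (ft / f);
                return 1.0 - (f / l) + (ft * f / l);
            }

            static void Main()
            {
                Console.WriteLine(Scale(0.06, 0.03, 0.05));       // 0.10
                Console.WriteLine(ScaleCapped(0.20, 0.10, 0.70)); // 0.85, the "70% -> 85%" case above
            }
        }

    With L/F = 2 ("twice as likely") and a seasonal frequency Ft of 70%, the capped branch returns 85%, matching the last example in the list above.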

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To reiterate: does it seem sane to break 1NF in this way? I am not looking for debugging of the code so much as for where people see problems that this might lead to.
    The Problem. In double entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.
    The Write Model. The journal entries are append only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not for this task.
    My Proposed Solution. My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );
    I would then add "table methods" to extract debits and credits for reporting purposes:
        CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END;
        $$;

        CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END;
        $$;
    Then the journal entry table (simplified for this example):
        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );
    Then a table method and check constraints:
        CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS $$
            SELECT sum(amount) FROM unnest($1.journal_lines);
        $$;

        ALTER TABLE journal_entry ADD CHECK ((journal_entry.running_total) = 0);

        ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);
    And finally we'd have a breakout trigger:
        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE plpgsql AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;
    And finally the trigger itself:
        CREATE TRIGGER je_breakout_trigger
        AFTER INSERT OR UPDATE OR DELETE ON journal_entry
        FOR EACH ROW EXECUTE PROCEDURE je_breakout();
    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?
    Standard Solutions? In getting to this point I have to say I have looked at four different current ERP solutions to this problem:
    1. Represent every line item as a debit and a credit against different accounts.
    2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use of constraint triggers in PostgreSQL.
    4. Forcing all validation solely through the app logic.
    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer transparent, and so it strikes me as being difficult to work with in the future. The second requires a series of constraints and foreign keys against self to make work, and therefore strikes me as very complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem it isn't the case that everyone agrees on the solution...

    Read the article

  • Developing Schema Compare for Oracle (Part 6): 9i Query Performance

    - by Simon Cooper
    All throughout the EAP and beta versions of Schema Compare for Oracle, our main request was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems that we had to overcome first. This post will concentrate on query execution time.
    When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.
    As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:
        -----------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |  108K |  939 |
        |   1 | SORT ORDER BY          |                        |  108K |  939 |
        |   2 | NESTED LOOPS OUTER     |                        |  108K |  938 |
        |*  3 | HASH JOIN RIGHT OUTER  |                        |  103K |  762 |
        |   4 | VIEW                   | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
        |* 20 | HASH JOIN RIGHT OUTER  |                        | 73472 |  759 |
        |  21 | VIEW                   | ALL_EXTERNAL_TABLES    |  2097 |    3 |
        |* 34 | HASH JOIN RIGHT OUTER  |                        | 39920 |  755 |
        |  35 | VIEW                   | ALL_MVIEWS             |    51 |    7 |
        |  58 | NESTED LOOPS OUTER     |                        | 39104 |  748 |
        |  59 | VIEW                   | ALL_TABLES             |  6704 |  668 |
        |  89 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2025 |    5 |
        | 106 | VIEW                   | ALL_PART_TABLES        |   277 |   11 |
        -----------------------------------------------------------------------------
    And the same query on 9i:
        -----------------------------------------------------------------------------
        | Id  | Operation              | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------------
        |   0 | SELECT STATEMENT       |                        |   16P |  55G |
        |   1 | SORT ORDER BY          |                        |   16P |  55G |
        |   2 | NESTED LOOPS OUTER     |                        |   16P | 862M |
        |   3 | NESTED LOOPS OUTER     |                        | 5251G | 992K |
        |   4 | NESTED LOOPS OUTER     |                        | 4243M | 2578 |
        |   5 | NESTED LOOPS OUTER     |                        | 2669K | 1440 |
        |*  6 | HASH JOIN OUTER        |                        |  398K |  302 |
        |   7 | VIEW                   | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW                   | ALL_MVIEWS             |    51 |   20 |
        |* 50 | VIEW PUSHED PREDICATE  | ALL_TAB_COMMENTS       |  2043 |      |
        |* 66 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_TABLES    | 1777K |      |
        |* 80 | VIEW PUSHED PREDICATE  | ALL_EXTERNAL_LOCATIONS | 1744K |      |
        |* 96 | VIEW                   | ALL_PART_TABLES        |  852K |      |
        -----------------------------------------------------------------------------
    Have a look at the cost column. 10g's overall query cost is 939, and 9i's is 55,000,000,000 (or more precisely, 55,496,472,769). It's also having to process far more data. What on earth could be causing this huge difference in query cost? After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects. From 10g, it advised that you do collect statistics on the data dictionary; for our queries, Oracle therefore knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan.
    On 9i, no statistics are present on the system tables, so Oracle has to use the Rule Based Optimizer, which turns most LEFT JOINs into nested loops. If we force 9i to use hash joins, like 10g, we get a much better plan:
        -----------------------------------------------------------------------------
        | Id  | Operation        | Name                   | Bytes | Cost |
        -----------------------------------------------------------------------------
        |   0 | SELECT STATEMENT |                        | 7587K | 3704 |
        |   1 | SORT ORDER BY    |                        | 7587K | 3704 |
        |*  2 | HASH JOIN OUTER  |                        | 7587K |  822 |
        |*  3 | HASH JOIN OUTER  |                        | 5262K |  616 |
        |*  4 | HASH JOIN OUTER  |                        | 2980K |  465 |
        |*  5 | HASH JOIN OUTER  |                        |  710K |  432 |
        |*  6 | HASH JOIN OUTER  |                        |  398K |  302 |
        |   7 | VIEW             | ALL_TABLES             |  342K |  276 |
        |  29 | VIEW             | ALL_MVIEWS             |    51 |   20 |
        |  50 | VIEW             | ALL_PART_TABLES        |  852K |  104 |
        |  78 | VIEW             | ALL_TAB_COMMENTS       |  2043 |   14 |
        |  93 | VIEW             | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
        | 106 | VIEW             | ALL_EXTERNAL_TABLES    | 1777K |   28 |
        -----------------------------------------------------------------------------
    That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however. 10g introduced a new join method - a right outer hash join (used in the first execution plan). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES table, and furthermore re-hash it for every hash join in the execution plan; this could be thousands and thousands of rows. And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.
    To try and get some ideas, we asked some Oracle performance specialists to see if they had any ideas or tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we would prefer not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend some time optimizing it to get it working on their system before they could use the product. Another approach was needed.
    All our population queries have a very specific pattern - a base table provides most of the information we need (ALL_TABLES for tables, or ALL_TAB_COLS for columns) and we do a left join to extra subsidiary tables that fill in gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join. To allow us to do this, along with various other performance improvements that could be made for the specific query pattern we were using, we read all the tables individually and do a hash join on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be very well optimized for expected real-world situations; as well as keeping on disk any row data we're not using in the hash key, we use very specific memory-efficient data structures to store all the information we need.
    This allows us to achieve a database population time that is as fast as on 10g, and even (in some situations) slightly faster, with a memory overhead of roughly 150 bytes per row of data in the result set (for schemas with 10,000 tables that means an extra 1.4MB of memory being used during population). Next: fun with the 9i dictionary views.
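    As a rough illustration of the client-side join described above (this is not the actual SCfO implementation; the collections and field values are invented for the example), a left outer hash join keyed on owner and object name can be sketched in C# like this:

        using System;
        using System.Collections.Generic;

        // Sketch of a client-side left outer hash join keyed on (owner, name):
        // hash the subsidiary rows once, then probe with each base row instead of
        // re-hashing the base table for every join in the plan.
        class ClientHashJoin
        {
            static void Main()
            {
                // Base rows (think ALL_TABLES) and subsidiary rows (think ALL_PART_TABLES).
                var baseRows = new[] { Tuple.Create("HR", "EMPLOYEES"), Tuple.Create("HR", "JOBS") };
                var partRows = new[] { Tuple.Create("HR", "EMPLOYEES", "RANGE") };

                // Build the hash table once, keyed on "OWNER.NAME".
                var partByKey = new Dictionary<string, string>();
                foreach (var p in partRows)
                    partByKey[p.Item1 + "." + p.Item2] = p.Item3;

                // Probe side: each base row is looked up once; missing keys stay null (left outer join).
                foreach (var t in baseRows)
                {
                    string partitioning;
                    partByKey.TryGetValue(t.Item1 + "." + t.Item2, out partitioning);
                    Console.WriteLine("{0}.{1} partitioning: {2}", t.Item1, t.Item2, partitioning ?? "none");
                }
            }
        }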

    Read the article

  • "Guiding" a Domain Expert to Retire from Programming

    - by James Kolpack
    I've got a friend who does IT at a local non-profit where they're using a custom web application which is no longer supported by the company that built it (out of business, support was too expensive, I'm not sure...). Development on this app started around 10+ years ago, so the technologies being harnessed are pretty out of date now - classic ASP using VBScript and SQL Server 2000. The application domain is in the realm of government bookkeeping - so even though the development team is long gone, there are often new requirements of this software.
    Enter... the domain expert. This is a middle-aged accounting whiz without much (or any?) prior development experience. He studied the pages, code and queries and learned how to ape the style of the original team which, believe me, is mediocre at best. He's very clever and very tenacious but has no experience in software beyond what he's picked up from this app. Otherwise, he's a pleasant guy to talk to and definitely knows his domain.
    My friend in IT, and probably his superiors in the company, want him out of the code. They view him as wasting his expertise on coding tasks he shouldn't be doing. My friend got me involved with a few small contracts which I handled without much problem - other than somewhat of a communication barrier with the domain expert. He explained the requirements very quickly, assuming prior knowledge of the domain which I do not have. This is partially his normal style, and I think maybe a bit of resentment of my involvement. So, I think he feels like the owner of the code and has entrenched himself in a development position.
    So... his coding technique. One of his latest endeavors was to make a page that only he could reach (theoretically - the security model for the system is wretched) where he can enter a raw SQL query, run it, and save the query to run again later. A report that I worked on had been originally implemented by him using 6 distinct queries, 3 or 4 temp tables to coordinate the data between the queries, and the final result obtained by importing the data from the final query into Access and doing a pivot and some formatting. It worked - well, some of the results were incorrect - but at what a cost! (I implemented the report in a single query with at least 1/10th the amount of code.) He edits code in Notepad. He doesn't seem to know about online reference material for the languages.
    I recently read an article on Dr. Dobb's titled "What Makes Bad Programmers Different" - and instantly thought of our domain expert. From the article:
    - Their code is large, messy, and bug laden.
    - They have very superficial knowledge of their problem domain and their tools.
    - Their code has a lot of copy/paste and they have very little interest in techniques that reduce it.
    - They fail to account for edge cases, while inefficiently dealing with the general case.
    - They never have time to comment their code or break it into smaller pieces.
    - Empirical evidence plays no role in their decisions.
    5.5 out of 6. My friend is wanting me to argue the case to their management - specifically, I got this email from their manager to respond to: "...Also, I need to talk to you about what effect there is from Domain Expert continuing to make edits to the live environment. If that is a problem for you I need to know so I can have his access blocked. Some examples would help." In my opinion, from a technical standpoint, it's dangerous to have him making changes without any oversight.
    On the other hand, I'm just doing one-off contracts at this point and don't have much desire to get involved deeply enough that I'm essentially arguing as one of the Bobs from Office Space. I'd like to help my friend out - but I feel like I'm getting in the middle of a political battle. More importantly - if I do get involved and suggest that his editing privileges be removed, it needs to be handled carefully so that he doesn't feel belittled. He is beyond a doubt the foremost expert on this system. I'm hoping this is familiar territory for some other Stack Exchangers, because I'm feeling a little bewildered. How should I respond? Should I argue that he shouldn't be allowed to touch the code? Should I phrase it as "no single developer, no matter how experienced, should be working on production code unchecked"? Should I argue to keep him involved with the code, but with a review process? Should I say "glad I could help, but uh, I'm busy now!"? Other options? Thanks a bunch!

    Read the article

  • Detect Client Computer name when an RDP session is open

    - by Ubiquitous Che
    Hey all, my manager has pointed out to me a few nifty things that one of our accounting applications can do because it can load different settings based on the machine name of the host and the machine name of the client when the package is opened in an RDP session. We want to provide similar functionality in one of my company's applications. I've found out on this site how to detect if I'm in an RDP session, but I'm having trouble finding information anywhere on how to detect the name of the client computer. Any pointers in the right direction would be great. I'm coding in C# for .NET 3.5.
    EDIT: The sample code I cobbled together from the advice below - it should be enough for anyone who has a use for WTSQuerySessionInformation to get a feel for what's going on. Note that this isn't necessarily the best way of doing it - just a starting point that I've found useful. When I run this locally, I get boring, expected answers. When I run it on our local office server in an RDP session, I see my own computer name in the WTSClientName property.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;

        namespace TerminalServicesTest
        {
            class Program
            {
                const int WTS_CURRENT_SESSION = -1;
                static readonly IntPtr WTS_CURRENT_SERVER_HANDLE = IntPtr.Zero;

                static void Main(string[] args)
                {
                    StringBuilder sb = new StringBuilder();
                    uint byteCount;
                    foreach (WTS_INFO_CLASS item in Enum.GetValues(typeof(WTS_INFO_CLASS)))
                    {
                        Program.WTSQuerySessionInformation(
                            WTS_CURRENT_SERVER_HANDLE,
                            WTS_CURRENT_SESSION,
                            item,
                            out sb,
                            out byteCount);
                        Console.WriteLine("{0}({1}): {2}", item.ToString(), byteCount, sb);
                    }
                    Console.WriteLine();
                    Console.WriteLine("Press any key to exit...");
                    Console.ReadKey();
                }

                [DllImport("Wtsapi32.dll")]
                public static extern bool WTSQuerySessionInformation(
                    IntPtr hServer,
                    int sessionId,
                    WTS_INFO_CLASS wtsInfoClass,
                    out StringBuilder ppBuffer,
                    out uint pBytesReturned);
            }

            enum WTS_INFO_CLASS
            {
                WTSInitialProgram = 0, WTSApplicationName = 1, WTSWorkingDirectory = 2, WTSOEMId = 3,
                WTSSessionId = 4, WTSUserName = 5, WTSWinStationName = 6, WTSDomainName = 7,
                WTSConnectState = 8, WTSClientBuildNumber = 9, WTSClientName = 10, WTSClientDirectory = 11,
                WTSClientProductId = 12, WTSClientHardwareId = 13, WTSClientAddress = 14, WTSClientDisplay = 15,
                WTSClientProtocolType = 16, WTSIdleTime = 17, WTSLogonTime = 18, WTSIncomingBytes = 19,
                WTSOutgoingBytes = 20, WTSIncomingFrames = 21, WTSOutgoingFrames = 22, WTSClientInfo = 23,
                WTSSessionInfo = 24, WTSSessionInfoEx = 25, WTSConfigInfo = 26, WTSValidationInfo = 27,
                WTSSessionAddressV4 = 28, WTSIsRemoteSession = 29
            }
        }
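    For reference, the P/Invoke pattern more commonly used for WTSQuerySessionInformation marshals the returned buffer as an IntPtr and releases it with WTSFreeMemory, rather than marshalling into a StringBuilder. A minimal sketch (not part of the original post) that reads just WTSClientName:
        using System;
        using System.Runtime.InteropServices;

        // Sketch: query only WTSClientName via the IntPtr/WTSFreeMemory pattern.
        class ClientNameSample
        {
            [DllImport("wtsapi32.dll", SetLastError = true)]
            static extern bool WTSQuerySessionInformation(
                IntPtr hServer, int sessionId, int wtsInfoClass,
                out IntPtr ppBuffer, out uint pBytesReturned);

            [DllImport("wtsapi32.dll")]
            static extern void WTSFreeMemory(IntPtr pMemory);

            const int WTS_CURRENT_SESSION = -1;
            const int WTSClientName = 10;

            static void Main()
            {
                IntPtr buffer;
                uint bytes;
                if (WTSQuerySessionInformation(IntPtr.Zero, WTS_CURRENT_SESSION, WTSClientName, out buffer, out bytes))
                {
                    try
                    {
                        // The ANSI entry point is used by default, so read the buffer as an ANSI string.
                        Console.WriteLine("Client computer: " + Marshal.PtrToStringAnsi(buffer));
                    }
                    finally
                    {
                        WTSFreeMemory(buffer);
                    }
                }
            }
        }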

    Read the article

  • Using VCL for the web (intraweb) as a trick for adding web interface to a legacy non-tiered (2 tiers

    - by user193655
    My team is maintaining a huge client/server Win32 Delphi application. It is a client/server application (thick client) that uses DevArt (SDAC) components to connect to SQL Server. The business logic is often "trapped" in component event handlers, but with some degree of refactoring it is doable to move the business logic into common units (a big part of this work has already been done during refactoring... Maintaining legacy applications someone else wrote is very frustrating, but this is a very common job). Now there is a request for a web interface. I have several options, of course; in this question I want to focus on the VCL for the Web (IntraWeb) option. The idea is to use the common code (the same .pas files) for both the client/server application and the web application. I have heard of many people who moved legacy apps from Delphi to IntraWeb, but here I am trying to keep the thick client too. The idea is to use common code, maybe with some compiler directives to write specific code:
        {$IFDEF CLIENTSERVER}
          {here goes the thick client specific code}
        {$ELSE}
          {here goes the IntraWeb specific code}
        {$ENDIF}
    Then another problem is the "migration plan". Let's say I have 300 features and on the first release I will have only 50 of them available in the web application. How do I keep track of that? I was thinking of (ab)using Delphi interfaces to handle this. For example, for user authentication I could move all the related code into a procedure and declare an interface like:
        type
          IUserAuthentication = interface ['{0D57624C-CDDE-458B-A36C-436AE465B477}']
            procedure UserAuthentication;
          end;
    In this way, as I implement the IUserAuthentication interface in both applications (thick client and IntraWeb), I know that that feature has been "ported" to the web. Anyway, I don't know if this approach makes sense. I made a prototype to simulate the whole process. It works for a "Hello world" application, but I wonder if it makes sense on a large application or whether this interface idea is only counter-productive and could backfire. My question is: does this approach make sense? (The interface idea is just an extra idea; it is not as important as the common code part described above.) Is it a viable option? I understand it depends a lot on the kind of application; to be generic, mine is in the CRM/Accounting domain, and the number of concurrent users on a single installation is typically less than 20, with peaks of 50.
    EXTRA COMMENT (UPDATE): I ask this question because, since I don't have an n-tier application, I see IntraWeb as the only option for having a web application that shares common code with the thick client. Developing web services from the Delphi code makes no sense in my specific case, so the alternative I have is to write the web interface using ASP.NET (duplicating the business logic), but in that case I cannot take advantage of the common code in an easy way. Yes, I could maybe use DLLs, but my code is not suitable for that.

    Read the article

  • Problem in SharePoint object model when accessing SharePoint list items?

    - by JanardhanReddy
    I just wrote:
        using (SPSite site = SPContext.Current.Site)
        {
            using (SPWeb web = site.OpenWeb())
            {
                //SPList lst = web.Lists["ManagerInfo"];
                SPList lst = web.Lists[strlist];
                SPQuery getUserNameQuery = new SPQuery();
                // getUserNameQuery.Query = "<Where><And><Eq><FieldRef Name=\"Region\" /><Value Type=\"Text\">" + strRegion + "</Value></Eq><And><Eq><FieldRef Name=\"PM_x0020_First_x0020_Name\" /><Value Type=\"Text\">" + pmFName + "</Value></Eq><Eq><FieldRef Name=\"PM_x0020_Last_x0020_Name\" /><Value Type=\"Text\">" + pmLname + "</Value></Eq></And></And></Where>";
                // getUserNameQuery.Query = "<Where><And><Eq><FieldRef Name=\"PM_x0020_First_x0020_Name\" /><Value Type=\"Text\">" + pmFName + "</Value></Eq><Eq><FieldRef Name=\"PM_x0020_Last_x0020_Name\" /><Value Type=\"Text\">" + pmLname + "</Value></Eq></And></Where>";
                getUserNameQuery.Query = "<Where><Eq><FieldRef Name=\"PM_x0020_Name\" /><Value Type=\"Text\">" + loginName + "</Value></Eq></Where>";
                SPListItemCollection items = lst.GetItems(getUserNameQuery);
                foreach (SPListItem item in items)
                {
                    managerFName = item["Manager Name"].ToString();
                    strAccounting = item["Accounting"].ToString();
                    managerFName = managerFName.Replace(".", " ");
                    strAccounting = strAccounting.Replace(".", " ");
                    // isFound = true;
                    XPathNavigator managerName = MainDataSource.CreateNavigator().SelectSingleNode("/my:myFields/my:txtManagerName", NamespaceManager);
                    managerName.SetValue(managerFName);
                    XPathNavigator accountingName = MainDataSource.CreateNavigator().SelectSingleNode("/my:myFields/my:txtAccountingName", NamespaceManager);
                    accountingName.SetValue(strAccounting);
                }
            }
        }
    I used this code in an InfoPath form, and the form is used by all users. When the current login user has no permissions on the list it shows an error; when the current login user has full permissions it works. So please advise me on what I can do in order to make it work for all users.
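    One approach commonly used for this kind of permission problem is to run the list query under elevated privileges. A minimal sketch (not from the original post; the method and list names are illustrative) is below. Note that new SPSite/SPWeb objects must be created inside the elevated delegate for the elevation to take effect, and that disposing SPContext.Current.Site, as the original snippet does, is generally discouraged.
        using System;
        using Microsoft.SharePoint;

        // Sketch: read the list under elevated privileges so users without
        // permission on the list can still have the form fields populated.
        class ElevatedListRead
        {
            static void GetManagerInfo(string listName, string loginName)
            {
                string siteUrl = SPContext.Current.Site.Url;

                SPSecurity.RunWithElevatedPrivileges(delegate()
                {
                    // New SPSite/SPWeb objects are required here; reusing
                    // SPContext.Current.Site would keep the caller's permissions.
                    using (SPSite site = new SPSite(siteUrl))
                    using (SPWeb web = site.OpenWeb())
                    {
                        SPList list = web.Lists[listName];
                        SPQuery query = new SPQuery();
                        query.Query = "<Where><Eq><FieldRef Name=\"PM_x0020_Name\" />" +
                                      "<Value Type=\"Text\">" + loginName + "</Value></Eq></Where>";

                        foreach (SPListItem item in list.GetItems(query))
                        {
                            // Use these values to populate the InfoPath fields as in the original code.
                            string managerName = item["Manager Name"].ToString();
                            string accounting = item["Accounting"].ToString();
                            Console.WriteLine(managerName + " / " + accounting);
                        }
                    }
                });
            }
        }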

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of ext4 performance on Compact Flash media. I have created an ext4 filesystem with a block size of 65536; however, I cannot mount it on ubuntu-10.10-netbook-i386 (the same system already mounts ext4 filesystems with a 4096-byte block size). According to my reading on ext4 it should allow such big block sizes. I want to hear your comments.
        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done
        This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb 5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb 5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb 5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug 4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Read the article

  • I'm searching for a messaging platform (like XMPP) that allows tight integration with a web applicat

    - by loxs
    Hi, at the company I work for, we are building a cluster of web applications for collaboration - things like accounting, billing, CRM, etc. We are using a RESTful approach: for the database we use CouchDB, and the different applications communicate with one another and with the database via HTTP. Besides that, we have a single sign-on solution, so that when you log in to one application, you are automatically logged in to the others. For all apps we use Python (Pylons).
    Now we need to add instant messaging to the stack. We need to support both web and desktop clients. But just being able to chat is not enough. We need to be able to achieve all of the following (and more similar things):
    - When somebody gets assigned to a task, they must receive a message. I guess this is possible with some system daemon.
    - There must be an option to automatically group people by lots of different properties. For example, there must be groups divided both by geographical location, by company division, and by job type (all the programmers from different cities and different company divisions must form a group), so that one can send mass messages to a group of choice.
    - Rooms should be automatically created and destroyed. For example, when several people visit the same invoice, a room for them must be automatically created (and they must auto-join). And when all leave the invoice, the room must be destroyed.
    - Authentication and authorization from our applications.
    I can implement this using custom solutions like hookbox (http://hookbox.org/docs/intro.html), but then I'll have lots of problems in supporting desktop clients. I have no prior experience with instant messaging, and I've been reading about this lately. I've been looking mostly at things like ejabberd, but it has been a hard time and I can't find out whether what I want is possible at all. So I'd be happy if people with experience in this field could help me with some advice, articles, tales of what is possible, etc.

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store, you would choose between the few commercial RDBMS solutions and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.
    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. They were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins.
    They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage - a similar type of storage to Berkeley DB, except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk.
    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not.
    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage that manages key/value or document sets and generally has some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.
    Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • Company Review: Google Products

    Google, Inc offers an array of products and services to all of its end-users. However, their search capabilities are the foundation for Google's current success and their primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed users' demands to be met while focusing on the free-based model. This allows users to access Google data free of charge and indirectly gives Google a strong competitive advantage over other competitors, along with the accuracy of the search results. According to Google, Inc, they offer the following types of searching capabilities:
    - Alerts: Get email updates on the topics of your choice
    - Blog Search: Find blogs on your favorite topics
    - Books: Search the full text of books
    - Custom Search: Create a customized search experience for your community
    - Desktop: Search and personalize your computer
    - Dictionary: Search for definitions of words and phrases
    - Directory: Search the web, organized by topic or category
    - Earth: Explore the world from your computer
    - Finance: Business info, news and interactive charts
    - GOOG-411: Find and connect for free with businesses from your phone
    - Images: Search for images on the web
    - Maps: View maps and directions
    - News: Search thousands of news stories
    - Patent Search: Search the full text of US Patents
    - Product Search: Search for stuff to buy
    - Scholar: Search scholarly papers
    - Toolbar: Add a search box to your browser
    - Trends: Explore past and present search trends
    - Videos: Search for videos on the web
    - Web Search: Search billions of web pages
    - Web Search Features: Find movies, music, stocks, books and more
    Google's free-based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined, according to the U.S. Department of Transportation Federal Highway Administration. Because QFD is so customer driven, Google is in a constant state of change in an attempt to reengineer its search algorithms and other dependent systems so that end-users' requirements are constantly being met. Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, system maintainability, and system interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieves the desired level of performance, reliability and customer satisfaction; in addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large scale systems like Google's search applications include modular design of a product and/or service and providing accurate value analysis.
    The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets (cohesiveness, encapsulation, self-containment, and high binding) to design a system component as an independently operable unit subject to change. More specifically, M. S. Schmaltz defines modular software design as taking a large collection of statements strung together in one partition of in-line code and segmenting or dividing the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. Value analysis involves assessing the quality as well as the cost of a product or service, as defined by the Healthcare Financial Management Association. "Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010)
    Google, Inc encourages an open environment between all employees, also known as Googlers. This is reinforced by cross-functional teams, comprised of people from multiple departments and assigned to every project, so that every department, like marketing, finance, and quality assurance, has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc.
    Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and return results to end-users quickly based on specific parameters and search settings. In addition, they also store the data that is returned in case others desire the same results, based on other end-users supplying the same customized settings. This allows Google to appear to render search results in virtually real time to the user while allowing for complete customization of the searching criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google play a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of their proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
    References:
    http://www.google.com/intl/en/options/
    http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
    http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
    http://www.acq.osd.mil/osjtf/termsdef.html
    http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
    http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
    http://mitsloan.mit.edu/omg/om-definition.php
    http://www.humtech.com/opm/grtl/ols/ols3.cfm
    http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • Improving the Industry’s Best Cloud Project Portfolio Management (PPM) Solution – New Release of Instantis EnterpriseTrack

    - by Melissa Centurio Lopes
    By Yasser Mahmud, Vice President of Product Strategy & Industry Marketing, Oracle Primavera
    We know that in today's rapidly changing world, organizations and leaders must adapt to fierce competition, business climate change and customers consistently demanding more for less. And project portfolio management (PPM) initiatives are a key component to help organizations thrive and stand out among competitors. That's why I'm excited to announce Instantis EnterpriseTrack 8.5. Since Oracle's acquisition of Instantis late last year, we've been busy working to enhance the leading cloud PPM solution. Here's what's new:
    Perform More Precise Resource Planning and Management
    - Gain more precise capacity visibility for resource planning and project execution with resource calendars that capture vacation, LOA and part-time resource availability
    - Ensure compliance and governance processes with activity labor cost capitalization
    - Improve project labor cost estimation, tracking and administration with variable resource rates
    Optimize Project Demand Management and Execution
    - Enhance productivity and analysis with project request flexible staffing plan and simplified finance estimation
    - Improve project status communication and execution with estimated time to complete (ETC) in timesheets and projects
    - Achieve audit compliance and governance with field change history for key project and project request fields
    - Enforce proper financial accounting processes with the new strict finance lock/close period option
    Improve Reporting and the User Experience
    - Enhance user productivity and analysis with improved listing pages
    - Improve program reporting with new program filters in listing pages and reports
    - Run large data volume user defined Excel reports with MS Excel 2010 support
    - Accelerate user productivity and satisfaction with an improved user interface for project issues, risks, and scope changes
    - Enjoy faster system response and improved user experience with optimized listing pages, resource planning, and application cache
    - Deliver user self-service training on demand with UPK support
    And if that wasn't enough, we've also made additional improvements to timesheets, field change history and finance lock/close period. Learn more about Instantis EnterpriseTrack 8.5.

    Read the article

  • UPK Pre-Built Content Update

    - by Karen Rihs
    Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} UPK pre-built content development efforts are always underway and growing. Over the last few months, the following new, upgraded, and revised modules became available:  NEW CONTENT RELEASES E-Business Suite 12.1 Install Base Process Manufacturing, Process Quality Fundamentals for EBS Fusion 11g Release 1 Receivables Assets Purchasing Distributed Order Orchestration Payables Functional Setup Manager Project Portfolio Management Self Service Procurement JDE E1 9.0 Accounts Payable 9.0 with 9.1 Tools Fundamentals 9.0 with 9.1 Tools General Ledger 9.0 with 9.1 Tools Accounts Receivable 9.0 with 9.1 Tools Procurement and Subcontract Management 9.0 with 9.1 Tools Oracle Utilities Customer Care and Billing 2.3.1 Administrative Setup User Tasks Primavera Primavera Contract Management 14 Primavera P6 Enterprise Project Portfolio Management 8.2 UPK CONTENT UPGRADES Agile CNM 1.2 Customer Needs Management E-Business Suite 12.1 Project Foundation JDE E1 9.1 Fixed Assets Accounting General Ledger Fundamentals Inventory Management Sales Order Management PeopleSoft 9.1 Reporting Tools for PeopleTools 8.5.2  UPK CONTENT REVISIONS Oracle Utilities for Meter Data Management 2.0.1 Administrative Setup User Tasks VEE and Usage Rules Working with Measurement Data PeopleSoft 9.0 and 9.1 Enterprise Learning Management Reporting Tools for HCM (previously Reporting Tools for HRMS) PeopleSoft 9.1 Expenses General Ledger Inventory Contracts Grants Strategic Sourcing For a list of modules currently available for each product line, visit the UPK Resource Library on Oracle.com. 
For more information on how your organization can take advantage of UPK pre-built content, see our previous blog, The Value of UPK Pre-Built Content. - Karen Rihs, UPK Outbound Product Management

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
At the highest level, these 'operations' span all that you do as an organization.  The following figure illustrates some of these high-level business processes. Enterprise Business Processes Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place. ·            The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale. ·            The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement. ·            The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing. You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise. Global Single Schema Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations, and extensibility.  The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a services (DaaS) to business processes and enable operations in heterogeneous IT environments. ·         Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.  o    This enables model extensions that reflect business entities unique to your organization's operations. ·         The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries. ·         It uses variable byte Unicode to support over 31 languages. ·         The schema encodes flexible date and flexible address formats for easy localizations. No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.   
Interconnected Business Entities Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.
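To make the extensibility and multi-organization points above more concrete, here is a small, purely illustrative Python sketch of the general pattern being described: shared base attributes plus named extension attribute groups scoped to an organization. The class and field names are invented for illustration only; this is not Oracle's actual GSS schema or API.

from collections import defaultdict

class ExtensibleEntity:
    """Toy model of a master record: base attributes every application shares,
    plus named attribute groups that individual organizations bolt on."""
    def __init__(self, entity_id, base_attrs):
        self.entity_id = entity_id
        self.base = dict(base_attrs)          # attributes all applications agree on
        self.ext = defaultdict(dict)          # (org_id, group_name) -> {attr: value}

    def extend(self, org_id, group, **attrs):
        """Add extension attributes visible only to one organization."""
        self.ext[(org_id, group)].update(attrs)

    def view(self, org_id):
        """What a given organization sees: the base data plus its own extensions."""
        merged = dict(self.base)
        for (oid, _group), attrs in self.ext.items():
            if oid == org_id:
                merged.update(attrs)
        return merged

# Usage: two organizations share one supplier master record but keep their own extras.
supplier = ExtensibleEntity("SUP-1001", {"name": "Acme Corp", "country": "US"})
supplier.extend("ORG-EU", "compliance", vat_number="EU123456789")
supplier.extend("ORG-US", "logistics", preferred_carrier="FedEx")
print(supplier.view("ORG-EU"))   # base attributes plus the EU-only extension group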

    Read the article

  • Getting your bearings and defining the project objective

    - by johndoucette
    I wrote this two years ago and thought it was worth posting… Some may think this is a daunting task and some may even say “what a waste of time” and want to open MS Project and start typing out tasks because someone asked for an estimate and a task list. Hell, maybe you even use Excel and pump out a spreadsheet with some real scientific formula for guessing how long it will take to code a bunch of classes. However, this short exercise will provide the basis for the entire project, whether small or large, and be a great friend when communicating to anyone on your team or even your client. I call this the Project Brief. If you find yourself going beyond a single page, then you must decompose the sections and summarize your findings so there is a complete and clear picture of the project you are working on in a relatively short statement. Here is a great quote from the PMBOK (Project Management Body of Knowledge) relative to what a project is: A project is a temporary endeavor undertaken to create a unique product, service or result. With this in mind, the project brief should encompass the entirety (objective) of the endeavor in its explanation and what it will take (goals) to create the product, service or result (deliverables). Normally the process of identifying the project objective is done during the first stage of a project called the Project Kickoff, but you can perform this very important step anytime to help you get a bearing. There are many more parts to helping a project stay on course, but this is usually the foundation it can be grounded on. Through a series of 3 exercises, you should be able to come up with the objective, goals and deliverables on your project. Follow these steps, and in no time (about ½ hour), you will have the foundation of your project plan. (See examples below) Exercise 1 – Objectives Begin with the end in mind. Think about your project in business terms with a couple of things to help you understand the objective; Reference the business benefit in terms of cost, speed and / or quality, Provide a higher level of what the outcome will look like (future sense) It should be non-measurable, that’s what the goals are all about The output should be a single paragraph with three sentences and take 10 minutes to write. *Typically, agreement must be reached on the objectives of the project before you would proceed to the next steps of the project. Exercise 2 – Goals A project goal is a statement that answers questions about who, what, why, where and when. A good project goal statement; Answers the five “W” questions for the project Is measurable in each of its parts Is published and agreed on by all the owners This helps the Project Manager receive confirmation on defining the project target. Using the established project objective done in the first exercise, think about the things it will take to get the job done. Think about tangible activities which are the top level tasks in a typical Work Breakdown Structure (WBS). The overall goal statement plus all the deliverables (next exercise) can be seen as the project team’s contract with the project owners. Write 3 - 5 goals in about 10 minutes. You should not write the words “Who, what, why, where and when,” but merely be able to answer the questions when you read a goal. Exercise 3 – Deliverables Every project creates some type of output and these outputs are called deliverables.
There are two classes of deliverables; Internal – produced for project team members to meet their goals External – produced for project owners to meet their expectations The list you enter here provides a checklist for the team’s delivery and/or is a statement of all the expectations of the project owners. Here are some typical project deliverables; Product and product documentation End product/system Requirements/feature documents Installation guides Demo/prototype System design documents User guides/help files Plans Project plan Training plan Conversion/installation/delivery plan Test plans Documentation plan Communication plan Reports and general documentation Progress reports System acceptance tests Outstanding bug list Procedures Risk and issue logs Project history Deliverables should go with each of the goals. Have 3-5 deliverables for each goal. When you are done, you will have established a great foundation for the clarity of your project. This exercise can take some time, but with practice, you should be able to whip this one out in 10 minutes as well, especially if you are intimate with an ongoing project. Samples  Objective [Client] is implementing a series of MOSS sites to support external public (Internet), internal employee (Intranet) and an external secure (password protected Internet) applications. This project will focus on the public-facing web site and will provide [Client] with architectural recommendations based on the current design being done by their design partner [Partner] and the internal Content Team. In addition, it will provide [Client] with a development plan and confidence they need to deploy a world class public Internet website. Goals 1.  [Consultant] will provide technical guidance and set project team expectations for the implementation of the MOSS Internet site based on provided features/functions within three weeks. 2.  [Consultant] will understand phase 2 secure password-protected Internet site design and provide recommendations.   Deliverables 1.1  Public Internet (unsecure) Architectural Recommendation Plan 1.2  Physical Site construction Work Breakdown Structure and plan (Time, cost and resources needed) 2.1  Two Factor authentication recommendation document   Objective [Client] is currently using an application developed by [Consultant] many years ago called "XXX". This application, although functional, does not meet their new updated business requirements and contains a few defects which [Client] has developed work-around processes. [Client] would like to have a "new and improved" system to support their membership management needs by expanding membership and subscription capabilities, provide accounting integration with internal (GL) and external (VeriSign) systems, and implement hooks to the current CRM solution. This effort will take place through a series of phases, beginning with envisioning. Goals 1. Through discussions with users, [Consultant] will discover current issues/bugs which need to be resolved which must meet the current functionality requirements within three weeks. 2. [Consultant] will gather requirements from the users about what is "needed" vs. "what they have" for enhancements and provide a high level document supporting their needs. 3. [Consultant] will meet with the team members through a series of meetings and help define the overall project plan to deliver a new and improved solution. 
Deliverables 1.1 Prioritized list of Current application issues/bugs that need to be resolved 1.2 Provide a resolution plan on the issues/bugs identified in the current application 1.3 Risk Assessment Document 2.1 Deliver a Requirements Document showing high-level [Client] needs for the new XXX application. · New feature functionality not in the application today · Existing functionality that will remain in the new functionality 2.2 Reporting Requirements Document 3.1 A Project Plan showing the deliverables and cost for the next (second) phase of this project. 3.2 A Statement of Work for the next (second) phase of this project. 3.3 An Estimate of any work that would need to follow the second phase.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event.  If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time.  As always, I sighed, had a soothing cup of tea, and typed it all in again.  The new code I hastily tapped in  was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain  the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better.  Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch.  If you’ve never accidentally lost  your code, then it is worth doing it deliberately once for the experience. Creative people have, until Technology mistakenly prevented it, torn up their drafts or sketches, threw them in the bin, and started again from scratch.  Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual:  Most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000 word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version.  Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting a light to it on a bonfire, and rewriting it from the blank page.  It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code.  For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive.  There were some fragments left on individual machines, but they were all of different versions.  The developers were in despair.  Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend.  Sure, that elegant universally-applicable input-form routine was‘nt quite so elegant, but it didn’t really need to be as we knew what forms it needed to support.  Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests. 
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can’t see many managers buying into the idea. In reading the full story of the near-loss of Toy Story 2, it set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway.  Was this an early  case of the ‘source-control wet-job’?’ It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source-code.

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official, every job begins with a probation period. You start out with lots of questions and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now at the end of my second quarter on the "job" of serving on the PASS Board I have reached that point. My probation period is over. The last three months were busy for the entire Board with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board adopt a Code of Conduct for the PASS Summit. Code of Conduct When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform. I was motivated to do this for a few reasons. Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident. "This is Community" Adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit. PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area. After researching similar policies from other organizations I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post. Virtual Chapters In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then each edition of the Connector has highlighted a different VC to help get out the message about the variety of educational opportunities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together. Board Meeting/SQL Rally The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members.
And my hat is off to the SQL Rally organizers who put on an outstanding event. Global Growth PASS has undertaken a major initiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated. If you have not done so, I encourage you to read about the Global Growth vision and proposed governance changes and submit your feedback. FY13 Budget July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time. SQL Saturday I attended SQL Saturdays in Orange Co., CA and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts. Next Up This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month. Here is the language of the Code of Conduct I have submitted to the Board for consideration: PASS Code of Conduct The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn. We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification. Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS Staff, Exhibitors, Speakers, Attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events. Participant behavior Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification. Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.
Recourse If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their “Headquarters/Staff” shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC) made up of the Executive Manager and three members of the Board of Directors designated by the President will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • 7 Good Reasons to Upgrade E-Business Suite to the cloud

    - by Lisa Schwartz
    As promised, here is blog Part 2: Why Upgrade to Oracle E-Business Suite 12 in the cloud? 7 Good Reasons to Upgrade to E-Business Suite 12 in the Cloud: 1) Take advantage of new and improved features: from global sub-ledger accounting to mobile access for supply chain management to built-in extensions for information search and discovery. If you haven’t checked out the latest features yet, there are over 1000 EBS 12 enhancements. 2) Plan now to address any ongoing Oracle Support considerations and regulatory compliance requirements. EBS Release 11 support is ending soon. Based upon that information alone, you should have an EBS upgrade strategy and planning well underway. 3) Customizations got you worried? Expedite your next Oracle E-Business Suite upgrade – have Oracle identify all customizations, reduce un-needed customizations (EBS 12 has built in many of your customizations) and during the upgrade keep all necessary customizations to run your business. 4) Migrating EBS to the cloud allows parallel migration and testing. Therefore no extra hardware purchases for the testing and upgrade. Business disruption is minimized. And, by moving to the cloud, this provides for smoother future upgrades that are based on your own timeline. 5) Oracle Experts will upgrade and run your EBS applications for you in the cloud. Free your IT resources to develop new services and work on projects that are critical to business innovation and competitiveness. Your IT resources will not be inundated with upgrade tasks! 6) Reallocate precious IT dollars to other projects, eliminate CapEx costs. 7) Oracle minimizes business risk by having enterprise class cloud services under stringent SLAs designed to run your business applications for you such as: a. Enterprise grade infrastructure b. World-class security and identity management c.
Best practices in regulatory compliance: from classified federal gov’t standards, to healthcare HIPAA standards, to meeting Financial Services requirements (PCI DSS). Next Step: To help you upgrade and get to the cloud in the shortest period of time, Oracle has a program called Oracle Upgrade Factory for Oracle E-Business Suite 12. It offers a unique approach, seamlessly bundling Managed Cloud Services and Oracle Consulting Services together for an entire Oracle E-Business Suite upgrade and migration to a managed private cloud. Read the Oracle Upgrade Factory Solution Brief here.

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, general roadmap, description of functionality and implementation steps related to Oracle BI applications. In the first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that. The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom has different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail. Why BI apps? Demonstrate the value of BI to a business user, show reports / dashboards / model that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of BI project and significantly lower risk and improve success Build Vs Buy benefit Don’t have to start with a blank sheet of paper. Help consolidate disparate systems Data integration in M&A situations Insulate BI consumers from changes in the OLTP Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy Prebuilt Integrations BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP Co-developed with inputs from functional experts in BI and Applications teams. Out of the box dimensional model to source model mappings Multi source and Multi Instance support Rich Data Model BI apps have a very rich dimensional data model built over 10 years that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities. Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360 Over 360 fact tables across 7 product areas CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM - 56 Supported by 300 physical dimensions Support for extensive calendars; Gregorian, enterprise and ledger based Conformed data model and metrics for real time vs warehouse based reporting Multi-tenant enabled Extensive BI related transformations BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All these have been distilled into common patterns and abstracted logic which can be readily reused across different modules Slowly Changing Dimension support Hierarchy flattening support Row / Column Hybrid Hierarchy Flattening As Is vs.
As Was hierarchy support Currency Conversion: Support for 3 corporate, CRM, ledger and transaction currencies UOM conversion Internationalization / Localization Dynamic Data translations Code standardization (Domains) Historical Snapshots Cycle and process lifecycle computations Balance Facts Equalization of GL accounting chartfields/segments Standardized values for categorizing GL accounts Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL Materialization of data only available through costly and complex APIs e.g. Fusion Payroll, EBS / Fusion Accruals Complex event interpretation of source data – E.g. o What constitutes a transfer o Deriving supervisors via position hierarchy o Deriving primary assignment in PSFT o Categorizing and transposition to measures of Payroll Balances to specific metrics to support side by side comparison of measures of, for example, Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments. o Counting of Events – E.g. converting events to fact counters so that for example the number of hires can easily be added up and compared alongside the total transfers and terminations. Multi pass processing of multiple sources e.g. headcount, salary, promotion, performance to allow side-by-side comparison. Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher level analytical reporting and data discovery Calculation of complex measures examples: o COGS, DSO, DPO, Inventory turns etc o Transfers within a Hierarchy or out of / into a hierarchy relative to the viewpoint in the hierarchy. Configurability and Extensibility support BI apps offer support for extensibility for various entities as automated extensibility or part of extension methodology Key Flex fields and Descriptive Flex support Extensible attribute support (JDE) Conformed Domains ETL Architecture BI apps offer a modular adapter architecture which allows support of multiple product lines into a single conformed model Multi Source Multi Technology Orchestration – creates a load plan taking into account task dependencies and the customer's deployment, generating a plan for a customer's set of multiple complex ETL tasks Plan optimization allowing parallel ETL tasks Oracle: Bitmap indexes and partition management High availability support Follow-the-sun support.
TCO BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation Improved cost of ownership – lower cost to deploy On-going support for new versions of the source application Task-based setup flows Data Lineage Functional setup performed in Web UI by a functional person Configuration Test to Production support Security BI apps support both data and object security enabling implementations to quickly configure the application as per the reporting security needs Fine grain object security at report / dashboard and presentation catalog level Data Security integration with source systems Extensible to support external data security rules Extensive Set of KPIs Over 7000 base and derived metrics across all modules Time series calculations (YoY, % growth etc) Common Currency and UOM reporting Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc) Prebuilt reports and dashboards 3000+ prebuilt reports supporting a large number of industries Hundreds of role based dashboards Dynamic currency conversion at dashboard level Highly tuned Performance The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance. Optimized data model for BI and analytic queries Prebuilt aggregates & the ability for customers to create their own aggregates easily on warehouse facts allows for scalable end user performance Incremental extracts and loads Incremental Aggregate build Automatic table index and statistics management Parallel ETL loads Source system deletes handling Low latency extract with Golden Gate Micro ETL support Bitmap Indexes Partitioning support Modularized deployment, start small and add other subject areas seamlessly Source-Specific Staging and Real Time Schema Support for source specific operational reporting schema for EBS, PSFT, Siebel and JDE Application Integrations The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making Embedded dashboards for Fusion, EBS and Siebel applications Action Link support Marketing Segmentation Sales Predictor Dashboard Territory Management External Integrations The BI apps data integration choices include support for loading external data External data enrichment choices: UNSPSC, Item class etc. Extensible Spend Classification Broad Deployment Choices Exalytics support Databases: Oracle, Exadata, Teradata, DB2, MSSQL ETL tool of choice: ODI (coming), Informatica Extensible and Customizable Extensible architecture and Methodology to add custom and external content Upgradable across releases. Thanks for reading a long post, and be on the lookout for future posts. We will look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.
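One of the transformation patterns listed above, Slowly Changing Dimension support, is worth a small illustration of what it enables for "As Is vs. As Was" reporting. The sketch below is purely illustrative: plain Python with invented field names, not the BI apps' shipped ETL mappings, but it shows the Type 2 pattern of closing out a dimension row when a tracked attribute changes and appending a new current row so history is preserved.

from datetime import date

def scd2_apply(dimension_rows, incoming, today=None):
    """Apply Type 2 slowly-changing-dimension logic: expire the current row
    when a tracked attribute changes and append a new current row."""
    today = today or date.today()
    current = {r["natural_key"]: r for r in dimension_rows if r["current_flag"]}
    for rec in incoming:
        key = rec["natural_key"]
        old = current.get(key)
        if old and old["attributes"] == rec["attributes"]:
            continue                      # nothing changed, keep the existing row
        if old:
            old["current_flag"] = False   # close out the old version
            old["effective_to"] = today
        new_row = {
            "natural_key": key,
            "attributes": dict(rec["attributes"]),
            "effective_from": today,
            "effective_to": None,
            "current_flag": True,
        }
        dimension_rows.append(new_row)
        current[key] = new_row
    return dimension_rows

# Usage: a supplier moves city; the old row is kept for "as was" analysis,
# while "as is" reporting joins to the row flagged as current.
dim = [{"natural_key": "SUP-1", "attributes": {"city": "Dallas"},
        "effective_from": date(2011, 1, 1), "effective_to": None, "current_flag": True}]
scd2_apply(dim, [{"natural_key": "SUP-1", "attributes": {"city": "Austin"}}])
print(dim)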

    Read the article

  • Chart Filtering

    - by Tim Dexter
    Interesting question from a colleague this week. Can you add a filter to a chart to just show a specific set of data? In an RTF template, you need to do a little finagling in the chart definition. In an online template, a couple of clicks and you're done. RTF Build your chart as you would normally to include all the data to start with. Now flip to the Advanced tab to see the code behind the chart. It's not very pretty, but with a little effort you can get it looking a little more friendly. Here's my chart showing employees and their salaries. <Graph depthAngle="50" depthRadius="8" seriesEffect="SE_AUTO_GRADIENT"> <LegendArea visible="true"/> <Title text="Executive Department Only" visible="true" horizontalAlignment="CENTER"/> <LocalGridData colCount="{count(.//G_2)}" rowCount="1"> <RowLabels> <Label>SALARY</Label> </RowLabels> <ColLabels> <xsl:for-each select=".//G_2"> <Label><xsl:value-of select="EMP_NAME"/></Label> </xsl:for-each> </ColLabels> <DataValues> <RowData> <xsl:for-each select=".//G_2"> <Cell><xsl:value-of select="SALARY"/></Cell> </xsl:for-each> </RowData> </DataValues> </LocalGridData> </Graph> Note the emboldened text. It's currently grabbing all values in the G_2 level of the data. We can use an XPATH expression to filter the data to the set we want to see. In my case I want to only see the employees that are in the Executive department. My data is structured thus:   <DATA_DS>     <G_1>         <DEPARTMENT_NAME>Accounting</DEPARTMENT_NAME>         <G_2>             <MANAGER>Higgins</MANAGER>             <EMPLOYEE_ID>206</EMPLOYEE_ID>             <HIRE_DATE>2002-06-07T00:00:00.000-04:00</HIRE_DATE>             <SALARY>8300</SALARY>             <JOB_TITLE>Public Accountant</JOB_TITLE>             <PARAS>11000</PARAS>             <EMP_NAME>William Gietz</EMP_NAME>         </G_2> So the XPATH expression I'm going to use to limit the data to the Executive department would be .//G_2[../DEPARTMENT_NAME='Executive'] Note the ../ moves the parser up the XML tree to be able to test the DEPARTMENT_NAME value. I added this XPATH expression to the three instances that need it: ColCount, ColLabels and RowData. It's simple enough to do. Testing your XPATH expression is easier to do using a table of data. Please note: as soon as you make changes to the chart code and go back to the Builder tab, you'll find that everything is grayed out. I recommend you make all the changes you can via the chart dialog before updating the code. Online Template Implementing the filter is much simpler, there is a dialog box to help you out. Add your chart and fill out the various data points you want to show. Then hit the Filter item in the ribbon above the chart. That will pop the filter dialog box where you can then add a filter to the chart. You can add multiple filters if needed and of course you can use the Manage Filters button to re-open and edit the filters. Pretty straightforward stuff!
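As a quick way to sanity-check the filter before pasting it into the chart definition, the same XPath can be evaluated outside BI Publisher. Below is a small sketch that assumes Python with the lxml package is available; the Executive rows are made-up test data (only the Accounting row comes from the sample above), and the elements are trimmed to the fields the chart uses.

from lxml import etree

# Cut-down copy of the DATA_DS structure, with an invented Executive group added
# so the filter has something to match.
sample = """<DATA_DS>
  <G_1>
    <DEPARTMENT_NAME>Accounting</DEPARTMENT_NAME>
    <G_2><EMP_NAME>William Gietz</EMP_NAME><SALARY>8300</SALARY></G_2>
  </G_1>
  <G_1>
    <DEPARTMENT_NAME>Executive</DEPARTMENT_NAME>
    <G_2><EMP_NAME>Hypothetical Exec One</EMP_NAME><SALARY>24000</SALARY></G_2>
    <G_2><EMP_NAME>Hypothetical Exec Two</EMP_NAME><SALARY>17000</SALARY></G_2>
  </G_1>
</DATA_DS>"""

root = etree.fromstring(sample)

# The same expression used in the ColCount, ColLabels and RowData sections of the chart:
# keep only the G_2 rows whose parent group is the Executive department.
for emp in root.xpath(".//G_2[../DEPARTMENT_NAME='Executive']"):
    print(emp.findtext("EMP_NAME"), emp.findtext("SALARY"))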

    Read the article

  • apache2 how to trace caller of SIGTERM

    - by art vanderlay
    I have a Debian x64 guest on a VirtualBox Win7 Pro host. My apache2 will stop responding after a page request or other activity such as an upload via ftp. The php.cgi becomes non-responsive and a restart is required. Any help tracking down the culprit sending the SIGTERM would be much appreciated. Thanks, Art. My apache2.conf has <IfModule mpm_prefork_module> ServerLimit 1024 StartServers 10 MinSpareServers 10 MaxSpareServers 20 MaxClients 1024 MaxRequestsPerChild 0 </IfModule> From the apache2 log I have [Wed Jun 20 05:07:01 2012] [notice] caught SIGTERM, shutting down [Wed Jun 20 05:07:03 2012] [notice] FastCGI: process manager initialized (pid 4369) [Wed Jun 20 05:07:03 2012] [notice] Apache/2.2.16 (Debian) mod_fastcgi/2.4.6 PHP/5.3.3-7+squeeze13 with Suhosin-Patch mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations and from the accounting output with lastcomm php.cgi www-data __ 0.13 secs Wed Jun 20 04:49 lastcomm root pts/2 0.10 secs Wed Jun 20 04:49 php.cgi www-data __ 0.18 secs Wed Jun 20 04:49 php.cgi www-data __ 0.18 secs Wed Jun 20 04:47 apache2 root pts/1 0.02 secs Wed Jun 20 04:46 tput root pts/1 0.00 secs Wed Jun 20 04:46 apache2 F root pts/1 0.00 secs Wed Jun 20 04:46 apache2ctl root pts/1 0.00 secs Wed Jun 20 04:46 apache2 S root pts/1 0.77 secs Wed Jun 20 04:46 rm root pts/1 0.01 secs Wed Jun 20 04:46 install root pts/1 0.01 secs Wed Jun 20 04:46 mkdir root pts/1 0.00 secs Wed Jun 20 04:46 apache2ctl F root pts/1 0.00 secs Wed Jun 20 04:46 sleep root pts/1 0.00 secs Wed Jun 20 04:46 apache2 SF root __ 0.54 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.14 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.07 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.06 secs Wed Jun 20 04:36 apache2 SF www-data __ 0.07 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.11 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.02 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.04 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.06 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.08 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.03 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.02 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.01 secs Wed Jun 20 04:34 grep root pts/1 0.00 secs Wed Jun 20 04:46 apache2ctl root pts/1 0.02 secs Wed Jun 20 04:46 apache2 root pts/1 0.24 secs Wed Jun 20 04:46 apache2 SF www-data __ 0.00 secs Wed Jun 20 04:34 apache2ctl F root pts/1 0.00 secs Wed Jun 20 04:46 apache2ctl root pts/1 0.00 secs Wed Jun 20 04:46 apache2 root pts/1 0.22 secs Wed Jun 20 04:46 apache2ctl F root pts/1 0.01 secs Wed Jun 20 04:46 apache2 F root pts/1 0.00 secs Wed Jun 20 04:46 grep root pts/1 0.00 secs Wed Jun 20 04:46 tr root pts/1 0.00 secs Wed Jun 20 04:46 pidof S root pts/1 0.11 secs Wed Jun 20 04:46 cat root pts/1 0.00 secs Wed Jun 20 04:46 apache2 F root pts/1 0.00 secs Wed Jun 20 04:46 grep root pts/1 0.00 secs Wed Jun 20 04:46 tr root pts/1 0.00 secs Wed Jun 20 04:46 pidof S root pts/1 0.05 secs Wed Jun 20 04:46 cat root pts/1 0.01 secs Wed Jun 20 04:46 apache2 F root pts/1 0.00 secs Wed Jun 20 04:46 apache2ctl root pts/1 0.00 secs Wed Jun 20 04:46 apache2 root pts/1 0.34 secs Wed Jun 20 04:46 apache2ctl F root pts/1 0.00 secs Wed Jun 20 04:46 apache2 F root pts/1 0.00 secs Wed Jun 20 04:46 apache2 F root pts/1 0.00 secs Wed Jun 20 04:46 smbd SF root __ 0.25 secs Wed Jun 20 04:46 php.cgi www-data __ 0.14 secs Wed Jun 20 04:45 php.cgi www-data __ 0.19 secs Wed Jun 20 04:42 cron SF root __ 0.02 secs Wed Jun 20 04:39 sh S root __ 0.00 secs Wed Jun 20 04:39 find root __ 0.00 secs Wed Jun 20 04:39 
maxlifetime root __ 0.02 secs Wed Jun 20 04:39 php5 root __ 0.13 secs Wed Jun 20 04:39 which root __ 0.00 secs Wed Jun 20 04:39 exim4 S root __ 0.01 secs Wed Jun 20 04:37 php.cgi www-data __ 0.04 secs Wed Jun 20 04:36 php.cgi www-data __ 0.12 secs Wed Jun 20 04:35 php.cgi www-data __ 0.11 secs Wed Jun 20 04:35 php.cgi www-data __ 0.14 secs Wed Jun 20 04:34 lastcomm root pts/2 0.09 secs Wed Jun 20 04:34 apache2 root pts/1 0.02 secs Wed Jun 20 04:34 tput root pts/1 0.00 secs Wed Jun 20 04:34 apache2 F root pts/1 0.00 secs Wed Jun 20 04:34 apache2ctl root pts/1 0.00 secs Wed Jun 20 04:34 apache2 S root pts/1 0.54 secs Wed Jun 20 04:34 rm root pts/1 0.00 secs Wed Jun 20 04:34 install root pts/1 0.00 secs Wed Jun 20 04:34 mkdir root pts/1 0.00 secs Wed Jun 20 04:34 apache2ctl F root pts/1 0.00 secs Wed Jun 20 04:34 sleep root pts/1 0.00 secs Wed Jun 20 04:34 apache2 SF root __ 0.80 secs Wed Jun 20 03:58 sleep root pts/1 0.00 secs Wed Jun 20 04:34 apache2 SF www-data __ 0.26 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.12 secs Wed Jun 20 03:59 apache2 SF www-data __ 0.13 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.13 secs Wed Jun 20 03:59 apache2 SF www-data __ 0.15 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.18 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.07 secs Wed Jun 20 04:21 apache2 SF www-data __ 0.18 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.17 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.30 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.09 secs Wed Jun 20 03:58 apache2 SF www-data __ 0.02 secs Wed Jun 20 04:13
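    Process accounting (lastcomm) records what ran, but not which process signalled which; one way to catch the sender of the SIGTERM is to audit the kill() syscall with the Linux audit framework. A minimal sketch, assuming the auditd package is available on this Debian guest (the key name sigterm_trace is arbitrary):

    # Install the audit daemon
    apt-get install auditd

    # Record every kill() syscall along with the calling process
    auditctl -a always,exit -F arch=b64 -S kill -k sigterm_trace
    auditctl -a always,exit -F arch=b32 -S kill -k sigterm_trace

    # After the next unexpected shutdown, list the recorded events;
    # SYSCALL records with a1=f (signal 15) show the sending pid, comm and exe
    ausearch -k sigterm_trace

    Correlating the exe/comm fields of those records with the lastcomm output above should narrow down whether the SIGTERM is coming from a cron job, an apache2ctl invocation, or something else entirely.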

    Read the article

  • CodePlex Daily Summary for Monday, March 15, 2010

    CodePlex Daily Summary for Monday, March 15, 2010New ProjectsAT Accounts: AT Accounts helps developers to intergrate accounting functionality in their applications. It has both the WPF userinterface and SilverlightChild page list(for dnn4/5): A free module which can display sub pages list for a selected tab. It is template based and support options like Recursive/Child tab prefix/link...dashCommerce: dashCommerce is the leading ASP.NET e-commerce platform.Fire Utilities: My Development Utiltites and base classes: New Zealand Bank Account ValidatorFlyCatch (Bugtracking System): A simple webbased Bugtracking System.fracback: Fractal feedback concepts, based on video feedbackftc3650: code for ftc 3650Google AJAX Search Services for jQuery: This plug-in encapsulates part of the Google AJAX Search API to streamline the process of Google Search integration.Little Black Book DB: This is the Database for the following Projects: SQL Azure PHP Connection SQL Azure Ruby Connection SQL Azure Python Connection SQL Azure .NE...MediaCommMVC: MediaCommMVC is a community platform focusing on photos, videos and discussions. It's based on ASP.NET MVC and uses (fluent) nhibernate, jquery an...Miracle OS: The Miracle OS is an OS from Fox. We work on it, but it isn't ready. Do you want help us? Please send a mail to victor@fox.fi.stMultiwfn: (1)Plotting various graph(filled color/contour/relief map...) (2)Generate Cube file (3)Manipulate & analyze wavefunction Supportting lots of proper...MySpace DataRelay: Data Relay is the foundation of MySpace's middle tier. At its heart, it is a messaging system for relaying information both between clients and ser...NinjaCMS: Ninja CMS is an asp.net based content management system which provides a designer friendly, developer friendly interface to work with. It's flexibl...open gaze and mouse analyzer: Ogama allows recording and analyzing eye- and mouse-tracking data from slideshow eyetracking experiments in parallel. It´s developed in C#.NET and ...Özkasoft.Net | E-Commerce: Özkasoft's E-Commerce ProjectProfiCV: Profi CVpyTarget: Implement a powerful iscsi target in python, and easily use under most popular systems. It also includes the following features: multi-target, mult...SharePoint Platform Extensions: SharePoint Platform Extensions by Espora. Sorting Algorithm Visualization: Sorting Algorithm Visualization Displays Bead Sort, Binary Tree Sort, Bubble Sort, Bucket Sort, Cocktail Sort, Counting Sort, Gnome Sort, In Place ...Specify: A framework for creating executable specifications in .NET. Spell Corrector: A spell corrector that uses Bayes algorithm and BK (Burkhard-Keller) tree.SQL Azure Ruby Connection: This is a demo to show how to connect to SQL Azure with Ruby on Rails.uManage - AD Self-Service Portal: uManage is an Active Directory Self-Service Portal as well as Help Desk web application designed for use on intranet systems. It allows users to u...Winforms Rounded Group Box Control: Rounded Group Box - A Grouping control with Rounded Corners, Gradients, and Drop ShadowWizard Engine: Host application agnostic wizard engine platform, that allows you to fluently define complex conditional flows and provides means for execution of ...WS-Transfer based File Upload: WS-Transer based upload of large files in multiple partsXAMLStylePad: XAMLStylePad - is a simple in use styles and templates XAML-editor. 
It designed for comfortable coding in XAML with real-time preview result on aut...Your Twitt Engine: Ovo je aplikacija za sve ljude koji su na svom radnom mjestu pod prismotrom poslodavca ili sefa, koji kontroliraju njihov monitor. Tako uz ovu apl...New ReleasesAmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AniBroker Plug-in SDK v0.0.5: Removed dependency on .NET 4.0, now it works fine with .NET 2.0BeerMath.net: 0.1: Version 0.1Initial set of calculations supported: IBUs Color ABV/ABWChild page list(for dnn4/5): Child Page List 2.6: Source code is also include in module package.dashCommerce: dashCommerce Releases: You can download both Source and WebReady packages at http://www.dashcommerce.org. If you wish to submit patches, then use the Source Code tab her...ExcelDna: ExcelDna Version 0.23: ExcelDna Version 0.23 2010/03/14 - Packing and other features This release adds a number of features to ExcelDna: Add ExplicitExports attribute to ...Family Tree Analyzer: Version 1.0.7.1: Version 1.0.7.0 Update Census form to show family totals Fix England and Wales Lost Cousins reports to be England OR Wales Problems with Gedcom in...Foursquare BlogEngine Widget: foursquare widget for BlogEngine.NET Version 0.2: To see the changes which have been made, visit http://philippkueng.ch/post/Foursquare-BlogEngineNET-Widget-Version-02.aspx For installation instruc...GLB Virtual Player Builder: 0.4.0 Official Archetypes Release: Updated for new archetypes. The builder still includes the old player formats, and you can still import your old players' builds. Please PM me an...Home Access Plus+: v3.1.4.0: Version 3.1.3.1 Release Change Log: Added Breadcrumbs to My Computer File Changes: ~/bin/CHS Extranet.dll ~/bin/CHS Extranet.pdb ~/images/arro...Little Black Book DB: Little Black Book R1: This is the first release of the Little black book presentation I presented at Confoo. I decided to package the Database along with the Windows Az...mite.net - .NET API for mite: Version 1.2.1: Added Support for budget type Modified TimerMapper to return timers Fixed Encoding issue in xml conversionMultiwfn: multiwfn1.0: multiwfn1.0Multiwfn: multiwfn1.0_source: multiwfn1.0_sourceMultiwfn: multiwfn1.1: multiwfn1.1Multiwfn: multiwfn1.1_source: multiwfn1.1_sourceMultiwfn: multiwfn1.2: 1.2 2010-FEB-9 *加入了对10f型轨道的支持。 *新支持非限制性Post-HF波函数用以计算自旋密度。 *新增加直接读入高斯03/09的fch文件的支持,可以观看NBO轨道,详见readme实例4.10。 *绘制平面图时允许通过输入三个点坐标定义平面,允许自定义平面的原点与平移向...Multiwfn: multiwfn1.2_source: Include all the file that needed by compilation in CVF6.5PowerShell Community Extensions: 2.0 Beta 2: Release NotesThis is a pretty close to final release. We have eliminated all of the names that ran afound of the module loading mechanism which me...pyTarget: pyTarget.binary-for-windows-x86.rar: pyTarget.binary-for-windows-x86.rarpyTarget: pyTarget.src.tar.bz2: pyTarget.src.tar.bz2RedBulb for XNA Framework: RedBulbConsole (Console, Menu and TrackHUD Sample): http://bayimg.com/image/jalhmaacd.jpgScrum Sprint Monitor: 1.0.0.45262 (.NET 4.0 RC): Tested against TFS 2010 RC. For the .NET 3.5 SP1 platform, use the .NET 3.5 SP1 download. What is new in this release? 
Major performance increase ...sELedit: sELedit v1.1: Removed: Clone and Delete Button Added: Context Menu to Item List Added: Clone and Delete button to Context Menu Added: Export / Import Item ...Sorting Algorithm Visualization: Beta 1: Sorting Algorithm VisualizationSpecify: Version 1.0: Version 1.0Spell Corrector: Spell Corrector 0.1: A basic version that supports basic functionality.Spell Corrector: Spell Corrector 0.1 Source Code: Source code of version 0.1Spiral Architecture Driven Development (SADD): SADD v.0.9: Pre-final release with the NEW materials now all in English ! The Final release is coming soon. After guest column for SADD publication in MS Ar...Spiral Architecture Driven Development (SADD) for Russian: SADD v.0.9: Pre-final release with the NEW materials now all in English ! The Final release is coming soon. After guest column for SADD publication in MS Ar...SQL Azure Ruby Connection: Little Black Book Ruby R1: This is the Ruby Demo that I demostrated at Confoo. Special Thanks to Tony Thompson for putting this demo together. To check out Tony's Portfolio ...The Scrum Factory: The Scrum Factory Server - V1a: This is the newest version of the server. Some minor bugs from version v1 were fixed, and some slighted changed were made some database views.twNowplaying: twNowplaying 1.0.0.4: Please note that the user has to press the Twitter logo to log in the first time the application is started.uManage - AD Self-Service Portal: uManage - v1.0 (.NET 4.0 RC): Initial Release of uManage. NOTE: Designed for ASP.NET and .NET 4.0 RC ONLY! This is the initial release of uManage and covers the first phase of ...Virtu: Virtu 0.8: Source Requirements.NET Framework 3.5 with Service Pack 1 Visual Studio 2008 with Service Pack 1, or Visual C# 2008 Express Edition with Service Pa...Visual Studio DSite: Speech Synthesizer (Text to Speech) in Visual C++: A very simple text to speech program written in visual c 2008.White Tiger: 0.0.4.0: *now you can disable the file security checks *winforms aplications created to manage tablesWinforms Rounded Group Box Control: Release 1.0: To use this control simply add the class to your project and compile it. It will then show up in the projects components section in the toolbox. ...WS-Transfer based File Upload: 0.5: Implements the binary file transfer mechanism onlyXsltDb - DotNetNuke XSLT module: 01.00.89: Super modules configuration names. 16767 - Fixed more bug fixes...Yakiimo3D: DirectX11 Rheinhard Tonemapping Source and Binary: DirectX11 Rheinhard tonemapping source and binary.Your Twitt Engine: test: Slobodno probajte sa vasim twitter korisničkim računomMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitASP.NET Ajax LibraryWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMost Active ProjectsLINQ to TwitterRawrN2 CMSBlogEngine.NETpatterns & practices – Enterprise LibrarySharePoint Team-MailerjQuery Library for SharePoint Web ServicesCaliburn: An Application Framework for WPF and SilverlightFarseer Physics EngineCalcium: A modular application toolset leveraging Prism

    Read the article

  • Juniper SSG-5 subinterface vlan routing to the internet

    - by catfish
    I'm unable to get a brand new Juniper SSG-5 with the latest 6.3.0r05 firmware routing to the internet from a subinterface I created on bgroup0, set up as vlan2 (bgroup0.1 in the "wifi" zone). When connected on the default vlan it gets to the internet just fine. When I switch to vlan2 I'm unable to get to the internet. I am able to get a correct IP address (in 10.150.0.0/24) from DHCP and am able to get to the Juniper management page, etc., but nothing past the firewall; I can't ping 4.2.2.2 or the internet gateway. Even after setting up logging on the wifi-to-untrust policy, it does show the attempts (they just time out). 172.31.16.0/24 is the untrusted LAN; it's already NAT'ed but works fine for testing. I can ping this IP from the default vlan but not from vlan2. 192.168.1.0/24 is the trusted main LAN. 10.150.0.0/24 is the isolated wifi LAN on vlan2. The idea is to set up an AP with LAN and guest access (the AP supports multiple SSIDs on different vlans). I know I can set up the Juniper to use different ports for the wifi LAN and use the ProCurve switch to do the vlan separation, but I have never used VLANs on a Juniper firewall and I would like to try it out this way. Here is the complete config file: unset key protection enable set clock timezone -5 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "xxxxxxxxxxxxxxxx" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone id 100 "Wifi" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst unset zone "Wifi" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Untrust" set interface "bgroup0" zone "Trust" set interface "bgroup0.1" tag 2 zone "Wifi" set interface "bgroup1" zone "DMZ" set interface bgroup0 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup0 port ethernet0/5 set interface bgroup0 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 172.31.16.243/24 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup0.1 ip 10.150.0.1/24 set interface bgroup0.1 nat set interface bgroup0.1 mtu 1500 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup0.1 ip manageable set interface ethernet0/0 manage ping set interface ethernet0/1 manage ping 
set interface bgroup0.1 manage ping set interface bgroup0.1 manage telnet set interface bgroup0.1 manage web unset interface bgroup1 manage ping set interface bgroup0 dhcp server service set interface bgroup0.1 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup0.1 dhcp server enable set interface bgroup0 dhcp server option gateway 192.168.1.1 set interface bgroup0 dhcp server option netmask 255.255.255.0 set interface bgroup0 dhcp server option dns1 8.8.8.8 set interface bgroup0.1 dhcp server option lease 1440 set interface bgroup0.1 dhcp server option gateway 10.150.0.1 set interface bgroup0.1 dhcp server option netmask 255.255.255.0 set interface bgroup0.1 dhcp server option dns1 8.8.8.8 set interface bgroup0 dhcp server ip 192.168.1.33 to 192.168.1.126 set interface bgroup0.1 dhcp server ip 10.150.0.50 to 10.150.0.100 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup0.1 dhcp server config next-server-ip set interface "serial0/0" modem settings "USR" init "AT&F" set interface "serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow no-tcp-seq-check set flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Wifi" to "Untrust" "Any" "Any" "ANY" permit log set policy id 2 exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set snmpv3 local-engine id "0162122009006149" set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route set route 0.0.0.0/0 interface ethernet0/0 gateway 172.31.16.1 exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit
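    One way to see where the wifi traffic is being dropped is ScreenOS flow debugging; a minimal sketch from the CLI, where 10.150.0.50 is just an example client address from the DHCP range above:

    set ffilter src-ip 10.150.0.50
    debug flow basic
    clear db
    (generate traffic, e.g. ping 4.2.2.2 from the wifi client)
    get db stream
    undebug all
    unset ffilter 0

    The debug buffer should show whether the packet arriving on bgroup0.1 matches policy id 2, picks up the default route out ethernet0/0, and is source-translated to 172.31.16.243; whichever of those steps is missing is the one to chase.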

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
    Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11. Why use Live Upgrade? Live Upgrade (LU) can significantly reduce downtime associated with patching Live Upgrade avoids dropping to single-user mode for long periods of time during patching Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode; thereby allowing normal system operations to continue with the active BE, while the alternate BE is being patched Activating a newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback. Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents? All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all works together to minimize downtime Ops Center's advanced inventory and reporting features provide assurance that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts Ops Center gives admins control over the boot environment specifications, or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error Preparing to use Live Upgrade-like features in Solaris 11 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementation. Important distinctions: Solaris 11 assumes ZFS root Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm) Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE). Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed Solaris 11 allows pkgs to be installed + upgraded in an alternate BE (e.g. instead of the live system) but it is controlled on a per-pkg basis Boot Environments are activated across a reboot, instead of spending long periods installing + upgrading packages in single-user mode. Fallback to a prior BE is a function of the BE infrastructure (a la beadm). (Generally) Reboot + BE activation can be much, much faster on Solaris 11 Preparing to use Live Upgrade on Solaris 10 Requirements and information you should know: Global Zone Root file-systems must be separate from Solaris Container / Zone filesystems Live Upgrade Pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...) Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade Live Upgrade with ZFS root is far more straightforward than any scheme based on Alternate Boot Environments in slices or temporarily breaking mirrors Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center) UFS root can (technically) be used, but it is significantly more involved (e.g. 
    discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (outside of Ops Center) Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root Recommendation: Start with Ops Center 12c or newer Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 update 9 (and older) -- unrelated to Live Upgrade, but more on the issue below. NOTE: There is no magic! If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area. Live Upgrade Pre-requisite Patches (Solaris 10) Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10. Use the following MOS Search String to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements (Click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary) It is a very good idea to check the document periodically and adapt to its contents accordingly. IMPORTANT: In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time. HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary. Preparing to use Ops Center Discover + Manage (Install + Configure the Ops Center agent in) each Global Zone Recommendation: Begin by using OCDoctor --agent-prereq to determine whether the OS meets OC prerequisites (resolve any issues) See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats) WARNING: Systems running unpatched Solaris 10 update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) OR SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit) X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit) Ops Center 12c (initial release) and 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent): # mkdir -p /etc/opt/sun/oc # echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf # chmod 755 /etc/opt/sun /etc/opt/sun/oc # chmod 644 /etc/opt/sun/oc/zstat.conf # chown -Rh root:sys /etc/opt/sun/oc NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10). Overview: Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned. 
    SIMULATE the deployment of the LU-related pre-patches Observe whether any of the LU-related pre-patches will require a reboot The job details for each Global Zone will advise whether a reboot step will be required ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window Check the job status for each node, resolving any issues found Once the LU-related pre-patches are applied, you can use Ops Center to patch using Live Upgrade on Solaris 10 Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS! (this is the heart of the tip): Create an OS Update Profile containing the patches that make up your standard build Use Solaris Baselines when possible Add other individual patches as needed ACTUALLY deploy the OS Update Profile Specify the appropriate Live Upgrade options, e.g. Synchronize the active BE to the alternate BE before patching Do not activate the BE after patching Check the job status for each node, resolving any issues found Activate the newly patched BE according to your change control process Activate = Reboot to the ABE, making the ABE the new active BE Ops Center does not separate LU activate from reboot, so expect a reboot! Check the job status for each node, resolving any issues found Examples (w/Screenshots) Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only) ABE to be created on ZFS with name S10_12_07REC (Example) Uses the built-in feature to call “lucreate -n S10_12_07REC” behind the scenes if not already present NOTE: Leave “lucreate” params blank (if you do specify options, they will be appended after -n $ABEName) Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script) The Alternate Boot Environment is to be created via a custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used. Operational Profile, which provides the script to create an ABE: Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE Relies on a user-supplied script in the form of an Operational Profile Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing! EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_1207REC NOTE: OC special variable is $ABEName Boot Environment Profile, which references the Operational Profile Script = Operational Profile on this screen Refers to Operational Profile shown in the previous section The user-supplied S10_Create_BE Operational Profile will be run The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed) Solaris 10 OS Update Profile (to provide the actual patch specifications) Solaris 10 Baseline “Recommended” chosen for “Install” Solaris 10 OS Update Plan (two steps in this case) “Create a Boot Environment” + “Update OS” are chosen. (The equivalent manual Live Upgrade commands are sketched below.) 
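    For orientation, the two-step plan above corresponds roughly to the following manual Live Upgrade sequence (a sketch only; the BE name and patch directory are illustrative, and 142909-17 is simply the kernel patch mentioned earlier):

    # Clone the active ZFS-root boot environment
    lucreate -n S10_1207REC

    # Apply patches to the inactive BE while the system stays in multi-user mode
    luupgrade -t -n S10_1207REC -s /var/tmp/patches 142909-17

    # Mark the new BE as the boot default, then reboot to activate it
    # (use init 6 rather than reboot so the BE switch completes cleanly)
    luactivate S10_1207REC
    init 6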
    Using Ops Center to patch Solaris 11 with Boot Environments (as needed) Create a Solaris 11 OS Update Profile containing the packages that make up your standard build ACTUALLY deploy the Solaris 11 OS Update Profile BE will be created if needed (or you can stipulate no BE) BE name will be auto-generated (if needed), or you may specify a BE name Check the job status for each node, resolving any issues found Check if a BE was created; if so, activate the new BE Activate = Reboot to the BE, making the new BE the active BE Ops Center does not separate BE activate from reboot NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary. (The equivalent manual commands are sketched after the summary below.) Solaris 11: Auto BE Create (as Needed -- let Ops Center decide) BE to be created as needed BE to be named automatically Reboot (if necessary) deferred to a separate step Solaris 11: OS Profile Solaris 11 “entire” chosen for a particular SRU Solaris 11: OS Update Plan (w/BE) “Create a Boot Environment” + “Update OS” are chosen. Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers. For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward. For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straightforward way to patch, and far fewer prerequisites to satisfy in getting there. Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall uptime across all the Solaris systems in your environment. Please let us know what you think! Until next time... \Leon -- Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter
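    As a companion to the Solaris 11 section above, the equivalent manual steps look roughly like this (a sketch only; the BE name S11_SRU is illustrative, and whether a new BE is required depends on the packages being updated):

    # pkg creates a new boot environment automatically when the update needs one
    pkg update --be-name S11_SRU entire

    # Confirm the new BE exists and is set to be active on next boot, then reboot
    beadm list
    beadm activate S11_SRU
    init 6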

    Read the article

< Previous Page | 12 13 14 15 16 17 18  | Next Page >