Search Results

Search found 2558 results on 103 pages for 'significant digits'.

Page 72/103 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Oracle HCM Cloud Customer Q&A with WAXIE Sanitary Supply

    - by HCM-Oracle
    At this year’s Oracle HCM User Group (OHUG) Global conference, we had the opportunity to sit down with Oracle HCM Cloud customers for a short Q&A. We got to hear about what brought them to the OHUG conference, some of the benefits they are receiving from their Oracle HCM Cloud solutions, and advice they would give other businesses looking to move to the cloud.  Below is a discussion we had with Melissa Halverson, Benefits & HRIS Manager at WAXIE Sanitary Supply.  Q: What made you attend the OHUG Global Conference this year? Halverson: The biggest reason is networking. It allows me to connect with others in the Oracle HCM Cloud community. I was able to speak at the HCM Cloud SIG (Special Interest Group) on the first day and share my experiences as well as hear the experiences of other Oracle HCM Cloud users. It also allows me to get face-time with key people within Oracle.  Q: What Oracle HCM solutions are you currently using? Halverson: Global HR, Benefits, Workforce Compensation, and Performance Management. Q: Do you plan to invest further in Oracle HCM? Halverson: Yes, we are interested in Time and Labor. We would also like to get Recruiting at some point in the future. Q: What would you say is the most significant benefit you’ve realized from your use of Oracle HCM solutions? Halverson: First and foremost would be process improvement. Before we had Oracle HCM Cloud we relied on a paper process where something as simple as an employee address change required changes to be made manually in 9 different systems. Obviously that was extremely inefficient, but also increased the likelihood of errors being made.  The other huge benefit we have seen was in making information visible to the people that need it. Prior to implementing Oracle HCM Cloud, it was very difficult for anyone to access and make use of the information in our systems. Now, we can provide this information to those who need it to make better decisions.  Q: What advice would you give an organization looking to move their HR systems to the cloud? Halverson: One thing I think many organizations don't spend enough time doing is thoroughly vetting their implementation partner. I believe you should be vetting your implementation partner as much as you did the system itself. Also, manpower is so important. Involve as large a team as possible because you don’t want to get stuck having too few bodies to help out. And set realistic time frames. Biting off more than you can chew will inevitably result in failure. Having a phased approach is always best rather than trying to do everything at once. Thanks for the tips Melissa. Enjoy the rest of the conference!

    Read the article

  • It's All In The Cloud

    - by Natalia Rachelson
    People turned out in droves for Steve Miranda's Apps Cloud General Session. Steve, as engaging as ever, covered our Apps strategy in the cloud and reinforced that Oracle has a complete set of cloud services, including:
      • Human Capital Management
      • Talent Management
      • Sales and Marketing
      • Customer Service and Support
      • Financial Management
      • Procurement, Sourcing, and Inventory
      • Project Portfolio Management
      • Governance, Risk, and Compliance
    ... all delivered on top of the Social, Platform, and Common Infrastructure. Steve talked about Fusion being the centerpiece of our Cloud Services. The fact that Fusion is 100 percent standards based is a big, big deal! In addition, our ERP Cloud Service is the most complete cloud service on the market. And email marketing is dead -- social marketing is where the action is. It's also where Oracle is investing heavily from a Sales & Marketing Cloud perspective. Steve covered the strategic acquisitions Oracle has made to enhance our organic Cloud offering. Specifically, Oracle bought RightNow to make our Customer Service and Support Cloud service complete. We also bought Taleo to add Recruiting and Learning capabilities to our Talent Management Cloud. Steve talked about our customers and how they are benefiting from a variety of our Cloud Services. Red Robin is driving lower labor and food costs with Oracle ERP Cloud Service. He used Elizabeth Arden as the profile customer for HCM and Talent Management Service, UBS for HCM and Talent Management Service, and Brocade for Talent Management. All these customers are benefiting from a comprehensive and fully integrated HR platform that aligns compensation with performance and enhances workforce motivation and retention. At the same time, Hitachi Data Systems is using Oracle Taleo Performance Management Cloud to recruit the right competencies, pinpoint areas of improvement, and develop and monitor employee goals to support the global account organization. KLM and Overstock.com are gaining the benefits of Oracle's Customer Service and Support Service from RightNow by better engaging and serving customer needs online and through call centers. And last but not least, Graco and Key Energy are leveraging mobility features and sales forecasting and territory management capabilities within the Oracle Sales and Marketing Service. They expect to gain better visibility into sales information, drive more efficient sales campaigns, and empower their sales force with the data they need to make sales. Overall, Oracle Apps Cloud Services are enjoying significant momentum in the marketplace. Steve projected an air of confidence and enthusiasm, highlighting Oracle's latest successes with Cloud services.

    Read the article

  • Is MongoDB a good choice or not for my application?

    - by shubham
    I have a reporting application which stores reports in XML format as received from the source (the XML schema is not defined; it can be any format), and those reports contain keys and values, e.g. jobid and setid are keys for one type of report, userid and groupId for another type, etc. The type of keys that can be referred to from a document is determined by the namespaces used in the XML doc. These keys are stored on the basis of the namespace used in the XML document. For example, if a tag in an XML fragment uses namespace="myspace1", then I have keys A and B for myspace1 stored in another table. The application fetches those keys from that table for this namespace, looks for their values in the XML doc, and stores them in another table along with a pointer to the XML document (the Id of a record storing the complete XML document in a cell). Use cases:
      • When the user queries for a key and value, I return the document or set of documents that have those key/value pairs.
      • When the user queries for a certain key and provides the name of a pre-stored XSLT, I fetch the set of documents fulfilling that criteria and convert the XML to HTML with the specified XSLT.
      • When the user asks for a particular fragment of a doc, it can fetch a subset from a particular document as well.
      • When the user queries for the top x values of a certain key, I return the set of documents that have the top x values of that key.
    I am using a DB2 database for its support of XML along with relational capabilities. That makes it easier for me to run XPath expressions, fetch values of keys, and aggregate a set of documents fulfilling a criteria, all on the database side. Problems: DB2 stores XML docs of up to 2GB in size. Retrieval is very slow. If something involves many documents, then it takes significant time for things to show up in the browser, and the user has to wait. Can MongoDB help in this case, as it is document oriented? Can I do XML-related XPath queries and document transformations on the DB side? Or is it OK to use both in such a case?
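
    For what it's worth, MongoDB has no XPath or XSLT support on the server side, so a common pattern is to extract the namespace-specific key/value pairs at ingest time, store them alongside the raw XML, and index and query those extracted fields. The sketch below is only an illustration of that idea using the Node.js MongoDB driver in TypeScript; the collection and field names (reports, keys, rawXml) are hypothetical, not taken from the question.

      import { MongoClient } from "mongodb";

      interface ReportDoc {
        namespace: string;             // e.g. "myspace1"
        keys: Record<string, string>;  // extracted key/value pairs, e.g. { jobid: "42", setid: "7" }
        rawXml: string;                // original XML kept verbatim for later transformation
      }

      async function demo(): Promise<void> {
        const client = await MongoClient.connect("mongodb://localhost:27017");
        const reports = client.db("reporting").collection<ReportDoc>("reports");

        // Index the extracted keys so key/value lookups are fast.
        await reports.createIndex({ namespace: 1, "keys.jobid": 1 });

        // Use case 1: all documents having a given key/value pair.
        const byKey = await reports.find({ "keys.jobid": "42" }).toArray();

        // Use case 4: documents with the top x values of a key (x = 10 here).
        // Note this is a string sort; store numeric keys as numbers for numeric ordering.
        const topTen = await reports
          .find({ namespace: "myspace1" })
          .sort({ "keys.jobid": -1 })
          .limit(10)
          .toArray();

        console.log(byKey.length, topTen.length);
        await client.close();
      }

      demo().catch(console.error);

    XPath evaluation and XSLT-to-HTML conversion would then have to happen in application code (or stay in DB2), since MongoDB itself only queries the extracted fields.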

    Read the article

  • TechEd 2012: Windows 8 And Metro

    - by Tim Murphy
    Windows 8 is here (or at least very close) and that was the main feature of this morning's keynote. Antoine LeBlond started off by apologizing to the IT professionals since he planned on showing code. I'm not sure if IT Pros are that easily confused or why you would need such a disclaimer. Developers do real work, IT Pros just play with toys (just kidding). The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets. Specifically, I liked the AppBar features that we have become used to with Windows Phone and some of the gesture features. Even though they have been available on other platforms before, I think Microsoft really got them right. Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key. My jaw dropped through the floor seeing a feature-rich OS boot off of a thumb drive. WOW! I also can't wait to get rid of dual booting just to run Hyper-V images when developing. The morning continued with a session on Metro XAML development with Tim Heuer. While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012. Finding out that Blend is now integrated with VS2012, after working with them as separate applications, was an encouraging start. Moving on to Metro, he introduced the nugget that WinRT is Async everywhere. How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform. Thankfully he followed that up with a couple of new keywords, await and async, that eliminate a lot of the plumbing that has been required in the past for asynchronous transactions. Tim also related that since the Metro framework is relatively small and most apps will use a significant amount of it, the entire surface is referenced by default. This is a contrast to adding namespaces and assemblies one after another as we normally do. This was such a power-packed session that I can't detail it all here, so here is the teaser list:
      • New icons in VS2012 for extension methods
      • Emulator/simulator testing features for gestures
      • Portable class libraries
      • XAML no longer managed code
      • And so much more …
    del.icio.us Tags: Windows 8,Metro,Tim Heuer,XAML,Windows Phone,Hyper-V,Antoine LeBlond,TechEd,TechEd 2012,Visual Studio 2012,Visual Studio
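
    The await/async pairing mentioned above comes from C#/WinRT, but the idea translates directly to other languages. Purely as an illustration (TypeScript here, not the WinRT API from the session, and with a made-up readFileAsync helper), compare the callback-style plumbing with the awaited version of the same work:

      // Callback style: the plumbing the new keywords are meant to eliminate.
      function loadLengthWithCallbacks(path: string, done: (len: number) => void, fail: (e: Error) => void): void {
        readFileAsync(path)
          .then((text) => done(text.length))
          .catch((e) => fail(e as Error));
      }

      // async/await style: the same asynchronous work written as straight-line code.
      async function loadLength(path: string): Promise<number> {
        const text = await readFileAsync(path); // suspends without blocking the caller
        return text.length;
      }

      // Stand-in asynchronous operation so the sketch is self-contained.
      function readFileAsync(path: string): Promise<string> {
        return Promise.resolve(`contents of ${path}`);
      }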

    Read the article

  • The State of the Internet -- Retail Edition

    - by David Dorf
    Over at Business Insider, there's a great presentation on the State of the Internet done in the Mary Meeker style. It's 138 slides, so I took the liberty of condensing it down to the 15 slides that directly apply to the retail industry. However, I strongly recommend looking at the entire deck when you have time. And while you're at it, Business Insider just launched a retail portal that's dedicated to retail industry content. Please check it out as well. My take-aways are below after the slide show. [Source: Business Insider] Here are a few things I took away from the statistics:
      • Facebook and Twitter are in their infancy. While all retailers should have social programs, search is still the driver and therefore should receive the lion's share of investment. Facebook referrals are up 92% year-over-year, but Google still does 80% of the referrals.
      • E-commerce continues to grow at breakneck speed, but in-store commerce is still king. Stores are not showrooms yet. And social commerce pure-plays like Gilt and Groupon are tiny but worthy of some attention.
      • There are more smartphones than PCs on the internet, and the disparity will continue to grow. PC growth will be flat and tablet use will continue to grow. Mobile accounts for 12% of all internet traffic.
      • A quarter of smartphone sales come from China, so anyone with a presence there better have a strong mobile strategy.
      • 38% of people have used their smartphone to make a purchase, and many use their smartphones inside stores. Smartphones are a critical consumer tool for shopping.
      • Mobile is starting to drive significant traffic to e-commerce sites, especially tablets. Tablet strategies are crucial for retailers.
      • Mobile payments from the likes of PayPal and Square are growing quickly. It will be interesting to see how NFC plays in this area.
      • Mobile operating systems are losing market share to iOS and Android. I wonder if Microsoft can finally make a dent?
      • The internet is being dominated by mobile devices, and retailers had better have a strong mobile strategy to meet consumer demand.

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. SOAs (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
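
    To make the granularity trade-off concrete, here is a small illustrative sketch (TypeScript, with hypothetical names; this is not taken from Udi Dahan's material or from nServiceBus) of the two extremes the question describes, plus one common compromise of batching several intent-revealing commands into a single coarse message on the wire:

      // Coarse-grained, Excel-style: one command carries whatever fields happened to change.
      interface UpdateCustomer {
        kind: "UpdateCustomer";
        customerId: string;
        changes: Partial<{ preferred: boolean; address: string; maritalStatus: string }>;
      }

      // Task-based, intent-revealing: each unit of user work is its own command.
      interface MakeCustomerPreferred {
        kind: "MakeCustomerPreferred";
        customerId: string;
      }

      interface RelocateCustomer {
        kind: "RelocateCustomer";
        customerId: string;
        newAddress: string;
      }

      type Command = UpdateCustomer | MakeCustomerPreferred | RelocateCustomer;

      // One way to keep wire messages coarse while keeping intent fine-grained:
      // several intent-revealing commands travel together in a single batch message.
      interface CommandBatch {
        correlationId: string;
        commands: Command[];
      }

      function send(batch: CommandBatch): void {
        // A real service bus endpoint would go here; this sketch just logs the batch.
        console.log(`sending ${batch.commands.length} commands`, batch);
      }

      send({
        correlationId: "b7c2",
        commands: [
          { kind: "MakeCustomerPreferred", customerId: "42" },
          { kind: "RelocateCustomer", customerId: "42", newAddress: "1 Main St" },
        ],
      });

    Batching like this is one frequently suggested way to reconcile chatty, intent-level commands with the coarse messages an SOA prefers, though whether it fits depends on the transport and on how the handlers are written.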

    Read the article

  • JS closures - Passing a function to a child, how should the shared object be accessed

    - by slicedtoad
    I have a design and am wondering what the appropriate way to access variables is. I'll demonstrate with this example since I can't seem to describe it better than the title. Term is an object representing a bunch of time data (a repeating duration of time defined by a bunch of attributes). Term has some print functionality but does not implement the print functions itself; rather, they are passed in as anonymous functions by the parent. This would be similar to how shaders can be passed to a renderer rather than defined by the renderer. A container (let's call it Box) has a Schedule object that can understand and use Term objects. Box creates Term objects and passes them to Schedule as required. Box also defines the print functions stored in Term. A print function usually takes an argument and uses it to return a string based on that argument and Term's internal data. Sometimes the print function could also use data stored in Schedule, though. I'm calling this data shared. So, the question is, what is the best way to access this shared data? I have a lot of options since JS has closures, and I'm not familiar enough to know if I should be using them or avoiding them in this case. Options:
      • Create a local "reference" (term used lightly) to the shared data (the data is not a primitive) when defining the print function, by accessing the shared data through Schedule from Box. Example:
        var schedule = function() {
            var sched = Schedule();
            var t1 = Term(function(x) { // Term.print()
                return (x + sched.data).format();
            });
        };
      • Bind it to Term explicitly (pass it in Term's constructor or something), or bind it in Sched after Box passes it, and then access it as an attribute of Term.
      • Pass it in at the same time x is passed to the print function (from sched). This is the most familiar way for me, but it doesn't feel right given JS's closure ability.
      • Do something weird like bind some context and arguments to print.
    I'm hoping the correct answer isn't purely subjective. If it is, then I guess the answer is just "do whatever works". But I feel like there are some significant differences between the approaches that could have a large impact when stretched beyond my small example.
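
    For comparison with the closure example above, the second option (binding the shared data to Term explicitly) might look roughly like the following sketch. This is TypeScript and the Term and Schedule shapes are stand-ins invented for illustration, not the poster's actual constructors:

      // Hypothetical stand-ins, only to show the explicit-binding option.
      class Schedule {
        data = 5;
      }

      class Term {
        // The shared data is handed to Term explicitly instead of being closed over.
        constructor(
          private shared: Schedule,
          private print: (x: number, shared: Schedule) => string
        ) {}

        printTerm(x: number): string {
          return this.print(x, this.shared);
        }
      }

      const sched = new Schedule();
      const t1 = new Term(sched, (x, shared) => `${x + shared.data}`);
      console.log(t1.printTerm(2)); // "7"

    The trade-off is explicit wiring (easier to trace and test) versus the closure version's brevity; both keep Term itself ignorant of where the shared data lives.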

    Read the article

  • Meet This Year's Most Impressive WebCenter Customer Projects

    - by Michael Snow
    Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects
    Oracle OpenWorld Session – Tuesday Oct. 2, 2012: Moscone West, Room 3001 at 11:45AM
    This year, the Oracle Excellence awards had an amazing number of nominations. Each group at Oracle had a challenge to select the most innovative and game-changing nominations for their winners. The Fusion Middleware Innovation Awards, jointly sponsored by Oracle, OAUG, QUEST, ODTUG, IOUG, AUSOUG and UKOUG, honor organizations using Oracle Fusion Middleware to deliver unique business value. This year, the awards will recognize customers across eight distinct categories:
      • Oracle Exalogic
      • Cloud Application Foundation
      • Service Integration (SOA) and BPM
      • WebCenter
      • Identity Management
      • Data Integration
      • Application Development Framework and Fusion Development
      • Business Analytics (BI, EPM and Exalytics)
    The nominations included the pioneers in our customer base using these solutions in innovative ways to achieve significant business value. Tune in this afternoon for a listing of the WebCenter winners.

    Read the article

  • JDK bug migration milestone: JIRA now the system of record

    - by darcy
    I'm pleased to announce that the OpenJDK bug database migration project has reached a significant milestone: the JDK has switched from the legacy Sun "bugtraq" system to a new internal JIRA instance as the system of record for our bug tracking. This completes the initial phase of the previously described plan of getting OpenJDK onto an externally visible and writable bug tracker. The identities contained in the current system include recognized OpenJDK contributors. The bug migration effort to date has been sizable in multiple dimensions. There are around 140,000 distinct issues imported into the JDK project of the JIRA instance, nearly 165,000 if backport issues to track multiple-release information are included. Separately, the Code Tools OpenJDK project has its own JIRA project populated with several thousand existing bugs. Once the OpenJDK JIRA instance is externalized, approved OpenJDK projects will be able to request the creation of a JIRA project for issue tracking. There are many differences in the schema used to model bugs between the legacy bug system and the schema for the new JIRA projects. We've favored simplifications to the existing system where possible and, after much discussion, we've settled on five main states for the OpenJDK JIRA projects:
      • New
      • Open
      • In progress
      • Resolved
      • Closed
    The Open and In-progress states can have a substate Understanding field set to track whether the issue has its "Cause Known" or "Fix understood". In the closed state, a Verification field can indicate whether a fix has been verified, unverified, or if the fix has failed. At the moment, there will be very little externally visible difference between JIRA for OpenJDK and the legacy system it replaces. One difference is that bug numbers for newly filed issues in the JIRA JDK project will be 8000000 and above. If you are working with JDK Hg repositories, update any local copies of jcheck to the latest version, which recognizes this expanded bug range. (The bug numbers of existing issues have been preserved on the import into JIRA.) Relatively soon, we plan for the pages published on bugs.sun.com to be generated from information in JIRA rather than in the legacy system. When this occurs, there will be some differences in the page display and the terminology used will be revised to reflect JIRA usage, such as referring to the "component/subcomponent" of an issue rather than its "category". The exact timing of this transition will be announced when it is known. We don't currently have a firm timeline for externalization of the JIRA system. Updates will be provided as they become available. However, that is unlikely to happen before JavaOne next week!

    Read the article

  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes having code in classes, possibly with operator overloading, or templates and the oh-so-great STL is obviously a better way. Sometimes use of a simple C function pointer is much, much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++." I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs C++; this question is all about C++11. C++11 introduces what I think are significant changes to how C++ works, but it has introduced many special cases that change how different features behave in different circumstances, placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11. Much like when you say C nowadays, you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have. Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. What good means is that it follows many of the rules to create a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I don't think so. I'm not sure that the changes introduced in C++11 are making the language better. So the question is: Why would I learn C++11? Update: My original question in short was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So, I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like a "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument. P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. 7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms. I'll grant that in some situations for some retailers there might be an opportunity to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy that had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (it's just hidden). Try this at home: Choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at a store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout. But they add capabilities to POS, not replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and provides greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • Interesting Topics in Comp. Sci. for New Students?

    - by SoulBeaver
    I hope this is the right forum to ask this question. Last Friday I was in a discussion with my professors about the students' lack of motivation and interest in the field of Computer Science. All of the students are enrolled, but through questionnaires and other questions that my professor posed it was revealed that over 90% of all enrolled students are just in it for the reward of getting a job sometime in the future (since it's a growing field with high job potential). I asked my professor for permission to take over the first couple of lectures and try to motivate, interest and inspire students for the field of Computer Science and programming in particular (this is the Intro to Programming course). This request was granted and I now have a week to come up with a lecture topic for my professor's five groups. My main goal isn't to teach; I just want to get students to be as interested in the field as I am. I want to show them what's possible, what awesome magical things have been done in the field, the future we are heading towards using programming and Comp. Sci. Therefore, I would like to pose this question: I have a few topics, materials and sample projects that I would like to talk about:
      -- Grace Hopper (It is my hope to interest the female programmers in the class. There are never more than two or three per group and they, more than males, are prone to jumping ship and abandoning Comp. Sci.)
      -- The Singularity Institute
      -- Alan Turing
      -- Robotics
      -- Programming not as a chore or a must, but the idea that we are, at our core, the nexus to which anything anybody does in the digital world is connected. We are the problem solvers; we assemble all the parts together and we are the ones that, essentially, make the vision a reality.
      -- Give them an idea for a programming project which, through the help of the professor, could be significant to every student (I want students to not only feel interested in the topic, but they should feel important, that what they do here makes a difference)
    Do you have interesting topics worthy of discussion, something I can tell the students which they can get interested in? How would you approach the lecture? If you had 90 minutes' worth of time to try and get students interested in the project, what would you do?

    Read the article

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This gave organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes. Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the "pooling" letter of credit draw method to requesting on a "grant-by-grant" basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported Agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (The NIH had planned to fully transition to this new method by the end of fiscal 2014; however, the impact to institutions was deemed to be significant enough that a reprieve was recently granted.) In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:
      1. Federal Award Identification Number on the Proposal and Award Profile
      2. Letter of credit fields on contract lines to support award-basis draws and comply with Federal close-out mandates
      3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
      4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
      5. Added Subaccount field and contract info to be displayed on the LOC summary page
      6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
      7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching
      8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close
    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government. For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

    Read the article

  • How do I take responsibility for my code when colleague makes unnecessary improvements without notice?

    - by Jesslyn
    One of my teammates is a jack of all trades in our IT shop and I respect his insight. However, sometimes he reviews my code (he's second in command to our team leader, so that's expected) without a heads up. So sometimes he reviews my changes before they complete the end goal and makes changes right away... and has even broken my work once. Other times, he has made unnecessary improvements to some of my code that is 3+ months old. This annoys me for a few reasons:
      • I am not always given a chance to fix my mistakes
      • He has not taken the time to ask me what I was trying to accomplish when he is confused, which could affect his testing or changes
      • I don't always think his code is readable
    Deadlines are not an issue and his current workload doesn't require any work in my projects other than reviewing my code changes. Anyways, I have told him in the past to please keep me posted if he sees something in my work that he wants to change so that I could take ownership of my code (maybe I should have said "shortcomings") and he's not been responsive. I fear that I may come off as aggressive when I ask him to explain his changes to me. He's just a quiet person who keeps to himself, but his actions continue. I don't want to banish him from making code changes (not like I could), because we are a team--but I want to do my part to help our team. Added clarifications:
      • We share one development branch. I do not wait until all my changes complete a single task because I risk losing some significant work--so I make sure my changes build and do not break anything.
      • My concern is that my teammate doesn't explain the reason or purpose behind his changes. I don't think he should need my blessing, but if we disagree on an approach I thought it would be best to discuss the pros and cons and make a decision once we both understand what is going on.
      • I have not discussed this with our team lead yet as I would prefer to resolve personal disagreements without getting management involved unless it is necessary. Since my concern seemed more of a personal issue than a threat to our work, I chose to not bother the team lead.
      • I am working on code review process ideas--to help promote the benefits of more organized code reviews without making it all about my pet peeves.

    Read the article

  • How to interview a natural scientist for a dev position?

    - by Silas
    I already did some interviews for my company, mostly computer scientists for dev positions but also some testers and project managers. Now I have to fill a vacancy in our research group within the R&D department (side note: “research” means that we try to solve problems in our professional domain/market niche using software in research projects together with universities, other companies, research centres and end user organisations. It’s not computer science research; we’re not going to solve the P=NP problem). Now we have invited a guy holding an MSc in chemistry (with a lot of physics in his CV, too), who has never had a computer science lesson. I already talked with him for about half an hour at a local university’s career days and there’s no doubt the guy is smart. Also, his marks are excellent and he graduated with distinction. For his BSc he needed to teach himself programming in Mathematica and told me believably that he liked programming a lot. He also solved some physical chemistry problem that I probably don’t understand using his own software, implemented in Mathematica, for his MSc thesis. It includes a GUI and a notable size of 8,000 LoC. He seems to be very attracted by what we’re doing in our research group, and to be honest it’s quite difficult for an SME like us to get good people. I am also very interested in hiring him since he could assist me in writing project proposals, reports, doing presentations and so on. He would probably fit our team, too. The only question left is: How can I check if he will get the programming skills he needs to do software implementation in our projects, since this will be a significant part of the job? Of course I will ask him what it is that fascinates him about programming. I’ll also ask how he proceeded to write his natural science software and how he structured it. I’ll ask about how he managed to obtain the skills and information about software development he needed. But is there something more I could ask? Something more concrete perhaps? Should I ask him to explain his Mathematica solution? To be clear: I’m not looking for knowledge in a particular language or technology stack. We’re a .NET shop in product development but I want to have a free choice for our research projects. So I’m interested in the meta-competence of being able to learn whatever is actually needed. I hope this question is answerable and not open-ended, since I would really like to know if there is a default way to check for the ability to pick up further programming skills on the job. If something is not clear to you please give me some comments and let me improve my question.

    Read the article

  • Survey: Your Plans for Adopting New Firefox Releases?

    - by Steven Chan (Oracle Development)
    Mozilla is committing to releasing new Firefox versions every six weeks. Mozilla released Firefox 5 this week. With this release, Mozilla states that Firefox 4 is End-of-Life and will not receive any additional security updates. In a comment thread posted to Mike Kaply's blog article discussing these new Firefox policies, Asa Dotzler from Mozilla stated: ... Enterprise has never been (and I'll argue, shouldn't be) a focus of ours. Until we run out of people who don't have sysadmins and enterprise deployment teams looking out for them, I can't imagine why we'd focus at all on the kinds of environments you care so much about. In a later comment, he added: ... A minute spent making a corporate user happy can better be spent making many regular users happy. I'd much rather Mozilla spending its limited resources looking out for the billions of users that don't have enterprise support systems already taking care of them. Asa then confirmed that every new Firefox release will put the previous one into End-of-Life: As for John's concern, "By the time I validate Firefox 5, what guarantee would I have that Firefox 5 won't go EOL when Firefox 6 is released?" He has the opposite of guarantees that won't happen. He has my promise that it will happen. Firefox 6 will be the EOL of Firefox 5. And Firefox 7 will be the EOL for Firefox 6. He added: "You're basically saying you don't care about corporations." Yes, I'm basically saying that I don't care about making Firefox enterprise friendly. Kev Needham, Channel Manager at Mozilla, later stated to PC Mag: The Web and Web browsers continue to evolve rapidly. Mozilla's focus is on providing users with the best Web experience possible, and Firefox needs to evolve at the pace the Web's users and developers expect. By releasing small, focused updates more often, we are able to deliver improved security and stability even as we introduce new features, which is better for our users, and for the Web. We recognize that this shift may not be compatible with a large organization's IT Policy and understand that it is challenging to organizations that have effort-intensive certification policies. However, our development process is geared toward delivering products that support the Web as it is today, while innovating and building future Web capabilities. Tying Firefox product development to an organizational process we do not control would make it difficult for us to continue to innovate for our users and the betterment of the Web. Your feedback needed for E-Business Suite certifications: Mozilla's new support policy has significant implications for enterprise users of Firefox with Oracle E-Business Suite. We are reviewing the implications for our certification and support policies for Firefox now. It would be very helpful if you could let me know about your organisation's plans for Firefox in light of this new information. Please feel free to drop me a private email, or post a comment here if that's appropriate.

    Read the article

  • Seven Accounting Changes for 2010

    - by Theresa Hickman
    I read a very interesting article called Seven Accounting Changes That Will Affect Your 2010 Annual Report from SmartPros that nicely summarized how 2010 annual financial statements will be impacted.  Here’s a Reader’s Digest version of the changes: 1.  Changes to revenue recognition if you sell bundled products with multiple deliverables: Old Rule: You needed to objectively establish the “fair value” of each bundled item. So if you sold a dishwasher plus installation and could not establish the fair value of the installation, you might have to delay recognizing revenue of the dishwasher days or weeks later until it was installed. New Rule (ASU 2009-13): “Objective” proof of each service or good is no longer required; you can simply estimate the selling price of the installation and warranty. So the dishwasher vendor can recognize the dishwasher revenue immediately at the point of sale without waiting a few weeks for the installation. Then they can recognize the estimated value of the installation after it is complete. 2.  Changes to revenue recognition for devices with embedded software: Old Rule: Hardware devices with embedded software, such as the iPhone, had to follow stringent software revrec rules. This forced Apple to recognize iPhone revenues over two years, the period of time that software updates were provided. New Rule (ASU 2009-14): Software revrec rules no longer apply to these devices with embedded software; these devices can now follow ASU 2009-13. This allows vendors, such as Apple, to recognize revenue sooner. 3.  Fair value disclosures: Companies (both public and private) now need to spend extra time gathering, summarizing, and disclosing information about items measured at fair value, such as significant transfers in and out of Level 1(quoted market price), Level 2 (valuation based on observable markets), and Level 3 (valuations based on internal information). 4.  Consolidation of variable interest entities (a.k.a special purpose entities): Consolidation rules for variable interest entities now require a qualitative, not quantitative, analysis to determine the primary beneficiary. Instead of simply looking at the percentage of voting interests, the primary beneficiary could have less than the majority interests as long as it has the power to direct the activities and absorb any losses.  5.  XBRL: Starting in June 2011, all U.S. public companies are required to file financial statements to the SEC using XBRL. Note: Oracle supports XBRL reporting. 6.  Non-GAAP financial disclosures: Companies that report non-GAAP measures of performance, such as EBITDA in SEC filings, have more flexibility.  The new interpretations can be found here: http://www.sec.gov/divisions/corpfin/guidance/nongaapinterp.htm.  7.  Loss contingencies disclosures: Companies should expect additional scrutiny of their loss disclosures, such as those from litigation losses, in their annual financial statements. The SEC wants more disclosures about loss contingencies sooner instead of after the cases are settled.

    Read the article

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
    Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption. There is a new readonly property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the 'rekeydate' property that shows the last time we added a new data encryption key.
      $ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
      NAME                   PROPERTY       VALUE                  SOURCE
      rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
      rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
      rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local
    The above example shows that we have both changed the wrapping key and added new data encryption keys since the filesystem was initially created. If we haven't changed a wrapping key then it will be the same as the creation date. It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead. Another change that I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (i.e. the size given in the encryption property). In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length. This restriction was unnecessary and made it impossible to select encryption property values with key lengths 128 and 192 when the wrapping key was stored in the Oracle Key Manager. This is because currently the Oracle Key Manager stores AES 256 bit keys only. Now with Solaris 11.1 this restriction has been removed. There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase': since this is a key generated internally to libzfs, and to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length of the encryption property. The pam_zfs_key module has been updated so that it allows you to specify encryption=off. There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we could support Zones on Shared Storage where all the datasets in the ZFS pool are encrypted, as I discussed in my previous blog entry. If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract have your support rep log an enhancement request).

    Read the article

  • How to properly do weapon cool-down reload timer in multi-player laggy environment?

    - by John Murdoch
    I want to handle weapon cool-down timers in a fair and predictable way on both client and server. Situation:
      • Multiple clients connected to a server, which is doing hit detection / physics
      • Clients have different latency for their connections to the server, ranging from 50ms to 500ms
      • They want to shoot weapons with fairly long reload/cool-down times (assume exactly 10 seconds)
      • It is important that they get to shoot these weapons close to the cool-down time, as if some clients manage to shoot sooner than others (either because they are "early" or the others are "late") they gain a significant advantage
      • I need to show the time remaining for reload on the player's screen
      • Clients can have clocks which are flat-out wrong (bad timezones, etc.)
    What I'm currently doing to deal with latency:
      • Client collects server-side state in a history, tagged with server timestamps
      • Client assesses his time difference with server time: behindServerTimeNs = (behindServerTimeNs + (System.nanoTime() - receivedState.getServerTimeNs())) / 2
      • Client renders all state received from the server 200 ms behind his current time, adjusted by what he believes his time difference with server time is (whether due to wrong clocks or lag). If he has server states on both sides of that calculated time, he (mostly LERP) interpolates between them; if not, then he (LERP) extrapolates.
      • No other client-side prediction of movement (e.g., to make his vehicle seem more responsive) is done so far, but maybe it will be added later
    So how do I properly add weapon reload timers? My first idea would be for the server to send each player the time when his reload will be done with each world state update; the client then adjusts it for the clock difference and thus can estimate when the reload will be finished in client time (perhaps also allowing for the latency that the shoot message from client to server will incur?), and if the user mashes the "shoot" button after (or perhaps even slightly before?) that time, send the shoot event. The server would get the shoot event and consider the time the shot was made to be the server time when it was received. It would then discard it if it is nowhere near reload time, execute it immediately if it is past reload time, and hold it for a few physics cycles until reload is done in case it was received a bit early. It does all seem a bit convoluted, and I'm wondering whether it will work (e.g., whether it won't be the case that players with lower ping get better reload rates), and whether there are more elegant solutions to this problem.
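
    The server-side half of the idea described above is fairly compact. Here is a minimal sketch (TypeScript with hypothetical names; the question's own snippet is Java, so this is illustration only, not the poster's code) of a server-authoritative reload check with a small tolerance window for shots that arrive slightly early:

      // Server-authoritative reload bookkeeping, per weapon.
      const RELOAD_MS = 10_000;       // the 10-second cool-down from the question
      const EARLY_TOLERANCE_MS = 100; // hold shots that arrive slightly early instead of discarding them

      interface WeaponState {
        reloadDoneAt: number; // server time (ms) when the weapon may fire again
      }

      type ShotResult = "fired" | "queued" | "rejected";

      function handleShootMessage(w: WeaponState, serverNow: number): ShotResult {
        const early = w.reloadDoneAt - serverNow;
        if (early <= 0) {
          // Reload finished: fire and start the next cool-down.
          w.reloadDoneAt = serverNow + RELOAD_MS;
          return "fired";
        }
        if (early <= EARLY_TOLERANCE_MS) {
          // Arrived a few physics ticks early: hold it until the reload completes.
          return "queued";
        }
        // Nowhere near reload time: ignore (likely a badly desynced or cheating client).
        return "rejected";
      }

      // Each world-state update would also carry reloadDoneAt so the client can
      // render a countdown after adjusting for its estimated clock offset.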

    Read the article

  • What to do when you inherit an unmaintainable codebase?

    - by GordonM
    I'm currently working at a company with 2 other PHP developers aside from me, and 1 junior developer. The senior developer who originally built the system we're all working on has resigned and will only be here for a matter of weeks. The other developer, who is the only other guy who knows anything about the system, is unhappy here and is looking for a new job. I'm in very real danger of being left behind as the only experienced developer on this codebase. Since I've joined this company I've tried to push for better coding standards, project documentation, etc., and I do think I've made some headway, but the vast majority of the code is simply unmaintainable and uncommented. A lot of this has to do with the need to get things done fast at points in the project before I joined, but now the technical debt is enormous, even with the two developers who do understand the system on board. Without them, it will simply be impossible to do anything with it. The senior developer is working on trying to at least comment all his code before he leaves, but I think the codebase is simply too vast to properly document in the remaining time. Besides, when he does comment, it still doesn't make things as clear as it could. If the system was better organized and documented I could probably start refactoring it incrementally, but the whole thing is so tightly coupled that it's very difficult to make any changes in one module without having unintended knock-on effects in other modules. Naturally, there are no unit tests either, and I honestly don't think this codebase could possibly be unit tested anyway given how it's implemented. There also never seems to be enough time to get things done even with 3 developers and 1 junior developer. With one developer and one junior, neither of which had significant input into the early design of the system, I don't see how we could possibly keep the current system working, implement new features as needed, and develop a replacement for the current codebase that is better organized. Is there an approach I can take to cope with this situation, or should I be getting my own CV in order as well at this point? If it was just me and the junior developer who would be left, I'd go for the latter option almost without question. However, there's a team of front-end developers and content managers as well, and I'm worried about what would become of them if I left and put them in a position where there would be no developers at all. The department might just be closed down altogether under such circumstances, and then I'd have their unemployment on my conscience as well!

    Read the article

  • Alert: It is No Longer 1982, So Why is CRM Still There?

    - by Mike Stiles
    Hot off the heels of Oracle’s recent LinkedIn integration announcement and Oracle Marketing Cloud Interact 2014, the Oracle Social Cloud is preparing for another big event, the CRM Evolution conference and exhibition in NYC. The role of social channels in customer engagement continues to grow, and social customer engagement will be a significant theme at the conference. According to Paul Greenberg, CRM Evolution Conference Chair, author, and Managing Principal at The 56 Group, social channels have become so pervasive that there is no longer a clear reason to make a distinction between “social CRM” and traditional CRM systems. Why not? Because social is a communication hub every bit as vital and used as the phone or email. What makes social different is that if you think of it as a phone, it’s a party line. That means customer interactions are far from secret, and social connections are listening in by the hundreds, hearing whether their friend is having a positive or negative experience with your brand. According to a Mention.com study, 76% of brand mentions are neutral, neither positive nor negative. These mentions fail to get much notice. So think what that means about the remaining 24% of mentions. They’re standing out, because a verdict, about you, is being rendered in them, usually with emotion. Suddenly, where the R of CRM has been lip service and somewhat expendable in the past, “relationship” takes on new meaning, seriousness, and urgency. Remarkably, legions of brands still approach CRM as if it were 1982. Today, brands must provide customer experiences the customer actually likes (how dare they expect such things). They must intimately know not only their customers, but each customer, because technology now makes personalized experiences possible. That’s why the Oracle Social Cloud has been so mission-oriented about seamlessly integrating social with sales, marketing and customer service interactions so the enterprise can have an actionable 360-degree view of the customer. It’s the key to that customer-centricity we hear so much about these days. If you’re attending CRM Evolution, Chris Moody, Director of Product Marketing for the Oracle Marketing Cloud, will show you how unified customer experiences and enhanced customer centricity will help you attract and keep ideal customers and brand advocates (“The Pursuit of Customer-Centricity” Aug 19 at 2:45p ET) And Meg Bear, Group Vice President for the Oracle Social Cloud, will sit on a panel talking about “terms of engagement” and the ways tech can now enhance your interactions with customers (Aug 20 at 10a ET). If you can’t be there, we’ll be doing our live-tweeting thing from the @oraclesocial handle, so make sure you’re a faithful follower. You’ll notice NOBODY is writing about the wisdom of “company-centricity.” Now is the time to bring your customer relationship management into the socially connected age. @mikestilesPhoto: Sue Pizarro, freeimages.com

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once the write() call completes, the data is in the file, whereas for fprintf() it may take a while for the file to get updated to reflect the output. This results in a significant performance difference - the write() version works at disk speed. The following is a program to test this:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        static double s_time;

        /* Record the start time (gethrtime() returns nanoseconds on Solaris). */
        void starttime() { s_time = 1.0 * gethrtime(); }

        /* Report throughput for the elapsed interval and reset the start time. */
        void endtime(long its)
        {
          double e_time = 1.0 * gethrtime();
          printf("Time per iteration %5.2f MB/s\n", (1.0 * its) / (e_time - s_time * 1.0) * 1000);
          s_time = 1.0 * gethrtime();
        }

        #define SIZE 10*1024*1024

        /* One byte per write() call - unbuffered, every call is a system call. */
        void test_write()
        {
          starttime();
          int file = open("./test.dat", O_WRONLY|O_CREAT, S_IWGRP|S_IWOTH|S_IWUSR);
          for (int i = 0; i < SIZE; i++) { write(file, "a", 1); }
          close(file);
          endtime(SIZE);
        }

        /* One byte per fprintf() call - buffered in user space by the C library. */
        void test_fprintf()
        {
          starttime();
          FILE* file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++) { fprintf(file, "a"); }
          fclose(file);
          endtime(SIZE);
        }

        /* fprintf() plus fflush() on every iteration - forces the buffer out each time. */
        void test_flush()
        {
          starttime();
          FILE* file = fopen("./test.dat", "w");
          for (int i = 0; i < SIZE; i++) { fprintf(file, "a"); fflush(file); }
          fclose(file);
          endtime(SIZE);
        }

        int main()
        {
          test_write();
          test_fprintf();
          test_flush();
        }

    Compiling and running, I get 0.2MB/s for write() and 6MB/s for fprintf() - a large difference. There are three tests in this example; the third test uses fprintf() and fflush(), which is equivalent to write() both in performance and in functionality. This leads to the suggestion that fprintf() (and the other buffered I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.
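    As a side note - an addition here, not part of the original post - the size of the stdio buffer can itself be tuned with the standard setvbuf() call. A minimal sketch, assuming the same one-byte-at-a-time workload as above:

        #include <stdio.h>

        int main()
        {
          FILE *file = fopen("./test.dat", "w");
          if (file == NULL) { return 1; }

          /* Give the stream a 1 MB, fully buffered area; setvbuf() must be
             called before the first read or write on the stream. */
          static char buffer[1024 * 1024];
          if (setvbuf(file, buffer, _IOFBF, sizeof(buffer)) != 0) { return 1; }

          for (int i = 0; i < 10 * 1024 * 1024; i++)
          {
            fprintf(file, "a");
          }

          fflush(file);   /* force any remaining buffered data out */
          fclose(file);
          return 0;
        }

    Whether a larger buffer helps depends on the workload; the point is simply that the buffering behaviour the post relies on is under the programmer's control.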

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed.  Key tasks can be performed with visual drag-and-drop design, menu-driven entry, and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open, and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems. Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed, using JDeveloper, Oracle BPM Composer, and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site. Combining Oracle BPM with these open source tools provides a comprehensive, simple, and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats, either XML or JSON, with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool.  XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements created to support the data access.   Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form filling library.  Again, minimal Java coding is needed to associate the XML source content with the PDF named fields.  The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates.  This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset. The developer-level video is designed as a tutorial with segments, hands-on demonstrations, and reviews.  This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge.  Most actions are menu driven, while Java coding is limited to configuring values and parameters and to performing builds and deployments from JDeveloper and Oracle WLS.   Additional Oracle online training resources on Oracle BPM and WLS can be referenced for other normal delivery aspects such as user management and application deployment.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities.  This note highlights one aspect of the best practices for CPA import, when large numbers of CPAs - in excess of several hundred - are required to be maintained within the B2B repository. Symptoms: The import of a CPA is usually a 2-step process, namely creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then using b2bimport to load it into the B2B repository.  The commands are provided below: ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command can reach ~30 secs per operation. So this can add up to a significant amount of time if there is a need to import hundreds of CPAs into a production system within a limited downtime/maintenance window.  Remedy: In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import of each individual CPA in an empty repository. Since this is done in an empty repository, the time taken for each import should be reasonable.  After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file.  If this single file with all the partner entries is then imported into a loaded repository, the total time taken to import all the CPAs should see a dramatic reduction. (A command-level sketch of this workflow is shown below.) Results: Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the import of each individual entry takes ~30 secs. So, if we had to import another 100 partners, the individual imports would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners in a staging environment beforehand, the import takes about ~5 mins. The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10. Summary: The following diagram summarizes the entire approach and process. Acknowledgements: The material posted here has been compiled with help from the B2B Engineering and Product Management teams.
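    A minimal command-level sketch of the staging workflow described above. The b2bcpaimport and b2bimport invocations are taken from the note itself; the b2bexport invocation and its flags are an assumption about the standard ant-b2b-util.xml export target and should be verified against your B2B 11g installation:

        # 1. In the empty staging repository, import each CPA individually (repeat per CPA)
        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpa_001.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

        # 2. Export the fully loaded staging repository into a single metadata file (flags assumed)
        ant -f ant-b2b-util.xml b2bexport -Dexportfile="<Path to all_partners.zip>"

        # 3. During the maintenance window, import that one file into the production repository
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to all_partners.zip>" -Doverwrite=true

    The gain comes entirely from replacing many individual b2bimport runs against a loaded repository with a single bulk import of the staging export.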

    Read the article

  • Determining cause of random latency/loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post regarding my issue, because I'm not sure what is relevant. Prior to the end of September my websites all loaded quickly, in almost all cases. Loading time wasn't usually more than a few seconds. However, since the end of September I have noticed a big increase in page loading times. In some cases pages were taking 30 seconds or more to load. I do have a remote monitoring service monitoring some of the sites as well, and the image below shows the response times over the past month. The response times at the beginning of this graph reflect what was usual prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph. The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end were fine. I don't believe the problem is my home internet provider, because all other websites load without a problem. The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times, since it is physically farther away. I should also point out that in some traceroute tests I've done, it seems like the first connection to the server takes the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. So, what could be causing this problem, and what steps can I take to resolve it or at least narrow down the problem? Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long. So it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back. I am also aware that there are things I can do to speed up page loading times, like reducing the number of CSS/JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well. Any help or advice is much appreciated.
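    One way to narrow down where the time is going - offered here as an editorial suggestion, not something from the original question - is to time the phases of a single request with curl's standard -w option, which reports DNS lookup, TCP connect, and time-to-first-byte separately; the long WAIT phase described above corresponds roughly to time_starttransfer minus time_connect. The URL is a placeholder:

        # Repeat several times to catch both the fast and the slow cases
        curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" http://www.example.com/

    If time_connect stays small while time_starttransfer is intermittently very large, the delay is most likely being spent on the server side (or in something sitting in front of it) rather than in the network path.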

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >