Search Results

Search found 8785 results on 352 pages for 'bug reporting'.


  • Upcoming Carbon Tax in South Africa

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle. In 2012, the South African National Treasury announced a plan to impose a carbon tax to cut the carbon emissions blamed for climate change. South Africa ranks among the top 20 countries by absolute carbon dioxide emissions, with per-capita emissions in the region of 10 metric tons per annum and over 90% of the country's energy produced by burning fossil fuels. The 40 largest companies in the country are responsible for 207 million tons of carbon dioxide, directly emitting 20 percent of South Africa's carbon output. The legislation, originally scheduled to be in effect from January 2015 to 31 December 2019, has now been delayed to January 2016. It will levy a carbon tax of R120 (US$11) per ton of CO2, rising by 10 percent a year until 2020, while all sectors bar electricity will be able to claim additional relief of at least 10 percent. The South African Treasury has proposed a 60 percent tax-free threshold on emissions for all sectors, including electricity, petroleum, iron, steel and aluminum. Oracle Environmental Accounting and Reporting (EA&R) supports these needs and guarantees consistency across organizations in how data is collected, retained, controlled, consolidated and used in calculating and reporting emissions inventory. EA&R also enables companies to develop an enterprise-wide data view that includes all five key sustainability categories: carbon emissions, energy, water, materials and waste. Thanks to its native integration with Oracle E-Business Suite and JD Edwards EnterpriseOne ERP Financials and Inventory Systems, and its ability to capture environmental data across business silos, Oracle Environmental Accounting and Reporting is uniquely positioned to support a strategic approach to carbon management that drives business value. Sources: African Utility Week, BDlive

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder there are staretlprocess.log and staretl.html files that contain the output of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, depending on the logging level set in the Configuration tool, can give extensive information about the ETL process and can be used to validate that the process completed. To automate the monitoring of these log files, perform the following steps (a sketch of such a log parser appears after the query below): 1. Write a custom application that parses the log file and searches for [ERROR]. In most cases a major [ERROR] can cause the ETL process to fail, so finding this value is worth an alert. 2. Determine the total number of steps in the ETL process and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (the number may differ depending on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it did not make it to the end; either way this is good cause for an alert. 3. Check the last line in the log file. It should indicate that the ETL run completed successfully, for example (results can differ by Reporting Database version): [INFO] (Message) Finished Writing Report. 4. You could also write an Ant script that executes the ETL process with failonerror="true" and sends the results to an external tool to monitor the jobs, to email, or to a database. With each ETL run the log output is appended to the existing log file by default, so I recommend renaming the existing log files before running a new ETL process; that way only entries for the currently running ETL process are recorded in the new log files, and based on these entries alerts can be set up to notify the administrator or DBA. Another way to determine whether the ETL process completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this is in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records an entry for each ETL run along with a start and finish time; if the ETL process has failed, the finish date will be null. Query this table at the time the ETL process is expected to be finished and send an alert if the finish date is null. These are just some options; there are additional ways this can be accomplished based around these two areas, log files or the database.
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               decode(loc, 0, PROCESSNAME, trim(SUBSTR(PROCESSNAME, loc + 1))) PROCESSNAME,
               duration duration
          FROM (SELECT (e.endtime - b.starttime) * 1440 duration,
                       to_char(b.starttime, 'hh24:mi:ss') starttime,
                       to_char(e.endtime, 'hh24:mi:ss') endtime,
                       b.PROCESSNAME,
                       instr(b.PROCESSNAME, ']') loc,
                       b.infotype test_script
                  FROM (SELECT processid, infodate starttime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'BEGIN') b
                 INNER JOIN
                       (SELECT processid, infodate endtime, PROCESSNAME, INFOMSG, INFOTYPE
                          FROM etl_processinfo
                         WHERE processid = (SELECT max(PROCESSID) FROM etl_processinfo)
                           AND infotype = 'END') e
                    ON b.processid = e.processid
                   AND b.PROCESSNAME = e.PROCESSNAME
                 ORDER BY b.starttime)
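    The text above suggests writing a custom application to parse the log; the following is a minimal C# sketch of that idea, covering steps 1 through 3. The log path, the final-step marker (Step 39/39), and the success message are assumptions taken from the description and should be adjusted to your installation and version.

        using System;
        using System.IO;
        using System.Linq;

        class EtlLogMonitor
        {
            static void Main()
            {
                // Assumed locations and markers; adjust for your environment and version.
                string logPath = @"C:\Primavera\log\staretlprocess.log";
                string finalStepMarker = "Step 39/39";
                string successMarker = "Finished Writing Report";

                string[] lines = File.ReadAllLines(logPath);

                // Step 1: alert if any [ERROR] entries were logged.
                bool hasError = lines.Any(l => l.Contains("[ERROR]"));

                // Step 2: alert if the final step never appeared in the log.
                bool reachedFinalStep = lines.Any(l => l.Contains(finalStepMarker));

                // Step 3: alert if the last non-empty line does not report a successful finish.
                string lastLine = lines.LastOrDefault(l => !string.IsNullOrWhiteSpace(l)) ?? string.Empty;
                bool finishedOk = lastLine.Contains(successMarker);

                if (hasError || !reachedFinalStep || !finishedOk)
                {
                    // Replace with your alerting mechanism (email, monitoring tool, database, ...).
                    Console.WriteLine("ETL run needs attention: errors={0}, finalStep={1}, finishedOk={2}",
                                      hasError, reachedFinalStep, finishedOk);
                }
                else
                {
                    Console.WriteLine("ETL run completed successfully.");
                }
            }
        }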

    Read the article

  • Maximum limit of filepointer in php reached and not changeable

    - by mlaug
    I have a server with the current PHP 5.3.x version installed. We are running a really simple and small socket server in PHP that connects to a lot of clients, so we need to raise the open file limit. That has already been done on the server for the user that runs the server:

        # ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 29879
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 8192
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 29879
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    We also compiled PHP with --enable-fd-setsize=8192, yet once in a while we still see the following in our logs:

        [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning: socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date.

    Does anyone know how to configure the Unix server and PHP correctly to get this working? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1

    Read the article

  • Using ReportViewer 9 control in VS 2010

    - by Fermin
    Hi, I am writing an ASP.NET app that uses SQL Server 2005 with an SSRS setup. I want to use the ReportViewer control, but I get an error when using ReportViewer 10 because it needs SSRS 2008. How can I use ReportViewer 9 within my application? I've added a reference to Microsoft.ReportViewer.WebForms.dll version 9 and removed the reference to version 10. My markup is as follows:

        <%@ Register Assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
            Namespace="Microsoft.Reporting.WebForms" TagPrefix="rsweb" %>
        <!-- standard markup -->
        <rsweb:ReportViewer ID="ReportViewer1" runat="server"></rsweb:ReportViewer>

    But when I try to run this I get the following error:

        CS0433: The type 'Microsoft.Reporting.WebForms.ReportViewer' exists in both 'c:\WINDOWS\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\10.0.0.0__b03f5f7f11d50a3a\Microsoft.ReportViewer.WebForms.dll' and 'c:\WINDOWS\assembly\GAC_MSIL\Microsoft.ReportViewer.WebForms\9.0.0.0__b03f5f7f11d50a3a\Microsoft.ReportViewer.WebForms.dll'

    What have I missed? Update: when trying to use ReportViewer 10 I get the following error: "Remote report processing requires Microsoft SQL Server 2008 Reporting Services or later."
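    The CS0433 error means the page is being compiled against both the 9.0 and 10.0 assemblies at once. One common approach, sketched here on the assumption that Visual Studio 2010 left a 10.0 entry in the <assemblies> section of web.config, is to remove that entry so only the version 9 control is compiled in:

        <compilation>
          <assemblies>
            <!-- Assumed leftover reference added by VS 2010; remove it. -->
            <remove assembly="Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
            <!-- Keep only the version 9 control referenced in the markup above. -->
            <add assembly="Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
          </assemblies>
        </compilation>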

    Read the article

  • "Unable To Load Client Print Control" - SSRS Printing problems again

    - by mamorgan1
    Please forgive me, as my head is spinning. I have tried so many solutions to this issue that I'm almost not sure where I am at this point. I currently have this problem in my Production, Test, and Dev environments; for simplicity's sake I will just try to get it working in Dev first. Here is my setup. Database/Reporting Server (same server): Windows Server 2003 SP2, SQL Server 2005 SP3. Development box: Windows 7, Visual Studio 2008 SP1, SQL Server 2008 SP1 (not being used in this case, but included in case it is relevant), Internet Explorer 8. Details: I have a custom ASP.NET application that uses ReportViewer to access reports on my Database/Reporting Server. I am able to connect directly to Report Manager and print with no trouble. When I view source on the page with ReportViewer, it says I am using version 9.0.30729.4402. The class id of the rsclientprint.dll that keeps getting installed to my c:\windows\downloaded program files directory is {41861299-EAB2-4DCC-986C-802AE12AC499}. I have tried taking the rsclientprint.cab file from my Database/Reporting Server and installing it directly on my development box, making sure to unregister the previously installed DLL first, but had no success. I feel like I have read as many solutions as I can, so I turn to you for assistance. Please let me know if I can provide further details that would be helpful. Thanks

    Read the article

  • Overriding windows authentication for a .NET application

    - by JoshReedSchramm
    I have a .NET application where the home page (default.aspx) should be accessible by anyone. There is also a reporting page (reporting.aspx) that I want to secure via Windows authentication and only allow access to a particular set of AD groups. The way my web.config is set up right now, it is securing the reporting page, but the home page also prompts the user for login credentials. If they hit Esc they can continue to the page anyway, so it isn't actually securing it; I just need to prevent it from prompting the user. How do I need to set up my config? Here is what I have now:

        <system.web>
          <authentication mode="Windows" />
          <identity impersonate="true" />
          <authorization>
            <allow roles="BUILTIN\Administrators, DomainName\Manager" />
            <deny users="?" />
          </authorization>
          ...MORE STUFF...
        </system.web>
        <location path="default.aspx">
          <system.web>
            <identity impersonate="false" />
            <authorization>
              <allow users="*"/>
            </authorization>
          </system.web>
        </location>
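    One way to avoid the login prompt on the public page, offered as a sketch rather than a verified fix for this exact setup, is to invert the rules: leave the site-wide <authorization> open to everyone and put the Windows-authentication restriction only on reporting.aspx in a <location> element, so anonymous requests to default.aspx are never challenged:

        <system.web>
          <authentication mode="Windows" />
          <authorization>
            <!-- Site-wide: allow everyone, including anonymous users. -->
            <allow users="*" />
          </authorization>
        </system.web>
        <location path="reporting.aspx">
          <system.web>
            <authorization>
              <!-- Only these AD groups may reach the reporting page. -->
              <allow roles="BUILTIN\Administrators, DomainName\Manager" />
              <deny users="*" />
            </authorization>
          </system.web>
        </location>

    Note that whether the browser prompts at all also depends on IIS: if anonymous access is disabled and only Integrated Windows Authentication is enabled for the application, every request will still be challenged regardless of web.config.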

    Read the article

  • Object model design choice

    - by spinon
    I am currently working on an ASP.NET MVC reporting application in C#. It is a redesign of a PHP application that was initially just thrown together and is now starting to gain some traction, so we are reworking the backend to take a more OO approach. One of the decisions I am wrestling with is how to structure the domain objects. Since 95% of the site is read-only, I am not sure the typical approaches are practical. Should I create domain objects for the primary pieces of the application (ticket, assignment, assignee) and then create static methods off of them to pull the reporting data? Or should I skip that part, create the chart data classes, and hang a get method off of those? It's not a real big application and currently I am the only one developing on it, but I feel torn between the two approaches. I feel the first one is the better choice, but maybe it is overkill given that the majority of use is aggregate reporting. Does anybody have some good insight on why I should go one way or the other?
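    For illustration only, here is a minimal C# sketch of the two options being weighed; the names (Ticket, TicketChartData, ITicketReportRepository) are hypothetical and not taken from the application:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Option 1: domain objects for the primary pieces, with static reporting helpers hung off them.
        public class Ticket
        {
            public int Id { get; set; }
            public string Assignee { get; set; }
            public DateTime OpenedOn { get; set; }

            // Aggregate reporting query exposed from the domain type itself.
            public static IDictionary<string, int> CountByAssignee(IEnumerable<Ticket> tickets)
            {
                return tickets.GroupBy(t => t.Assignee)
                              .ToDictionary(g => g.Key, g => g.Count());
            }
        }

        // Option 2: skip rich domain objects and model only the chart data the views need.
        public class TicketChartData
        {
            public string Label { get; set; }
            public int Value { get; set; }

            // A simple "get" method that returns ready-to-plot rows.
            public static IList<TicketChartData> GetCountByAssignee(ITicketReportRepository repository)
            {
                return repository.CountByAssignee()
                                 .Select(kv => new TicketChartData { Label = kv.Key, Value = kv.Value })
                                 .ToList();
            }
        }

        // Hypothetical data-access seam used by option 2.
        public interface ITicketReportRepository
        {
            IDictionary<string, int> CountByAssignee();
        }

    For a mostly read-only reporting site, option 2 keeps the code closer to what the pages actually consume, while option 1 tends to pay off only if ticket and assignment behavior grows beyond aggregates.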

    Read the article

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations. Background: we have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads); the vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem: we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. PS: we do not currently have the budget or expertise for a data warehouse. The problem: we would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data. I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data, for instance a Visitor_Archive and a Visitor_Small table, where the larger Visitor and Visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads (sending a lead an email, getting a lead's phone number, listing my leads, etc.). The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?

    Read the article

  • Fogbugz Duplicate Cases

    - by LeeHull
    I am using FogBugz free hosting to manage my project bugs. I also have several customers I create custom software for, and I have been using FogBugz to keep everything organized. My question: there are times when a customer sends me an email with a bug, I report it in my system, and they create a case as well. Instead of having two cases for the same bug, I would like to merge or link them together; I would rather not just delete the duplicate. Is there a way to link them together, maybe with a cross-reference, or even merge them?

    Read the article

  • hpricot segfault?

    - by AP257
    Any idea why Hpricot might segfault on this page?

        require 'rubygems'
        require 'hpricot'
        require 'open-uri'

        trial_url = 'http://www.controlled-trials.com/ISRCTN56071145/'
        doc = Hpricot(open(trial_url))

    This produces:

        /Users/ap257/.gem/ruby/1.8/gems/hpricot-0.8.2/lib/hpricot/parse.rb:33: [BUG] Segmentation fault
        ruby 1.8.7 (2009-06-08 patchlevel 173) [universal-darwin10.0]
        Abort trap

    Could anyone advise on how I could get around this, or whether it's a bug in Hpricot that I should report somewhere? Thanks!

    Read the article

  • MapKit/Location Manager crashes app when unloading view

    - by AppGolfer
    I had a bug where my application crashed with EXC_BAD_ACCESS when I hit the back button on my navigation bar and the view unloaded that had a MapKit map view (mapView) and used the location manager. I tried for days to fix the bug and finally came up with a fix for anyone who comes across this problem. Add this code to your dealloc:

        - (void)dealloc {
            mapView.delegate = nil;
            locationManager.delegate = nil;
            [mapView release];
            [locationManager release];
            [super dealloc];
        }

    Read the article

  • Hide an access violation on another application

    - by Fernando
    Hi, I have an application that sometimes causes an access violation on exit. This is quite unpredictable, and all attempts to locate the bug have been unsuccessful so far. The bug is harmless, as no data is lost, so I was thinking it might be possible to just hide it. Is it possible to have another app launch the buggy one and catch the access violation exception if it occurs? If yes, how? Thanks in advance!
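    As a hedged sketch of the launcher idea in C#: another process cannot catch the access violation itself, but it can start the buggy application, wait for it to exit, and treat the access-violation exit code (0xC0000005 on Windows) as a known-harmless outcome. The executable name is a placeholder:

        using System;
        using System.Diagnostics;

        class BuggyAppLauncher
        {
            // NTSTATUS code Windows reports when a process dies from an access violation.
            const int AccessViolationExitCode = unchecked((int)0xC0000005);

            static int Main()
            {
                // Placeholder path; point this at the real application.
                Process process = Process.Start("BuggyApp.exe");
                process.WaitForExit();

                if (process.ExitCode == AccessViolationExitCode)
                {
                    // Swallow the known-harmless crash on exit and report success.
                    Console.WriteLine("Suppressed access violation on exit.");
                    return 0;
                }

                return process.ExitCode;
            }
        }

    This only hides the exit code; whether Windows shows its own crash dialog is controlled separately (for example by Windows Error Reporting settings), so that dialog may still need to be suppressed at the OS level.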

    Read the article

  • integrating two systems through email

    - by Martin
    I want to integrate our bug tracker and our support system through email. The bug tracker can send an email on every change to bugs/features. I want to download those emails, parse them, and create a formatted email that the support system can understand (i.e., the subject could be "Issue #4128 fixed"). What is the simplest way to accomplish this using C++ or C#?
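    A minimal C# sketch of one way to do it, assuming the bug-tracker notifications are available as plain-text files (for example .eml files dropped by a mail rule), since the .NET Framework has no built-in POP3/IMAP client. The SMTP host, folder path, addresses, and subject pattern are placeholders, not details from either system:

        using System;
        using System.IO;
        using System.Net.Mail;
        using System.Text.RegularExpressions;

        class BugMailBridge
        {
            static void Main()
            {
                // Placeholder locations and addresses; adjust to your environment.
                const string inboxFolder = @"C:\BugTrackerMail";
                const string supportAddress = "support-system@example.com";
                const string fromAddress = "bugtracker-bridge@example.com";

                var smtp = new SmtpClient("smtp.example.com");

                foreach (string file in Directory.GetFiles(inboxFolder, "*.eml"))
                {
                    string body = File.ReadAllText(file);

                    // Pull the issue number and status out of the notification text
                    // (the pattern is an assumption about the bug tracker's wording).
                    Match match = Regex.Match(body, @"Issue\s+#(?<id>\d+).*?status:\s*(?<status>\w+)",
                                              RegexOptions.Singleline | RegexOptions.IgnoreCase);
                    if (!match.Success)
                        continue;

                    // Re-emit the change in the subject format the support system expects.
                    string subject = string.Format("Issue #{0} {1}",
                                                   match.Groups["id"].Value,
                                                   match.Groups["status"].Value.ToLowerInvariant());

                    smtp.Send(new MailMessage(fromAddress, supportAddress, subject, body));
                    File.Delete(file); // mark the notification as processed
                }
            }
        }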

    Read the article

  • JQuery UI popup elements not positioning correctly

    - by Okku
    I am using both the jQuery UI Dialog and jQuery UI Autocomplete widgets, and both show the same erroneous behavior when they pop up: the position is always 0,0. I have tried passing different position arguments when opening the dialog, but none seems to help. Any clues? Is this a bug in the position calculation in jQuery UI, or some CSS bug? Versions are jQuery 1.4.2 and jQuery UI 1.8.0.

    Read the article

  • Is there a workaround for the broken closeOnEscape in jQuery UI Dialog

    - by Darryl Hein
    It looks like there is a bug in jQuery UI Dialog where the closeOnEscape option doesn't work properly, such that Escape will still close the dialog even when closeOnEscape is set to false. One possible solution is to unbind the keydown handler on the overlay, but this doesn't seem to work. Is there another solution that does? Here is the link to the bug and a fix for 1.6, but 1.5.3 is still broken: http://dev.jqueryui.com/ticket/3253

    Read the article

  • What is CTabFolderPageManager?

    - by Alexey Romanov
    This Eclipse bug mentions something called CTabFolderPageManager, which seems like it could be useful for me. However, searching for CTabFolderPageManager doesn't give any results. Is it a future feature for SWT (given that the bug report is from 2007, this would be surprising)? Or did I just fail at searching for it?

    Read the article

  • upgrading apk file in android market

    - by Aswan
    Hi folks, I sell an application on Android Market; I published it as version 1 and then upgraded it to version 2. After the upgrade I found one bug in version 2, which I have now resolved. Is it possible to upgrade the application again with version 2? Thanks in advance, Aswan

    Read the article

  • Pessimistic locking is not working with Query API

    - by Reddy
    I have the following query:

        List esns = session.createQuery("from Pool e where e.status = :status order by uuid asc")
                           .setString("status", "AVAILABLE")
                           .setMaxResults(n)
                           .setLockMode("e", LockMode.PESSIMISTIC_WRITE)
                           .list();

    However, it is not generating a SELECT ... FOR UPDATE statement, and simultaneous updates are happening. I am using Hibernate 3.5.2, which has a known bug in the Criteria API; is the same bug present in the Query API as well, or am I doing something wrong?

    Read the article

  • Migrate MySQL database to Sql Server

    - by RPK
    I recently encountered a problem in my production database due to a MySQL bug affecting the TimeStamp column. To avoid any further inconvenience, I want to migrate the database to SQL Server. Is there any FREE tool to easily and reliably migrate the data and table structures?

    Read the article

  • how to increase the content of the data in itemrenderer of the list?

    - by maniohile
    Hi, I'm binding data from the backend into a List, and I want to increase the height of each item's content. I set height='some value' on the List and its height increased, but when I tried to increase the content size to 32 it did not change, even though I gave the item renderer (an HBox) a height of 100%. Is this an inbuilt bug in Flex, or a bug on my end? Thanks in advance.

    Read the article

  • UITableView crashes when adding 2 objects to an empty store, with sections (NSRangeException)

    - by likejy
    UITableView crashes in endUpdates, called by the managed object context's save method, when: 1. the Core Data store is empty, 2. the fetched results controller is configured to show sections, and 3. two (or more) managed objects have been added to the store. When I searched for this situation on Google, I found an exactly matching error described in this post. It looks like a bug (NSRangeException). Is there any solution to avoid it?

    Read the article

  • Virtual Disk Degraded

    - by TheD
    There is a physical DC with a RAID 1 mirror of two physical disks, 500 GB each. Dell Server Administrator is installed on the DC and reports that both physical disks are fine, online, in a good state, etc., on a PERC S300 RAID controller: Physical Disk 0:0 and Physical Disk 0:1. At the same time, however, it reports that a virtual disk is degraded; what exactly does this mean? The virtual disk indicates its state is a RAID 1 layout, with device name Windows Disk 0. If my understanding is correct, then when you drill down into Dell OpenManage the virtual disk should have both physical disks as members, since it is a mirror. Is that correct? However, when I drill down into the virtual disk, it only shows Physical Disk 0:0 included in Virtual Disk 1. I'm very new to server-side/RAID management (just covering while our server tech is away!). Thanks!

    Read the article
