Search Results

Search found 870 results on 35 pages for 'allocation'.

Page 26 of 35

  • Fast, cross-platform timer?

    - by dsimcha
    I'm looking to improve the D garbage collector by adding some heuristics to avoid garbage collection runs that are unlikely to result in significant freeing. One heuristic I'd like to add is that GC should not run more than once per X amount of time (maybe once per second or so). To do this I need a timer with the following properties:

    - It must be able to grab the correct time with minimal overhead. Calling core.stdc.time takes roughly as long as a small memory allocation, so it's not a good option.
    - Ideally, it should be cross-platform (both OS and CPU), for maintenance simplicity.
    - Super-high resolution isn't terribly important. If the times are accurate to maybe 1/4 of a second, that's good enough.
    - It must work in a multithreaded/multi-CPU context. The x86 rdtsc instruction won't work.
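
    A minimal sketch of such a gate, given here as a C++ std::chrono baseline for comparison (gc_allowed and the one-second window are illustrative names, not part of the D runtime): steady_clock is monotonic, cross-platform, valid across threads and CPUs, and a call typically costs only tens of nanoseconds.

        #include <atomic>
        #include <chrono>

        // Sketch: allow at most one GC run per second, using a monotonic,
        // cross-platform clock (unlike rdtsc, safe across threads/CPUs).
        bool gc_allowed() {
            using clock = std::chrono::steady_clock;
            static std::atomic<clock::rep> last{0};
            const clock::rep now = clock::now().time_since_epoch().count();
            const clock::rep window =
                std::chrono::duration_cast<clock::duration>(
                    std::chrono::seconds(1)).count();
            clock::rep prev = last.load(std::memory_order_relaxed);
            // The CAS makes concurrent callers race for one "yes" per window.
            return now - prev >= window &&
                   last.compare_exchange_strong(prev, now,
                                                std::memory_order_relaxed);
        }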

    Read the article

  • malloc unable to assign memory + doesn't warn

    - by sraddhaj
        char *str = NULL;
        strsave(s, str, n + 1);
        printf("%s", str - n);

    When I debug this code in gdb, I find that str is still 0x0, i.e. NULL, and that my code is not catching the failed memory allocation: the str == NULL perror branch never executes. Any idea why?

        void strsave(char *s, char *str, int n)
        {
            str = (char *)malloc(sizeof(char) * n);
            if (str == NULL)
                perror("failed to allocate memory");
            while (*s) {
                *str++ = *s++;
            }
            *str = '\0';
        }
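
    The usual diagnosis, offered here as a sketch rather than the poster's accepted answer: C passes pointers by value, so the malloc result assigned inside strsave never reaches the caller's str, which stays NULL. The allocation itself almost certainly succeeded, which is why the perror branch never fires. One hedged fix is to return the new buffer instead:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Return the copy so the caller actually receives the new buffer. */
        char *strsave(const char *s)
        {
            char *str = (char *)malloc(strlen(s) + 1);
            if (str == NULL) {
                perror("failed to allocate memory");
                return NULL;
            }
            strcpy(str, s);
            return str;
        }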

    Read the article

  • Question related to iPhone autorelease usage

    - by user524331
    Could someone please help me understand how allocation and memory management are handled in the following scenario? I'm giving a pseudo-code example; the question that's troubling me is inline below:

        @interface First {
            NSDecimalNumber *number1;
        }
        @implementation First
        ...
        - (void)dealloc {
            [number1 release];
            [super dealloc];
        }

        @interface Second {
            NSDecimalNumber *number2;
        }
        @implementation Second
        ...
        - (First *)check {
            First *firstObject = [[[First alloc] init] autorelease];
            number1 = [[NSDecimalNumber alloc] initWithInteger:0];
            // do I need to autorelease number1 as well?
            return firstObject;
        }

    Read the article

  • Retrieving Gtk::Widget's relative position: get_allocation() doesn't work

    - by a-v
    I need to retrieve the position of a Gtk::Widget relative to its parent, a Gtk::Table. Most sources (e.g. http://library.gnome.org/devel/gtk-faq/stable/x642.html) say that one needs to call Gtk::Widget::get_allocation(). However, the returned Gtk::Allocation object always contains x = -1, y = -1, width = 1, height = 1. I have to note that this happens before the Gtk::Table object is actually exposed and rendered. A call to show_all_children() or check_resize(), which I would expect to recalculate child widget geometry, doesn't help. What am I doing wrong? Thanks in advance.
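
    One hedged workaround, assuming gtkmm with C++11 lambda support: the allocation only becomes meaningful after the widget has actually been size-allocated (i.e. once the table is realized and laid out), so read it from the size-allocate signal rather than querying before that point.

        #include <gtkmm.h>

        // Sketch: get_allocation() reports {-1, -1, 1, 1} until layout has
        // run; this callback fires with real geometry once it has.
        void watch_allocation(Gtk::Widget& child) {
            child.signal_size_allocate().connect([](Gtk::Allocation& a) {
                g_print("widget at (%d,%d), size %dx%d\n",
                        a.get_x(), a.get_y(), a.get_width(), a.get_height());
            });
        }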

    Read the article

  • How to get notified of modification in the memory in Linux

    - by Song Yuan
    In a userspace program on Linux, I get a piece of memory via allocation from the heap, and then the pointer is distributed to a lot of other components running in other threads. I would like to get notified when that piece of memory is modified. I could of course develop a custom userspace solution for other components to use when they modify the memory. The problem in my case is that these are legacy components and they can write to the memory on many occasions. So I'm wondering whether there is an API similar to inotify (get notified when a file is changed), or some other approach, to get notified when a piece of memory is changed. I considered using mmap and inotify, which obviously won't work if the changes are not flushed. Any suggestions are appreciated :-)
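
    One userspace technique that may fit, sketched here with the usual caveats about doing work inside a SIGSEGV handler: write-protect the region with mprotect and treat the resulting fault as the notification.

        #include <csignal>
        #include <cstdio>
        #include <cstdlib>
        #include <sys/mman.h>
        #include <unistd.h>

        static char*  watched;
        static size_t watched_len;

        // The first write to the protected page faults; the handler is the
        // "notification", after which the write is allowed to proceed.
        static void on_segv(int, siginfo_t* si, void*) {
            char* addr = static_cast<char*>(si->si_addr);
            if (addr >= watched && addr < watched + watched_len) {
                // record "memory modified" here (async-signal-safely), then
                // unprotect so the faulting write can complete; re-protect
                // later to catch the next modification
                mprotect(watched, watched_len, PROT_READ | PROT_WRITE);
            } else {
                std::abort();  // an unrelated, genuine crash
            }
        }

        int main() {
            watched_len = sysconf(_SC_PAGESIZE);
            watched = static_cast<char*>(mmap(nullptr, watched_len, PROT_READ,
                                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
            struct sigaction sa = {};
            sa.sa_sigaction = on_segv;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, nullptr);

            watched[0] = 1;  // faults once, then succeeds
            std::printf("modification observed\n");
        }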

    Read the article

  • C++: How do you set an array of pointers to null in an initialiser-list-like way?

    - by boredjoe
    I am aware you cannot use an initialiser list for an array. However, I have heard of ways to set an array of pointers to NULL in a way that is similar to an initialiser list; I am not certain how this is done. I have also heard that a pointer is set to NULL by default, though I do not know if this is guaranteed by the C++ standard. I am also not sure whether initialising through the new operator, compared to normal allocation, makes a difference.
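
    For the record: pointers with automatic storage are not null by default; only objects with static storage duration are guaranteed zero-initialised. The technique usually meant here is value-initialisation with empty braces (or an empty pair of parentheses after new[]), sketched below:

        #include <cstddef>

        int* global_ptrs[4];           // static storage: guaranteed all null

        int main() {
            int* a[4] = {};            // all four elements become null
            int* b[4] = { NULL };      // first explicit, rest value-initialised
            int** c = new int*[4]();   // the trailing () zeroes the heap array
            int** d = new int*[4];     // no (): elements are indeterminate!
            delete[] c;
            delete[] d;
        }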

    Read the article

  • C++ STL: Array vs Vector: Raw element accessing performance

    - by oh boy
    I'm building an interpreter, and as I'm aiming for raw speed this time, every clock cycle matters in this (raw) case. Do you have any experience or information on which of the two is faster: vector or array? All that matters is the speed with which I can access an element (opcode fetching); I don't care about inserting, allocation, sorting, etc. I'm going to go out on a limb now and say: arrays are at least a bit faster than vectors in terms of accessing an element i. It seems really logical to me. With vectors you have all that safety and bookkeeping overhead which doesn't exist for arrays. (Why) Am I wrong? No, I can't ignore the performance difference, even if it is tiny; I have already optimized and minimized every other part of the VM which executes the opcodes :)
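
    One factual point that may settle it: std::vector::operator[] does no bounds checking (only at() does), so with optimisation enabled element access compiles down to the same indexed load as a raw array. A small sketch of the comparison:

        #include <cstddef>
        #include <vector>

        int sum_raw(const int* a, std::size_t n) {
            int s = 0;
            for (std::size_t i = 0; i < n; ++i) s += a[i];        // indexed load
            return s;
        }

        int sum_vec(const std::vector<int>& v) {
            int s = 0;
            for (std::size_t i = 0; i < v.size(); ++i) s += v[i]; // unchecked too
            return s;
        }
        // Only v.at(i) adds a range check and can throw std::out_of_range.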

    Read the article

  • How to limit the memory of an OS X program? Neither ulimit -v nor -m is working

    - by hectorpal
    My programs run out of memory about half of the time I run them. Under Linux I can set a hard limit on the available memory using ulimit -v mem-in-kbytes. Actually, I use ulimit -S -v mem-in-kbytes, so I get a proper memory allocation failure in the program and I can abort. But ulimit is not working in OS X 10.6. I've tried the -s and -m options, and they are not working. In 2008 there was some discussion about the same issue on MacRumors, but nobody proposed a good alternative. There should be a way for a program to learn it's spending too much memory, or to set a limit through the OS.
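
    If the kernel won't enforce the limit, one hedged fallback is for the program to police itself via getrusage. A caveat worth double-checking: ru_maxrss is reported in bytes on OS X but in kilobytes on Linux, one of several portability wrinkles here.

        #include <sys/resource.h>
        #include <cstddef>
        #include <cstdio>
        #include <cstdlib>

        // Abort with a clear message instead of thrashing once the process
        // has grown past limit_bytes (call at allocation-heavy points).
        void abort_if_over(std::size_t limit_bytes) {
            struct rusage ru;
            if (getrusage(RUSAGE_SELF, &ru) == 0 &&
                static_cast<std::size_t>(ru.ru_maxrss) > limit_bytes) {
                std::fprintf(stderr, "memory limit exceeded, aborting\n");
                std::abort();
            }
        }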

    Read the article

  • Difference between local and instance variables in ruby

    - by fflyer05
    I am working on a script that creates several fairly complex nested hash data structures and then iterates through them, conditionally creating database records. This is a standalone script using ActiveRecord. After several minutes of running I noticed a significant lag in server responsiveness, and discovered that the script, despite being niced to +19, was enjoying a steady 85-90% of total server memory. In this case I am using instance variables simply for readability: it helps to know what is going to be re-used outside of the loop vs. what won't. Is there a reason not to use instance variables when they are not needed? Are there differences in memory allocation and management between local and instance variables? Would it help to set @variable = nil when it's no longer needed?

    Read the article

  • Using temporary arrays to cut down on code - inefficient?

    - by tommaisey
    I'm new to C++ (and SO) so sorry if this is obvious. I've started using temporary arrays in my code to cut down on repetition and to make it easier to do the same thing to multiple objects. So instead of:

        MyObject obj1, obj2, obj3, obj4;
        obj1.doSomming(arg);
        obj2.doSomming(arg);
        obj3.doSomming(arg);
        obj4.doSomming(arg);

    I'm doing:

        MyObject obj1, obj2, obj3, obj4;
        MyObject* objs[] = { &obj1, &obj2, &obj3, &obj4 };
        for (int i = 0; i != 4; ++i)
            objs[i]->doSomming(arg);

    Is this detrimental to performance? Like, does it cause unnecessary memory allocation? Is it good practice? Thanks.
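
    For what it's worth, a sketch of why this shouldn't cost anything (MyObject and doSomming stand in for the asker's types): the pointer array has automatic storage, so no heap allocation happens at all, and an optimiser will usually unroll the loop into the same four direct calls.

        #include <initializer_list>

        struct MyObject { void doSomming(int) {} };  // stand-in for the real class

        int main() {
            MyObject obj1, obj2, obj3, obj4;
            int arg = 42;
            // The list of pointers lives on the stack: no dynamic allocation.
            for (MyObject* p : { &obj1, &obj2, &obj3, &obj4 })
                p->doSomming(arg);
        }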

    Read the article

  • JRockit R28/JRockit Mission Control 4.0 is out!

    - by Marcus Hirt
    The next major release of JRockit is finally out! Here are some highlights:

    - Includes the all-new JRockit Flight Recorder – supersedes the old JRockit Runtime Analyser. The new flight recorder is inspired by the "black box" in airplanes. It uses a highly efficient recording engine and thread-local buffers to capture data about the runtime and the application running in the JVM. It can be configured to always be on, so that whenever anything "interesting" happens, data can be dumped for some time back. Think of it as your own personal profiling time machine.
    - Automatic shortest-path calculation in Memleak – no longer any need for running around in circles when trying to find your way back to a thread root from an instance.
    - Memleak can now show class loader related information and split graphs on a per class loader basis.
    - More easily configured JMX agent – the default port for both the RMI Registry and the RMI Server can be configured, and is by default the same, allowing easier configuration of firewalls.
    - Up to 64 GB (was 4 GB) compressed references.
    - Per-thread allocation profiling in the Management Console.
    - Native Memory Tracking – it is now possible to track native memory allocations with very high resolution. The information can be accessed either using JRCMD or the dedicated Native Memory Tracking experimental plug-in for the Management Console (alas, only available for the upcoming 4.0.1 release).
    - JRockit can now produce heap dumps in HPROF format.
    - Cooperative suspension – JRockit no longer uses system signals for stopping threads, which could lead to hangs if signals were lost or blocked (for example on bad NFS shares). Now threads check periodically to see if they are suspended.
    - VPAT/Section 508 compliant JRMC – greatly improved keyboard navigation and screen reader support.

    See New and Noteworthy for more information. JRockit Mission Control 4.0.0 can be downloaded from here: http://www.oracle.com/technology/software/products/jrockit/index.html

    <shameless ad> There is even a book to go with JRMC 4.0.0/JRockit R28! http://www.packtpub.com/oracle-jrockit-the-definitive-guide/book/ </shameless ad>

    Read the article

  • Multi-Threaded Application vs. Single Threaded Application

    Why would we use a multi-threaded application vs. a single-threaded application? First we must define multithreading. Multithreading is a feature of an operating system that allows programs to run subcomponents, or threads, in parallel. Typically most applications only need one thread, because they do not perform time-consuming tasks. The use of multiple threads allows an application to distribute long-running tasks so that they can be executed in parallel. This gives the user the perceived appearance that the application is working faster, because while one thread is waiting on an I/O process the remaining tasks can make use of the available CPU. This allows working threads to execute in tandem so that they can be completed sooner.

    Multithreading benefits:

    - Improved responsiveness: users usually report improved responsiveness compared to single-threaded applications.
    - Faster applications: multiple threads can lead to improved application performance.
    - Prioritization: threads can be assigned a priority, which allows higher-priority tasks to take precedence over lower-priority tasks.

    Single-threading benefits:

    - Programming and debugging: these activities are easier compared to multithreaded applications due to the reduced complexity.
    - Less overhead: threads add overhead to an application.

    When developing multi-threaded applications, the following must be considered. Deadlocks occur when two threads each hold a monitor that the other one requires: each task is blocking the other, and both tasks are waiting for the other monitor to be released, which forces the application to hang. Resource allocation is used to prevent deadlocks: the system determines whether approving a resource request would leave the system in an unsafe state (one that could result in a deadlock), and only approves requests that lead to safe states. Thread synchronization is used when multiple threads use the same instance of an object; the threads accessing the object can be locked and synchronized so that each task interacts with the shared object one at a time.
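
    A minimal illustration of that last point, sketched in C++ (the article itself is language-neutral): the mutex plays the role of the monitor, letting the two threads touch the shared object one at a time.

        #include <mutex>
        #include <thread>

        int counter = 0;      // shared state
        std::mutex monitor;   // guards counter

        void work() {
            for (int i = 0; i < 100000; ++i) {
                std::lock_guard<std::mutex> lock(monitor);  // one thread at a time
                ++counter;
            }
        }

        int main() {
            std::thread t1(work), t2(work);
            t1.join();
            t2.join();
            // counter is exactly 200000; without the lock it is unpredictable
        }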

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition-wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - i.e. not a PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join; if there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes: 4 producers and 4 consumers. The consumers only actually do work once the producers are fully done scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true; the above is why you will see 2 times the processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving them to different processes, a PWJ uses only a single set of processes: each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX processes and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff:

    - At ODTUG in Washington DC, June 27 - July 1
    - On the Optimizer blog
    - At OpenWorld in San Francisco, September 19 - 23

    Happy joining, and hope to see you all at ODTUG and OOW...

    Read the article

  • Bad Data is Really the Monster

    - by Dain C. Hansen
    "Bad Data is Really the Monster" is an article written by Bikram Sinha, from whom I borrowed the title and the inspiration for this blog. Sinha writes: "Bad or missing data makes application systems fail when they process order-level data. One of the key items in the supply-chain industry is the product (aka SKU). Therefore, it becomes the most important data element to tie up multiple merchandising processes including purchase order allocation, stock movement, shipping notifications, and inventory details… Bad data can cause huge operational failures and cost millions of dollars in terms of time, resources, and money to clean up and validate data across multiple participating systems." Yes, bad data really is the monster, so what do we do about it? Close our eyes and hope it stays in the closet? We've tackled this problem for some years now at Oracle, and our latest introduction of Oracle Enterprise Data Quality, along with our integrated Oracle Master Data Management products, provides a complete, best-in-class answer to the bad data monster. What's unique about it? Oracle Enterprise Data Quality combines powerful data profiling, cleansing, matching, and monitoring capabilities while offering unparalleled ease of use. What makes it unique is that it has dedicated capabilities to address the distinct challenges of both customer and product data quality - [different monsters have different needs, of course!]. The ability to profile data is just as important, to identify and measure poor-quality data and to identify new rules and requirements. Included are semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. Finally, all of the data quality components are integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Want to learn more? On Tuesday, Nov 15th, I invite you to listen to our webcast, "Reduce ERP consolidation risks with Oracle Master Data Management". I'll be joined by our partner iGate Patni, and we'll be talking about one specific way to deal with the bad data monster, specifically around ERP consolidation. Look forward to seeing you there!

    Read the article

  • BizTalk 2010 Certification Exam

    - by Paul Petrov
    I took a shot at the new (to me) certification exam for BizTalk 2010. I was able to pass it without any preparation, just based on experience. That does not mean this exam is a simple one. Compared to the previous one (2006 R2), it covers some new areas (like WCF) and has some demanding questions and situations to think about. But the most challenging factor is the broad feature coverage. Overall, my impression is that if BizTalk continues to grow in scope, it would be better to create separate exams for core functionality and extended features (like EDI, RFID, LOB adapters), because it's really hard to cover the vast array of BizTalk capabilities. As far as required knowledge and question allocation go, I think Microsoft's description is on target. There were definitely more questions on deployment, configuration and administration aspects compared to the previous exam. WCF and WCF-based adapters now play a big role, and this topic was covered well too. Extended functionality is claimed at 13% of the exam; I felt there were plenty of RFID questions but not many on EDI, which is why I thought it would be useful to split the exam in two to cover all of them equally. BRE is still there, and that's good, because it's usually not a very well-known or well-loved feature of the package. In the end, for those who plan to get certified, my advice would be to know all of these areas of BizTalk for a guaranteed pass: messaging and orchestrations, core adapters, routing, patterns; development of all artifacts and orchestrations; debugging and exception handling; packaging, deployment, tracking and administration; WCF bindings and adapters; BAM, BRE, RFID, EDI, etc. You may get by not knowing one smaller, non-essential part (as I did with RFID, for example). In such a case you had better know all the other areas very well to cover for the weak spot. If there is more than one gap in your knowledge, it's a good idea to study and prepare: MSDN, blogs, virtual labs and a good VM to play with can help when experience is not enough. Best wishes and good luck in passing this certification!

    Read the article

  • ANTS Memory Profiler 7.0

    - by Sam Abraham
    In the next few lines, I would like to briefly review ANTS Memory Profiler 7.0. I was honored to be extended the opportunity to review this valuable tool as part of the GeeksWithBlogs Influencers Program, a quarterly award providing its recipients with access to valuable tools and an opportunity to write a brief review of the complimentary tools they receive.

    Typical Usage

    ANTS Memory Profiler 7.0 is very intuitive and easy to use for any user, be they novice or expert. A simple yet comprehensive menu screen enables the selection of the appropriate program type to profile, as well as the executable or site for this program. A typical use case starts with establishing a baseline memory snapshot, which tells us the initial memory cost used by the program under normal or low-activity conditions. We would then take a second snapshot after the program has performed an activity which we want to investigate for memory leaks. We can then compare the baseline snapshot against the snapshot taken after the program has completed the activity in question, to study anomalies in memory that did not get freed up after the program completed its function. The following are some screenshots outlining the selection of the program to profile (an executable for this demonstration's purposes).

    Figure 1 - Getting Started
    Figure 2 - Selecting an Application to Profile

    Features and Options

    Right after the second snapshot is generated, Memory Profiler gives us immediate access to information on memory fragmentation, size differences between snapshots, unmanaged memory allocation, and statistics on the largest classes taking up un-freed memory space. We also have the option to itemize objects held in memory, grouped by object type, within which we can study the instances allocated of each type. Filtering options enable us to quickly narrow down the object instances we are interested in.

    Figure 3 - Easily Accessible Execution Memory Information
    Figure 4 - Class List
    Figure 5 - Instance List
    Figure 6 - Retention Graph for a Particular Instance

    Conclusion

    I greatly enjoyed the opportunity to evaluate ANTS Memory Profiler 7.0. The tool's intuitive user interface design and easily accessible menu options enabled me to quickly identify problem areas where memory was left unfreed in my code.

    Tutorials and References

    Find out more about ANTS Memory Profiler 7.0: http://www.red-gate.com/supportcenter/Product?p=ANTS Memory Profiler

    Check out what other reviewers of this valuable tool have already shared:
    http://geekswithblogs.net/BlackRabbitCoder/archive/2011/03/10/ants-memory-profiler-7.0.aspx
    http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered OutOfMemoryError conditions and the concomitant Java core dump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflectively accessed data to be optimized from a JNI call into Java bytecodes, which are then amenable to hotspot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very heavy users of reflection. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way IBM does it. This results in different behaviour characteristics on IBM vs. Oracle JVMs.

    Read the article

  • The Use-Case Driven Approach to Change Management

    - by Lauren Clark
    In the third entry of the series on OUM and PMI's Pulse of the Profession, we took a look at the continued importance of change management and risk management. The topic of change management and OUM's use-case-driven approach has come up in a few recent conversations, so I thought I would jot down a few thoughts on how the use-case-driven approach aids a project team in managing the project's scope. The use-case model is one of several tools in OUM used to establish and manage the project's scope. Because a use-case model can be understood by both business and IT project team members, it can serve as a bridge for ongoing collaboration as well as a visual diagram that encapsulates all agreed-upon functionality. This makes it a vital artifact in identifying changes to the project's scope. Here are some of the primary benefits of using the use-case model as part of the effort of establishing and managing project scope:

    - The use-case model quickly communicates scope in a straightforward manner. All project stakeholders can have a common foundation for decisions regarding architecture and design and how they relate to the project's objectives.
    - Once agreed upon, the model can be put under change control, and any updates to the model can then be quickly identified as potentially affecting the project's scope. Changes requested or discovered later in the project can be analyzed objectively for their impact on the project's budget, resources and schedule.
    - A modular foundation for the design of the software solution can be established in Elaboration. This permits work to be divided up effectively and executed so that the most important and riskiest use cases can be tackled early in the project.
    - The use-case model helps the team make informed decisions about implementation priorities, which allows effective allocation of limited project resources. This is very helpful not only in managing scope, but in doing iterative and incremental planning, which relies heavily on the ability to identify project priorities.

    The bottom line is that the use-case model gives the project team a solid understanding of scope early in the project. Combine this understanding with effective project management and communication, and you have an effective tool for reducing the risk of budget and/or schedule overruns due to out-of-control scope changes. Now that you've had a chance to read these thoughts on the use-case model and project scope, please let me know your feedback based on your experience.

    Read the article

  • My co-worker has not been doing such a good job for the past decade. What do I do? [closed]

    - by stijn
    Possible Duplicate: How do I approach a coworker about his or her code quality?

    I started working with him almost a decade ago, and back then I had never really programmed before, being a young hardware engineer. Right now, however, I have made quite some progress in all areas of software design, and I am much, much more skilled than my co-worker, who is 15 years older and has been programming more than twice as long. He is super nice and definitely smart enough, but lately his lack of skill and performance have started to drag me down, because we're more and more working on the same codebase. And soon we are going to make a quite ambitious start from scratch, creating a whole new hard/software system. I feel it is time to address all the issues now, but I do not know how to start. Here are some of the things that I would like to see him improve on:

    - no consistent usage of style, spaces or tabs (e.g. if(something ) a =b )
    - adds newlines around pieces of code to make them easier to read, then commits those with messages like 'no changes made'
    - overall commit messages are useless, and so are most of the comments, if there are any (e.g. 'remove solves for bug Rik' if Rik reported a bug); there is no function/class documentation
    - lots of spelling errors, in both English and our native language, which are sometimes mixed
    - 6/7/8-level-deep nesting is no exception; a lot of functions start with one level already, like if(ptr!=Null){ even when ptr is the result of allocation via new in the constructor
    - numerous source files have over 10k lines; a major part of those lines is simply the result of copy-pasting functionality instead of using a function (this includes copying comments, so we end up with 50 occurrences of var=NULL; //TODO TEST this!!!!!!!), and another part is hundreds of lines of dead code
    - knows what versioning does, yet comments out old code and places new code underneath it when making changes
    - coding skills are below par, especially for the type of rather high-precision applications we do; yet somehow, after a lot of trying and testing, stuff starts to work, but then breaks again some time later, because every change causes a waterfall effect
    - still doesn't know how to properly use the debugger, but spends hours inspecting messy logfiles in Notepad on a tiny laptop screen; does not make any adjustments to the settings of the software he uses; never uses keyboard shortcuts
    - does not seem to progress or learn new things at all; works rather slowly, mostly due to a lack of planning and incorrect usage of tools

    How does one deal with this? For starters, how do I make him aware of all these problems? Should I tell the staff about it? And for the next step, how do I get him to learn new things and adopt another way of working?

    Read the article

  • Independent Research on 1500 Companies Reveals Challenges in Performance Visibility – Part 1

    - by ndwyouell
    At the end of May I was joined by Professor Andy Neely of Cambridge University on a webinar, with an audience of over 700, to discuss the results of this extensive study, which covered 13 countries and nearly every commercial and industrial sector. What stunned both of us was not so much the number listening but the 100 questions they asked in just one hour. That certainly represents a record in my experience, and for those who organized the webinar. So what was all the fuss about? Well, to begin with, this was a pretty big sample, and it represented organizations with over $100m in sales across the USA, Europe, Africa and the Middle East. It also delivered some pretty interesting results across a wide range of EPM subjects such as profitability, planning and reporting. Let's look at some of those findings. We kicked off with profitability, one of the key factors in driving performance - or so you would think, but in fact 82% of our respondents said they did not have complete visibility into the profitability of their organization. 91% of these went further to say that, not surprisingly, this lack of visibility into profitability has implications, with over half citing three or more. Implications cited included misallocated resources, revenue opportunities not maximized, erroneous decisions made, and impaired financial performance. Quite a list of implications, especially given the difficult economic circumstances many organizations are operating in at this time. So why is this? Well, other results in the study point to some of the potential reasons. Firstly, 59% of respondents that use spreadsheets use them for monitoring profitability, and 93% of all managers responding to the study use spreadsheets to gather and analyze information. This is an enormous proportion, given that the problems with spreadsheet-based performance management systems have been widely discussed for many years. For profitability analysis this is particularly important when you consider that the typical requirement is to allocate cost and revenue across 6+ dimensions based on many different allocation methods. That is not something that can be done easily in spreadsheets, and it becomes a nightmare once you want to change allocations, run different scenarios, and then change the basis of your planning and budgeting! It is no wonder so many organizations have challenges in performance visibility. My next blog will look at the fragmented nature of many organizations' planning. In the meantime, if you want to read the complete report on the research, go to: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=10077790&src=7038701&Act=29

    Read the article

  • Should we persist with an employee still writing bad code after many years?

    - by user94986
    I've been assigned the task of managing developers for a well-established company. They have a single developer who specialises in all their C++ coding (since forever), but the quality of the work is abysmal. Code reviews and testing have revealed many problems, one of the worst being memory leaks. The developer has never tested his code for leaks, and I discovered that the applications could leak many MBs with only a minute of use. Users were reporting huge slowdowns, and his take was, "it's nothing to do with me - if they quit and restart, it's all good again." I've given him tools to detect and trace the leaks, and sat down with him for many hours to demonstrate how the tools are used, where the problems occur, and what to do to fix them. We're 6 months down the track, and I assigned him to write a new module. I reviewed it before it was integrated into our larger code base, and was dismayed to discover the same bad coding as before. The part that I find incomprehensible is that some of the coding is worse than amateurish. For example, he wanted a class (Foo) that could populate an object of another class (Bar). He decided that Foo would hold a reference to Bar, e.g.:

        class Foo {
        public:
            Foo(Bar& bar) : m_bar(bar) {}
        private:
            Bar& m_bar;
        };

    But (for other reasons) he also needed a default constructor for Foo and, rather than question his initial design, he wrote this gem:

        Foo::Foo() : m_bar(*(new Bar)) {}

    So every time the default constructor is called, a Bar is leaked. To make matters worse, Foo allocates memory from the heap for 2 other objects, but he didn't write a destructor or copy constructor. So every allocation of Foo actually leaks 3 different objects, and you can imagine what happened when a Foo was copied. And - it only gets better - he repeated the same pattern in three other classes, so it isn't a one-off slip. The whole concept is wrong on so many levels. I would feel more understanding if this came from a total novice. But this guy has been doing this for many years and has had very focussed training and advice over the past few months. I realise he has been working without mentoring or peer reviews most of that time, but I'm beginning to feel he can't change. So my question is, would you persist with someone who is writing such obviously bad code?
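
    For contrast, one conventional repair, sketched here with deliberately changed ownership semantics (a smart pointer instead of a raw reference): no constructor path can leak the Bar, and the compiler-generated special members handle copying and destruction correctly.

        #include <memory>

        class Bar { /* ... */ };

        class Foo {
        public:
            Foo() : m_bar(std::make_shared<Bar>()) {}            // owns a fresh Bar
            explicit Foo(std::shared_ptr<Bar> bar) : m_bar(std::move(bar)) {}
            // no hand-written destructor or copy constructor needed:
            // shared_ptr releases and copies correctly on its own
        private:
            std::shared_ptr<Bar> m_bar;
        };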

    Read the article

  • Problem with MS DTC on SQL2008 win server 2k8 with linked server from sql2000 win server 2k

    - by user31648
    Hi,

    We have migrated our DB from SQL 2000 on Windows Server 2000 to SQL 2008 on Windows Server 2008. We have a linked server from SQL 2000 on Windows Server 2000. In our opinion the problem is with DTC, and we have applied many settings suggested as solutions for this problem, but the problem still exists. There is no error, warning, or informational message in either the SQL log or the Windows Event Viewer. The application hangs, and in the end a timeout exception is thrown. What we have done so far:

    - Enabled Network DTC Access with inbound and outbound communication, with No Authentication Required, on Windows Server 2008
    - Opened RPC dynamic port allocation through the registry on both Windows 2000 and Windows 2008
    - Added the TurnOffRpcSecurity subkey in the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC and enabled it on both Windows 2000 and Windows 2008
    - Added an exception for DTC in the firewall for all entities

    We have noticed that when we restart the SQL service and attempt our transaction for the first time, the following is shown: "Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required." and after it: "Recovery of any in-doubt distributed transactions involving Microsoft Distributed Transaction Coordinator (MS DTC) has completed. This is an informational message only. No user action is required." Does someone have any idea what else can be done to solve the problem? Thanks in advance.

    Regards,
    Snezana

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines: local-machine, my desktop running Ubuntu with GNOME, and remote-machine, a virtual machine also running Ubuntu but without X. On both machines I have my private and public SSH keys. I need to SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message: "Could not open the file sftp://remote-machine/some/file. gedit cannot handle sftp: locations." Note that /some/file exists on remote-machine. I can SSH normally from remote-machine to local-machine using my SSH key without any problems! I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine, and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation), but it didn't work. If I run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>), it does not work: I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume?

        mgorven@moab:~% sudo lvdisplay /dev/moab/backup
          --- Logical volume ---
          LV Name                /dev/moab/backup
          VG Name                moab
          LV UUID                nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5
          LV Write Access        read/write
          LV Status              available
          # open                 1
          LV Size                500.00 GiB
          Current LE             128000
          Segments               1
          Allocation             inherit
          Read ahead sectors     auto
          - currently set to     2048
          Block device           252:3

        mgorven@moab:~% sudo cryptsetup status backup
        /dev/mapper/backup is active and is in use.
          type:    LUKS1
          cipher:  aes-cbc-essiv:sha256
          keysize: 256 bits
          device:  /dev/mapper/moab-backup
          offset:  3072 sectors
          size:    1048572928 sectors
          mode:    read/write

        mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup
        tune2fs 1.42 (29-Nov-2011)
        Filesystem volume name:   backup
        Last mounted on:          /srv/backup
        Filesystem UUID:          63877e0e-0549-4c73-8535-b7a81eb363ed
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean with errors
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32768000
        Block count:              131071616
        Reserved block count:     0
        Free blocks:              112894078
        Free inodes:              32044830
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        RAID stride:              128
        RAID stripe width:        128
        Flex block group size:    16
        Filesystem created:       Sun Mar 11 19:24:53 2012
        Last mount time:          Sat May 19 13:29:27 2012
        Last write time:          Fri Jun  1 11:07:22 2012
        Mount count:              0
        Maximum mount count:      100
        Last checked:             Fri Jun  1 11:03:50 2012
        Check interval:           31104000 (12 months)
        Next check after:         Mon May 27 11:03:50 2013
        Lifetime writes:          118 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      383bcbc5-fde9-4720-b98e-2d6224713ecf
        Journal backup:           inode blocks

    Read the article

  • Is there a way to tell SGE to run specific jobs as root on the execution node?

    - by Rick Reynolds
    The title kinda says it all... We're using SGE/OGE to submit jobs to a set of worker nodes that then do things with specific pieces of equipment. The programs and scripts that have been created that manipulate this equipment rely on running as root. I'd like SGE to handle allocation of resources in a way that is mindful of users, groups, projects, etc., but I also need the actual jobs to run with root permissions. I've read up on How can one run a prologue script as root in gridengine? to see if anything there was pertinent, but it seems that SGE is providing the "user@" kind of spec specifically for prolog and epilog kinds of actions. Is there any similar functionality for the job itself? I'm aware of su/sudo approaches, but that won't really work in this environment because the sudoers file isn't globally managed (i.e. I'd have to add a whole set of users to /etc/sudoers on lots of machines). I'm currently looking into a setuid kind of solution, but that would definitely be an unnecessary kind of work-around if SGE provides me a way to declare that a specific job (or jobs in a specific queue) always needs to run with a specific user's rights.

    Read the article
