Search Results

Search found 17324 results on 693 pages for 'memory warning'.


  • C# class code loaded in RAM?

    - by Spi1988
    Hi, I would like to know whether the actual code of a C# class gets loaded into RAM when you instantiate the class. For example, suppose I have two classes, Class A and Class B, where Class A has 10,000 lines of code but just one field (an int), and Class B has 10 lines of code and also one field (an int). If I instantiate Class A, will it take more RAM than Class B because of its lines of code? A supplementary question: if the lines of code are loaded into memory together with the class, will they be loaded for every instance of the class, or just once for all the instances? Thanks in advance.
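
    The same reasoning can be checked concretely in C++ (a different language from the asker's C#, so this is only an analogy): per-instance size is determined by the fields, not by the amount of method code, because the code exists once per class rather than once per instance. A minimal sketch:

```cpp
#include <iostream>

// Two classes with the same single data member but different amounts of code.
class Small {
    int value;
public:
    int get() const { return value; }
};

class Large {
    int value;
public:
    // Imagine hundreds more member functions here; they enlarge the compiled
    // binary, but not the size of each instance.
    int get() const { return value; }
    int doubled() const { return value * 2; }
    int squared() const { return value * value; }
};

int main() {
    std::cout << sizeof(Small) << " " << sizeof(Large) << std::endl;  // typically "4 4"
}
```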

    Read the article

  • What is the fastest way to write hundreds of files to disk using C#?

    - by Ehsan
    My program should write hundreds of files to disk, received from external resources (network). Each file is a simple document that I currently store under a GUID name in a specific folder, but creating, writing, and closing hundreds of files is a lengthy process. Is there any better way to store this number of files to disk? I've come up with a solution, but I don't know if it is the best. First, I create 2 files: one of them acts like an allocation table, and the second one is a huge file storing all the content of my documents. But reading from this file would be a nightmare; maybe a memory-mapped file technique could help. Could working with 30GB or more create a problem?
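
    A minimal, language-agnostic sketch of the "allocation table plus one big data file" idea described above, written here in C++ rather than the asker's C#; the file names and the IndexEntry layout are illustrative assumptions, not a recommended design:

```cpp
// Blobs are appended to a single data file while an index file records each
// document's (offset, length). The sketch writes fresh files each run; a real
// store would reopen and append.
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

struct IndexEntry {
    std::uint64_t offset;
    std::uint64_t length;
};

// Append one document to the data file and return its index entry.
IndexEntry appendDocument(std::ofstream& data, const std::string& doc) {
    IndexEntry e;
    e.offset = static_cast<std::uint64_t>(data.tellp());
    e.length = doc.size();
    data.write(doc.data(), static_cast<std::streamsize>(doc.size()));
    return e;
}

int main() {
    std::ofstream data("documents.bin", std::ios::binary);
    std::ofstream index("documents.idx", std::ios::binary);

    std::vector<std::string> incoming;          // documents received from the network
    incoming.push_back("first document");
    incoming.push_back("second document");

    for (std::size_t i = 0; i < incoming.size(); ++i) {
        IndexEntry e = appendDocument(data, incoming[i]);
        index.write(reinterpret_cast<const char*>(&e), sizeof(e));
    }
}   // to read a document back: seek to e.offset in documents.bin, read e.length bytes
```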

    Read the article

  • Split query result by half in TSQL (obtain 2 resultsets/tables)

    - by rubdottocom
    I have a query that returns a large number of heavy rows. When I transform these rows into a list of CustomObject I get a big memory peak, and this transformation is made by a custom .NET framework that I can't modify. I need to retrieve fewer rows at a time so I can do "the transform" in two passes and avoid the memory peak. How can I split the result of a query in half? I need to do it in the DB layer. I thought of doing a "TOP COUNT(*)/2", but how do I get the other half? Thank you!

    Read the article

  • What's the best way to return something like a collection of `std::auto_ptr`s in C++03?

    - by Billy ONeal
    std::auto_ptr is not allowed to be stored in an STL container, such as std::vector. However, occasionally there are cases where I need to return a collection of polymorphic objects, and therefore I can't return a vector of objects (due to the slicing problem). I can use std::tr1::shared_ptr and stick those in the vector, but then I have to pay the high price of maintaining separate reference counts, and the object that owns the actual memory (the container) no longer logically "owns" the objects, because they can be copied out of it without regard to ownership. C++0x offers a perfect solution to this problem in the form of std::vector<std::unique_ptr<T>>, but I don't have access to C++0x. Some other notes: I don't have access to C++0x, but I do have TR1 available. I would like to avoid use of Boost (though it is available if there is no other option). I am aware of the boost::ptr_container containers (e.g. boost::ptr_vector), but I would like to avoid them because they break the debugger (the innards are stored in void*s, which means it's difficult to view the objects actually stored inside the container in the debugger).
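
    For reference, a minimal sketch of the TR1 approach mentioned above under C++03; Shape and Circle are hypothetical types, and the exact header that provides std::tr1::shared_ptr varies by toolchain:

```cpp
#include <vector>
#include <tr1/memory>   // std::tr1::shared_ptr (header name varies by toolchain)

struct Shape {
    virtual ~Shape() {}
    virtual double area() const = 0;
};

struct Circle : Shape {
    explicit Circle(double r) : radius(r) {}
    double area() const { return 3.14159265 * radius * radius; }
    double radius;
};

// Returning shared_ptrs avoids slicing, at the cost of the shared-ownership
// semantics and reference-count overhead described in the question.
std::vector<std::tr1::shared_ptr<Shape> > makeShapes() {
    std::vector<std::tr1::shared_ptr<Shape> > shapes;
    shapes.push_back(std::tr1::shared_ptr<Shape>(new Circle(1.0)));
    shapes.push_back(std::tr1::shared_ptr<Shape>(new Circle(2.0)));
    return shapes;
}
```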

    Read the article

  • Question related to iPhone autorelease usage

    - by user524331
    Could someone please help me understand how allocation and memory management are handled in the following scenario? I am giving a pseudo-code example, and the question that's troubling me is inline below: interface First { NSDecimalNumber *number1; } implementation ..... -(void) dealloc { [number1 release]; [super dealloc]; } ================================= interface Second { NSDecimalNumber *number2; } implementation Second ..... - (First*) check { First *firstObject = [[[First alloc] init] autorelease]; number1 = [[NSDecimalNumber alloc] initWithInteger:0]; // do I need to autorelease number1 as well? return firstObject; }

    Read the article

  • C++: Delete a struct?

    - by Rosarch
    I have a struct that contains pointers: struct foo { char* f; int* d; wchar* m; } I have a vector of shared pointers to these structs: vector<shared_ptr<foo>> vec; vec is allocated on the stack. When it passes out of scope at the end of the method, its destructor will be called. (Right?) That will in turn call the destructor of each element in the vector. (Right?) Does calling delete foo delete just the pointers such as foo.f, or does it actually free the memory from the heap?
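
    A small sketch of the distinction being asked about, assuming a hypothetical owning version of the struct: delete releases only the struct's own storage (and runs its destructor); the buffers behind the member pointers are freed only if the destructor deletes them explicitly:

```cpp
#include <memory>
#include <vector>

// A hypothetical owning version of the struct above, for illustration only.
struct foo {
    char*    f;
    int*     d;
    wchar_t* m;

    foo() : f(new char[16]), d(new int(0)), m(new wchar_t[16]) {}

    // Deleting a foo (directly or via shared_ptr) frees only the struct's own
    // few bytes and runs this destructor; the heap blocks behind f, d and m
    // are released only because the destructor deletes them explicitly.
    ~foo() {
        delete[] f;
        delete d;
        delete[] m;
    }

private:
    foo(const foo&);            // non-copyable, so two foos never share buffers
    foo& operator=(const foo&);
};

int main() {
    std::vector<std::shared_ptr<foo> > vec;     // vec lives on the stack
    vec.push_back(std::shared_ptr<foo>(new foo()));
}   // vec's destructor destroys each shared_ptr, which in turn runs ~foo
```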

    Read the article

  • What is the difference between these two ways of creating NSStrings?

    - by adame
    NSString *myString = @"Hello"; NSString *myString = [NSString stringWithString:@"Hello"]; I understand that using method (1) creates a pointer to a string literal that is defined in static memory (and cannot be deallocated) and that using (2) creates an NSString object that will be autoreleased. Is using method (1) bad? What are the major differences? Are there any instances where you would want to use (1)? Is there a performance difference? P.S. I have searched extensively on Stack Overflow and, while there are questions on the same topic, none of them have answers to the questions I have posted above.

    Read the article

  • C++ destructor issue with std::vector of class objects

    - by Nigel
    I am confused about how to use destructors when I have a std::vector of my class. So if I create a simple class as follows: class Test { private: int *big; public: Test () { big = new int[10000]; } ~Test () { delete [] big; } }; Then in my main function I do the following: Test tObj = Test(); vector<Test> tVec; tVec.push_back(tObj); I get a runtime crash in the destructor of Test when I go out of scope. Why is this and how can I safely free my memory?
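
    One hedged sketch of the usual fix for this pattern (the "rule of three"): push_back copies the object, the compiler-generated copy constructor copies only the pointer, and the two copies then both delete the same buffer. Giving the class proper copy semantics (or storing the data in a std::vector<int> instead) avoids the double delete:

```cpp
#include <algorithm>
#include <vector>

class Test {
private:
    int* big;

public:
    Test() : big(new int[10000]) {}

    // Copy constructor: give each copy its own buffer instead of sharing one.
    Test(const Test& other) : big(new int[10000]) {
        std::copy(other.big, other.big + 10000, big);
    }

    // Copy assignment in copy-and-swap style: the parameter is a copy, so
    // swapping buffers leaves both objects owning exactly one buffer each.
    Test& operator=(Test other) {
        std::swap(big, other.big);
        return *this;
    }

    ~Test() { delete[] big; }
};

int main() {
    Test tObj;
    std::vector<Test> tVec;
    tVec.push_back(tObj);    // copies tObj; each copy now owns a distinct buffer
}                            // both destructors run, each deleting its own buffer
```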

    Read the article

  • How to store a double using SharedPreferences?

    - by user3924167
    I am having trouble storing a double in the phone's memory. What are my other options if this isn't possible? Basically, what the code is aiming to do using SharedPreferences is take the stored value of "Alcohol" spending, add whatever the input in the editText is to it, and then store that new value for the next time (a running total of spending on alcohol). Can someone please help with this issue and be detailed about where x, y & z should go in the project? The user selects from a spinner, which works. public void addInput(){ double dblCostInput = Double.valueOf(inputBox.getText().toString()); String strCategories= spinnerCategories.getSelectedItem().toString(); if(strCategories.equals("Alcohol")) { alcoholSpend = alcoholSpend + dblCostInput; inputBox.setText(""); nextInput(); inputBox.setText("Your Spending on"+strCategories+" is: " +d.format(alcoholSpend)); }

    Read the article

  • Why does Apple use Objective-C for iPhone development? (App Store)

    - by Luca Matteis
    I'm interested to know your opinion on why Apple uses a language such as Objective-C for app development. Does Apple's App Store allow apps written only in this language? Does Apple even look at your source code, or does it only care about the binary output? I learned that most of their app rejections (in the App Store) are based on apps crashing (probably memory leaks, which Objective-C is not very good at preventing unless you use a GC). Why not let developers use a safer language, like a scripting language? I think these are important questions for a developer (I don't even use Apple's products) because it seems like Apple's App Store is the MOST successful app marketplace on the web.

    Read the article

  • Allocate from buffer in C

    - by Grimless
    I am building a simple particle system and want to use a single array buffer of structs to manage my particles. That said, I can't find a C function that allows me to malloc() and free() from an arbitrary buffer. Here is some pseudocode to show my intent: Particle* particles = (Particle*) malloc( sizeof(Particle) * numParticles ); Particle* firstParticle = <buffer_alloc>( particles ); initialize_particle( firstParticle ); // ... Some more stuff if (firstParticle->life < 0) <buffer_free>( firstParticle ); // @ program's end free(particles); Where <buffer_alloc> and <buffer_free> are functions that allocate and free memory chunks from arbitrary pointers (possibly with additional metadata such as buffer length, etc.). Do such functions exist and/or is there a better way to do this? Thank you!
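
    The standard C library has no such pair of functions; a common substitute is a small pool allocator that threads a free list through the unused slots of the buffer. A rough sketch (written in C-style C++, with hypothetical names such as poolAlloc/poolFree, assuming numParticles > 0 and omitting malloc failure handling):

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical particle type, shown only for illustration.
struct Particle {
    float x, y, life;
    Particle* nextFree;   // links unused slots into a free list
};

struct ParticlePool {
    Particle* slots;
    Particle* freeList;
};

// Carve the buffer into slots and thread a free list through them.
ParticlePool poolCreate(size_t numParticles) {
    ParticlePool pool;
    pool.slots = static_cast<Particle*>(std::malloc(sizeof(Particle) * numParticles));
    for (size_t i = 0; i + 1 < numParticles; ++i)
        pool.slots[i].nextFree = &pool.slots[i + 1];
    pool.slots[numParticles - 1].nextFree = NULL;
    pool.freeList = pool.slots;
    return pool;
}

// O(1) "buffer_alloc": pop a slot off the free list (NULL when exhausted).
Particle* poolAlloc(ParticlePool* pool) {
    Particle* p = pool->freeList;
    if (p) pool->freeList = p->nextFree;
    return p;
}

// O(1) "buffer_free": push the slot back onto the free list.
void poolFree(ParticlePool* pool, Particle* p) {
    p->nextFree = pool->freeList;
    pool->freeList = p;
}

void poolDestroy(ParticlePool* pool) { std::free(pool->slots); }
```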

    Read the article

  • .NET Garbage Collection behavior (with DataTable)

    - by gmac
    I am wondering why, after creating a very simple DataTable and then setting it to null, Garbage Collection does not clear out all the memory used by that DataTable. Here is an example. The variable Before should be equal to Removed, but it is not. { long Before = 0, After = 0, Removed = 0, Collected = 0; Before = GC.GetTotalMemory(true); DataTable dt = GetSomeDataTableFromSql(); After = GC.GetTotalMemory(true); dt = null; Removed = GC.GetTotalMemory(true); GC.Collect(); Collected = GC.GetTotalMemory(true); } This gives the following results: Before = 388116 After = 731248 Removed = 530176 Collected = 530176

    Read the article

  • Which of these Array Initializations is better in Ruby?

    - by Bragaadeesh
    Hi, Which of these two forms of Array Initialization is better in Ruby? Method 1: DAYS_IN_A_WEEK = (0..6).to_a HOURS_IN_A_DAY = (0..23).to_a @data = Array.new(DAYS_IN_A_WEEK.size).map!{ Array.new(HOURS_IN_A_DAY.size) } DAYS_IN_A_WEEK.each do |day| HOURS_IN_A_DAY.each do |hour| @data[day][hour] = 'something' end end Method 2: DAYS_IN_A_WEEK = (0..6).to_a HOURS_IN_A_DAY = (0..23).to_a @data = {} DAYS_IN_A_WEEK.each do |day| HOURS_IN_A_DAY.each do |hour| @data[day] ||= {} @data[day][hour] = 'something' end end The difference between the first method and the second method is that the second one does not allocate memory initially. I feel the second one is a bit inferior when it comes to performance due to the number of Array copies that have to happen. However, it is not straightforward in Ruby to see what is happening. So, if someone can explain to me which is better, that would be really great! Thanks

    Read the article

  • AS3/AIR: Managing Run-Time Image Data

    - by grey
    I'm developing a game with AS3 and AIR. I will have a large-ish quantity of images that I need to load for display elements. It would be nice not to embed all of the images that the game needs, thereby avoiding having them all in memory at once. That's okay in smaller projects, but doesn't make sense here. I'm curious about strategies for loading images during run time. Since all of the files are quite small and local ( in my current project ) loading them on request might be the best solution, but I'd like to hear what ideas people have for managing this. For bonus points, I'm also curious about solutions for loading images on-demand server-side as well.

    Read the article

  • Qt : crash due to delete (trying to handle exceptions...)

    - by Seub
    I am writing a program with Qt, and I would like it to show a dialog box with a Exit | Restart choice whenever an error is thrown somewhere in the code. What I did causes a crash and I really can't figure out why it happens, I was hoping you could help me understanding what's going on. Here's my main.cpp: #include "my_application.hpp" int main(int argc, char *argv[]) { std::cout << std::endl; My_Application app(argc, argv); return app.exec(); } And here's my_application:hpp: #ifndef MY_APPLICATION_HPP #define MY_APPLICATION_HPP #include <QApplication> class Window; class My_Application : public QApplication { public: My_Application(int& argc, char ** argv); virtual ~My_Application(); virtual bool notify(QObject * receiver, QEvent * event); private: Window *window_; void exit(); void restart(); }; #endif // MY_APPLICATION_HPP Finally, here's my_application.cpp: #include "my_application.hpp" #include "window.hpp" #include <QMessageBox> My_Application::My_Application(int& argc, char ** argv) : QApplication(argc, argv) { window_ = new Window; window_->setAttribute(Qt::WA_DeleteOnClose, false); window_->show(); } My_Application::~My_Application() { delete window_; } bool My_Application::notify(QObject * receiver, QEvent * event) { try { return QApplication::notify(receiver, event); } catch(QString error_message) { window_->setEnabled(false); QMessageBox message_box; message_box.setWindowTitle("Error"); message_box.setIcon(QMessageBox::Critical); message_box.setText("The program caught an unexpected error:"); message_box.setInformativeText("What do you want to do? <br>"); QPushButton *restart_button = message_box.addButton(tr("Restart"), QMessageBox::RejectRole); QPushButton *exit_button = message_box.addButton(tr("Exit"), QMessageBox::RejectRole); message_box.setDefaultButton(restart_button); message_box.exec(); if ((QPushButton *) message_box.clickedButton() == exit_button) { exit(); } else if ((QPushButton *) message_box.clickedButton() == restart_button) { restart(); } } return false; } void My_Application::exit() { window_->close(); //delete window_; return; } void My_Application::restart() { window_->close(); //delete window_; window_ = new Window; window_->show(); return; } Note that the line window_->setAttribute(Qt::WA_DeleteOnClose, false); means that window_ (my main window) won't be deleted when it is closed. The code I've written above works, but as far as I understand, there's a memory leak: I should uncomment the line //delete window_; in My_Application::exit() and My_Application::restart(). But when I do that, the program crashes when I click restart (or exit but who cares). (I'm not sure this is useful, in fact it might be misleading, but here's what my debugger tells me: a segmentation fault occurs in QWidgetPrivate::PaintOnScreen() const which is called by a function called by a function... called by My_Application::notify()) When I do some std::couts, I notice that the program runs through the entire restart() function and in fact through the entire notify() function before it crashes. I have no idea why it crashes. Thanks in advance for your insights! Update: I've noticed that My_Application::notify() is called very often. For example, it is called a bunch of times while the error dialog box is open, also during the execution of the restart function. The crash actually occurs in the subfunction QApplication::notify(receiver, event). 
This is not too surprising in light of the previous remark (the receiver has probably been deleted). But even if I forbid the function My_Application::notify() from doing anything while restart() is executed, it still crashes (after having called My_Application::notify() a bunch of times, like 15 times; isn't that weird?). How should I proceed? Maybe I should say (to make the question slightly more relevant) that my class My_Application also has a "restore" function, which I've not copied here to try to keep things short. If I just had that restart feature I wouldn't bother too much, but I do want to have that restore feature. I should also say that if I keep the code with the "delete window_" commented out, the problem is not only a memory leak; it still crashes sometimes, apparently. There must surely be a way to fix this! But I'm clueless, and I'd really appreciate some help! Thanks in advance.
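
    One hedged guess at the mechanism, based only on the description above: exit() and restart() delete window_ while Qt may still have events queued for widgets inside it, so a later notify() dispatches to a dangling receiver. Qt's usual remedy is QObject::deleteLater(), which defers destruction until control returns to the event loop. A sketch of restart() along those lines (it assumes the My_Application and Window classes from the question, and is not the asker's code):

```cpp
// Sketch only: defer deletion of the old window until the event loop is idle,
// so events already queued for it are not delivered to freed memory.
void My_Application::restart() {
    if (window_) {
        window_->close();
        window_->deleteLater();   // queued deletion instead of an immediate delete
    }
    window_ = new Window;
    window_->show();
}
```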

    Read the article

  • DOMDocument::load in PHP 5

    - by Abs
    Hello all, I open a 10MB+ XML file several times in my script in different functions: $dom = DOMDocument::load( $file ) or die('couldnt open'); 1) Is the above the old style of loading a document? I am using PHP 5. Opening it statically? 2) Do I need to close the loading of the XML file, if possible? I suspect it's causing memory problems because I loop through the nodes of the XML file several thousand times, and sometimes my script just ends abruptly. Thanks all for any help

    Read the article

  • AppEngine JRuby - OutOfMemoryError: Java heap space - can it be solved?

    - by elado
    I use AppEngine JRuby on Rails (SDK version 1.3.3.1) - a problem I encounter often is that after a few requests the server is getting really SLOW, until it dies and throws OutOfMemoryError on the terminal (OSX). The requests themselves are very lightweight, not more than looking for an entity or saving it, using DataMapper. On appspot, this problem is not happening. Is there any way to enlarge the heap space for JRuby? The exception log: Exception in thread "Timer-2" java.lang.OutOfMemoryError: Java heap space Apr 29, 2010 8:08:22 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Error for /users/close_users java.lang.OutOfMemoryError: Java heap space at org.jruby.RubyHash.internalPut(RubyHash.java:480) at org.jruby.RubyHash.internalPut(RubyHash.java:461) at org.jruby.RubyHash.fastASet(RubyHash.java:837) at org.jruby.RubyArray.makeHash(RubyArray.java:2998) at org.jruby.RubyArray.makeHash(RubyArray.java:2992) at org.jruby.RubyArray.op_diff(RubyArray.java:3103) at org.jruby.RubyArray$i_method_1_0$RUBYINVOKER$op_diff.call(org/jruby/RubyArray$i_method_1_0$RUBYINVOKER$op_diff.gen) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:146) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:373) at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:346) at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:303) at org.jruby.runtime.Block.yield(Block.java:194) at org.jruby.RubyArray.collect(RubyArray.java:2354) at org.jruby.RubyArray$i_method_0_0$RUBYFRAMEDINVOKER$collect.call(org/jruby/RubyArray$i_method_0_0$RUBYFRAMEDINVOKER$collect.gen) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:115) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:122) at org.jruby.ast.CallNoArgBlockNode.interpret(CallNoArgBlockNode.java:64) at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:61) at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:98) at org.jruby.ast.BeginNode.interpret(BeginNode.java:83) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:201) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:183)

    Read the article

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.
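
    For concreteness, a rough sketch of the double-mapping trick the Wikipedia example is based on, for POSIX systems; error handling is omitted, the shared-memory name is arbitrary, older glibc may need -lrt for shm_open, and size must be a multiple of the page size:

```cpp
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>

unsigned char* map_ring(size_t size, int* out_fd) {
    // Backing object in shared memory (lives in tmpfs such as /dev/shm).
    int fd = shm_open("/ring_demo", O_RDWR | O_CREAT, 0600);
    shm_unlink("/ring_demo");           // the name is no longer needed once fd is open
    ftruncate(fd, static_cast<off_t>(size));

    // Reserve 2*size of contiguous address space, then map the same file into
    // both halves, so reads/writes past the end "wrap" automatically.
    unsigned char* base = static_cast<unsigned char*>(
        mmap(NULL, size * 2, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    mmap(base,        size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
    mmap(base + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);

    *out_fd = fd;
    return base;       // base[i] and base[i + size] alias the same byte
}

int main() {
    size_t size = static_cast<size_t>(sysconf(_SC_PAGESIZE));
    int fd;
    unsigned char* buf = map_ring(size, &fd);
    buf[0] = 42;
    std::printf("%d\n", buf[size]);     // prints 42: second mapping aliases the first
    munmap(buf, size * 2);
    close(fd);
}
```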

    Read the article

  • Message Handlers and the WeakReference issue

    - by user1058647
    The following message Handler works fine receiving messages from my service... private Handler handler = new Handler() { public void handleMessage(Message message) { Object path = message.obj; if (message.arg1 == 5 && path != null) //5 means its a single mapleg to plot on the map { String myString = (String) message.obj; Gson gson = new Gson(); MapPlot mapleg = gson.fromJson(myString, MapPlot.class); myMapView.getOverlays().add(new DirectionPathOverlay(mapleg.fromPoint, mapleg.toPoint)); mc.animateTo(mapleg.toPoint); } else { if (message.arg1 == RESULT_OK && path != null) { Toast.makeText(PSActivity.this, "Service Started" + path.toString(), Toast.LENGTH_LONG).show(); } else { Toast.makeText(PSActivity.this,"Service error" + String.valueOf(message.arg1), Toast.LENGTH_LONG).show(); } } }; }; However, even though it tests out alright in the AVD (I'm feeding it a large KML file via DDMS), the "Object path = message.obj;" line has a WARNING saying "this Handler class should be static else leaks might occur". But if I say "static Handler handler = new Handler()" it won't compile, complaining that I "cannot make a static reference to a non-static field myMapView". If I can't make such references, I can't do anything useful. This led me into several hours of googling around on this issue and learning more about WeakReferences than I ever wanted to know. The often-found recommendation is that I should replace... private Handler handler = new Handler() with static class handler extends Handler { private final WeakReference<PSActivity> mTarget; handler(PSActivity target) { mTarget = new WeakReference<PSActivity>(target); } But this won't compile, still complaining that I can't make a static reference to a non-static field. So, my question a week or two ago was "how can I write a message handler for Android so my service can send data to my activity?" Even though I have working code, the question still stands with the suffix "without leaking memory". Thanks, Gary

    Read the article

  • What does the Kernel Virtual Memory of each process contain?

    - by claws
    When, say, 3 programs (executables) are loaded into memory, the layout might look something like this: I have the following questions: Is the concept of Virtual Memory limited to user processes? Because I am wondering where the Operating System Kernel and Drivers live. What is their memory layout? I know it's operating-system specific; make your choice (Windows/Linux). They say that on a 32-bit machine, in a 4GB address space, half of it (or more recently 1GB) is occupied by the kernel. I can see in this diagram that "Kernel Virtual Memory" occupies 0xc0000000 - 0xffffffff (= 1 GB). Are they talking about this, or is it something else? Just want to confirm. What exactly does the Kernel Virtual Memory of each of these processes contain? What is its layout? When we do IPC we talk about shared memory. I don't see any memory shared between these processes. Where does it live? Resources (files, registry keys in Windows) are global to all processes, so the resource/file handle table must be in some global space. Which area would that be in? Where can I learn more about this kernel-side stuff?

    Read the article

  • Creating UIButtons

    - by Ralphonzo
    During loadView I am creating 20 UIButtons that I would like to change the title text of depending on the state of a UIPageControl. I have a pre-save plist that is loaded into a NSArray called arrChars on the event of the current page changing, I set the buttons titles to their relevant text title from the array. The code that does this is: for (int i = 1; i < (ButtonsPerPage + 1); i++) { UIButton *uButton = (UIButton *)[self.view viewWithTag:i]; if(iPage == 1) { iArrPos = (i - 1); } else { iArrPos = (iPage * ButtonsPerPage) + (i - 1); } [uButton setAlpha:0]; NSLog(@"Trying: %d of %d", iArrPos, [self.arrChars count]); if (iArrPos >= [self.arrChars count]) { [uButton setTitle: @"" forState: UIControlStateNormal]; } else { NSString *value = [[NSString alloc] initWithFormat:@"%@", [self.arrChars objectAtIndex:iArrPos]]; NSLog(@"%@", value); [uButton setTitle: [[NSString stringWithFormat:@"%@", value] forState: UIControlStateNormal]; [value release]; //////Have tried: //////[uButton setTitle: value forState: UIControlStateNormal]; //////Have also tried: //////[uButton setTitle: [self.arrChars objectAtIndex:iArrPos] forState: UIControlStateNormal]; //////Have also also tried: //////[uButton setTitle: [[self.arrChars objectAtIndex:iArrPos] autorelease] forState: UIControlStateNormal]; } [uButton setAlpha:1]; } When setting the Title of a button it does not appear to be autoreleasing the previous title and the allocation goes up and up. What am I doing wrong? I have been told before that tracking things by allocations is a bad way to chase leaks because as far as I can see, the object is not leaking in Instruments but my total living allocations continue to climb until I get a memory warning. If there is a better way to track there I would love to know.

    Read the article

  • How costly performance-wise are these actions in iPhone objective-C?

    - by Alex Gosselin
    This is really a few questions in one, I'm wondering what the performance cost is for these things, as I haven't really been following a best practice of any sort for these. The answers may also be useful to other readers, if somebody knows these. (1) If I need the core data managed object context, is it bad to use #import "myAppDelegate.h" //farther down in the code: NSManagedObjectContext *context = [(myAppDelegate.h*)[[UIApplication sharedApplication] delegate] managedObjectContext]; as opposed to leaving the warning you get if you don't cast the delegate? (2) What is the cheapest way to hard-code a string? I have been using return @"myString"; on occasion in some functions where I need to pass it to a variety of places, is it better to do it this way: static NSString *str = @"myString"; return str; (3) How costly is it to subclass an object i wrote vs. making a new one, in general? (4) When I am using core data and navigating through a hierarchy of some sort, is it necessary to turn things back into faults somehow after I read some info from them? or is this done automatically? Thanks for any help.

    Read the article

  • AIX Checklist for stable obiee deployment

    - by user554629
    Common AIX configuration issues (last updated 27 Aug 2012)

    OBIEE is a complicated system with many moving parts and connection points. The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new issues or details.

    What makes OBIEE different? When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back". "Why is this necessary? Why aren't we seeing issues with other software?" It's a fair question that I have often struggled to answer; here are the talking points:

    - OBIEE is memory intensive. It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network.
    - OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly.
    - OBIEE is aggressively thread efficient; if atomic operations on a particular architecture do not work correctly, the software crashes.
    - OBIEE dynamically loads third-party database client libraries directly into the nqsserver process. If the library is not thread-safe, or corrupts process memory, the OBIEE crash happens in an unrelated part of the code. These are extremely difficult bugs to find.
    - OBIEE software uses 99% common source across multiple platforms: Windows, Linux, AIX, Solaris and HPUX. If a crash happens on only one platform, we begin to suspect other factors: load intensity, system differences, configuration choices, hardware failures.

    It is rare to have a single product require so many diverse technical skills. My role in support is to understand system configurations, performance issues, and crashes. An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices. Here are some guidelines.

    1. AIX C++ Runtime must be at version 11.1.0.4
       $ lslpp -L | grep xlC.aix
       obiee software will crash if xlC.aix.rte is downlevel; this is not a "try it" suggestion. The Nov 2011 11.1.0.4 version is appropriate for all AIX versions (5, 6, 7). Download from here: https://www-304.ibm.com/support/docview.wss?uid=swg24031426
       No reboot is necessary to install; it can even be installed while applications are using the current version. Restart the apps, and they will pick up the latest version.

    2. AIX 5.3 Technology Level 12 is required when running on Power5, 6, 7 processors
       AIX 6.1 was introduced with the newer Power chips, and we have seen no issues with 6.1 or 7.1 versions. Customers with an unstable deployment, dozens of unexplained crashes, became stable after the upgrade. If your AIX system is 5.3, the minimum TL level should be at or higher than this:
       $ oslevel -s
       5300-12-03-1107
       IBM typically supports only the two latest versions of AIX (6.1 and 7.1, for example). AIX 5.3 is still supported and popular running in an LPAR.

    3. obiee userid limits
       $ ulimit -Ha  ( hard limits )
       $ ulimit -a   ( default limits )
       core file size (blocks)     unlimited
       data seg size (kbytes)      unlimited
       file size (blocks)          unlimited
       max memory size (kbytes)    unlimited
       open files                  10240
       cpu time (seconds)          unlimited
       virtual memory (kbytes)     unlimited
       It is best to establish the values in /etc/security/limits; the root user is needed to observe and modify this file. If you modify a limit, you will need to log in again to change it again. For example,
       $ ulimit -c 0
       $ ulimit -c 2097151
       cannot modify limit: Operation not permitted
       $ ulimit -c unlimited
       $ ulimit -c
       0
       There are only two meaningful values for ulimit -c: zero or unlimited. Anything else is likely to produce a truncated core file that cannot be analyzed.

    4. Deploy 32-bit or 64-bit?
       Early versions of OBIEE offered a 32-bit or 64-bit choice to AIX customers. The 32-bit choice was needed if a database vendor did not supply a 64-bit client library. That's no longer an issue, and beginning with OBIEE 11, 32-bit code is no longer shipped. A common error that leads to "out of memory" conditions is to accept the 32-bit memory configuration choices on 64-bit deployments. The significant configuration choices are:
       - Maximum process data (heap) size is in an AIX environment variable:
         LDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x...
       - Two thread stack sizes are set in obiee NQSConfig.INI:
         [ SERVER ]
         SERVER_THREAD_STACK_SIZE = 0;
         DB_GATEWAY_THREAD_STACK_SIZE = 0;
       - Sort memory in NQSConfig.INI:
         [ GENERAL ]
         SORT_MEMORY_SIZE = 4 MB ;
         SORT_BUFFER_INCREMENT_SIZE = 256 KB ;

       Choosing a value for MAXDATA:
       0x080000000   2GB  Default maximum 32-bit heap size ( 8 with 7 zeros )
       0x100000000   4GB  64-bit breaking even with 32-bit ( 1 with 8 zeros )
       0x200000000   8GB  64-bit double 32-bit max
       0x400000000  16GB  64-bit safety
       Using a 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation. Registers are twice as big and consume twice as much memory in the heap. Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit. A 32-bit process is constrained by the 32-bit virtual addressing limits. Heap memory is used for dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way; extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage. If the storage is not available, this query might crash the whole server and disrupt existing users. There is no performance penalty on AIX for configuring more memory than required; extra memory can be configured for safety. If there are no other considerations, start with 8GB.

       Choosing a value for Thread Stack size:
       Zero is the value documented to select an appropriate default for thread stack size. My preference is to change this to an absolute value, even if you intend to use the documented default; it provides better documentation and removes the "surprise" factor. There are two thread types that can be configured.
       - GATEWAY is used by a thread pool to call a database client library to establish a DB connection. The default size is 256KB; many customers raise this to 512KB (no performance penalty for over-configuring). This value must be set to 1 MB if Teradata connections are used.
       - SERVER threads are used to run queries. OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage. It's difficult to provide guidance on a value that depends on data and complexity. The general notion is to provide more space than you think you need, "double down" and increase the value if you run out, otherwise inspect the query to understand why it is too complex for the thread stack. There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations.
       256 KB  The default 32-bit stack size. Many customers increased this to 512KB on 32-bit. A 64-bit server is very likely to crash with this value; the stack contains mostly register values, which are twice as big.
       512 KB  The documented 64-bit default. Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.
       1 MB    The recommended 64-bit setting. If your system only ever uses 512KB of stack space, there is no performance penalty for using a 1MB stack size.
       2 MB    Many large customers use this value for safety. No performance penalty.
       nqscheduler does not use the NQSConfig.INI file to set thread stack size. If this process crashes because the thread stack is too small, use this to set 2MB:
       export OBI_BACKGROUND_STACK_SIZE=2048

    5. Shared libraries are not (shared)
       When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment. If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:
       * The libraries reduce the heap storage available. Might be significant in 32-bit processes; irrelevant in 64-bit processes.
       * Library code is loaded into multiple real pages for execution; one copy for each process.
       Multiple execution images are a significant issue for both 32- and 64-bit processes. The "real memory pages" saved by using public memory segments are a minor concern; today's machines typically have plenty of real memory. The real problem with private copies of libraries is that they consume processor cache blocks, which are limited. The same library instructions executing in different real pages will cause memory delays as the i-cache (instruction cache, 128KB blocks) is refreshed from real memory. Performance loss because instructions are delayed is something that is difficult to measure without access to low-level cache fault data. The machine just appears to be running slowly for no observable reason.
       This is an easy problem to detect, and an easy problem to correct.
       Detection: the "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded. The 32-bit public segment is 13 ("dxxxxxxx"); private segments are 2-a. The 64-bit public segment is 9 ("9xxxxxxxxxxxxxxx"); the private segment is 8.
       genld -l | grep -v ' d| 9' | sort +2
       provides a list of privately loaded libraries.
       Repair: chmod o+r <libname>
       AIX shared libraries will have a suffix of ".so" or ".a". Another technique is to change all libraries in a selected directory to repair those that might not be currently loaded. The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java.
       chmod o+r /shr/dir/*.a /shr/dir/*.so

    6. Configure your system for diagnostics
       Production systems shouldn't crash, and yet bad things happen to good software. If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support. Here's what we need to be able to diagnose a core file from your system.
       * fullcore enabled: chdev -lsys0 -a fullcore=true
       * core naming enabled: chcore -n on -d
       * ulimit must not truncate core. See item 3.
       * pstack.sh is used to capture core documentation.
       * obidoc is used to capture the current AIX configuration.
       * The snapcore AIX utility captures the core and libraries. Use the proper syntax:
         $ snapcore -r corename executable-fullpath
         /tmp/snapcore will contain the .pax.Z output file. It is compressed.
       * If cores are directed to a common directory, ensure the obiee userid can write to the directory. ( chcore -p /cores -d ; chmod 777 /cores )
       The filesystem must have sufficient space to hold a crashing obiee application. Use: df -k. Check the "Free" column (not "% Used"). 8388608 is 8GB.

    7. Disable Oracle Client Library signal handling
       The Oracle DB Client Library is frequently distributed with the sqlplus development kit. By default, the library enables a signal handler, which will document a call stack if the application crashes. The signal handler is not needed, and is definitely disruptive to obiee diagnostics. It needs to be disabled. sqlnet.ora is typically located at: $ORACLE_HOME/network/admin/sqlnet.ora
       Add this line at the top of the file:
       DIAG_SIGHANDLER_ENABLED=FALSE

    8. Disable async query in the RPD connection pool
       This might be an obiee 10.1.3.4 issue only (still checking). "async query" must be disabled in the connection pools. It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes. Please ensure it is turned off in the RPD.

    9. Check the AIX error report (errpt)
       Errors external to obiee applications can trigger crashes.
       $ /bin/errpt -a
       Hardware errors (firmware, adapters, disks) should be reported to IBM support. All application core files are recorded by AIX; the most recent ones are listed first.

    Reserved for something important to say.

    Read the article

  • log4j prints all levels

    - by Florian
    Hello, I've got log4j configured for my Java project with the following log4j.properties: log4j.rootLogger=WARNING, X log4j.appender.X=org.apache.log4j.ConsoleAppender log4j.appender.X.layout=org.apache.log4j.PatternLayout log4j.appender.X.layout.ConversionPattern=%p %m %n log4j.logger.org.hibernate.SQL=WARNING log4j.logger.com.****.services.clarity.dao.impl=WARNING log4j.logger.com.****.services.clarity.controller=WARNING log4j.logger.com.****.services.clarity.services.impl=WARNING log4j.logger.com.****.services.clarity.feeds.impl=WARNING As configured, it should only print WARNING messages; however, it prints all levels down to DEBUG. Any ideas where this can come from? Thanks!

    Read the article

  • Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed - sender domain not local in D:\INETPUB\VHOSTS\gaehambuilders.com

    - by Kiran RS
    Why am I getting an error like this: Warning: mail() [function.mail]: SMTP server response: 530 Relaying not allowed - sender domain not local in D:\INETPUB\VHOSTS\gaehambuilders.com\httpdocs\contacts.php on line 120? Here is my PHP code: if(isset($_POST['send'])) //if "email" is filled out, send email { //send email $name=$_REQUEST['name']; $email=$_POST['email']; $cnum=$_REQUEST['cnum']; $enq=$_REQUEST['enq']; $email1=$_REQUEST['email']; $to = "[email protected]"; $subject = "Test mail"; $message = "Hello! This is a simple email message."; $from = $email1; $headers = "From:" . $from; mail($to,$subject,$message,$headers); ? alert ("Enquiry form submited successfully ! We'll get back you soon "); Thanks in advance!

    Read the article
