Search Results

Search found 6387 results on 256 pages for 'cpu allocation'.


  • PuTTY: how to properly emulate -t option

    - by John Sonderson
    On Linux the ssh command has a -t option whose man page reads: "Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty." I would like to use this same option with PuTTY on Windows. In particular, I can see that PuTTY has a bunch of options under Category - Connection - SSH - TTY, and I suspect it might be possible to achieve the same behavior via some of the (NUMEROUS!) settings found on this screen. Does anyone know how to configure the equivalent of the following command: ssh -t USER,[email protected] create Thanks!
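    A possible equivalent, assuming the plink.exe that ships with PuTTY is available (plink accepts ssh-style switches, including -t for pseudo-terminal allocation; the host and remote command below are placeholders copied from the question):

        rem Force a pseudo-terminal and run a remote command, as "ssh -t user@host command" would
        plink -ssh -t USER,[email protected] create

    In the PuTTY GUI, the closest match appears to be leaving "Don't allocate a pseudo-terminal" unchecked under Connection - SSH - TTY and entering the command under Connection - SSH - Remote command.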

    Read the article

  • Monitoring and controlling memory usage - Java SE Embedded 8

    - by kshimizu-Oracle
    A Java process's memory can be discussed in three parts: the HEAP, the NON-HEAP area, and the JVM's own internal working memory. In Java SE 8 the NON-HEAP area consists mainly of the Code Cache, which holds JIT-compiled code, and Metaspace, which holds class metadata outside the heap. The JVM-internal portion covers things such as thread stacks and the VM's own data structures.
    The HEAP can be observed with Java Mission Control or with the standard Java SE APIs: java.lang.Runtime.maxMemory() reports the maximum heap size (set with -Xmx), java.lang.Runtime.totalMemory() the currently committed heap (whose initial value is set with -Xms), and java.lang.Runtime.freeMemory() the unused portion of the committed heap. NON-HEAP usage can likewise be read through the monitoring APIs, or seen in Java Mission Control under the "NON_HEAP" label.
    Note that HEAP plus NON-HEAP is not the whole footprint of a Java VM process. To see how much memory the operating system has actually given the process, use OS facilities; on Linux, procfs (VmHWM or VmRSS) is a convenient source. The gap between that figure and HEAP/NON-HEAP is the JVM's internal memory.
    For embedded use, the choice of JVM itself also matters: the Embedded Oracle JVMs (Minimal/Client/Server) and the compact profiles trade CPU and memory consumption against functionality, as described in chapters 2-3 of the Concept Guide (http://docs.oracle.com/javase/8/embedded/embedded-concepts/basic-concepts.htm).
    The main options for controlling memory consumption are: -Xms (initial heap size), -Xmx (maximum heap size), -XX:ReservedCodeCacheSize (Code Cache size; if JIT compilation is disabled the Code Cache can be shrunk toward zero), -Xint (run interpreted only, which saves Code Cache memory at the cost of speed), -Xss (per-thread stack size), and -XX:CompileThreshold (the number of invocations after which the JIT compiles a method, which trades compilation work and Code Cache usage against steady-state speed).
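    As a small illustration of the APIs mentioned above, this sketch prints heap and non-heap figures from inside a running JVM (standard java.lang and java.lang.management calls; the numbers printed depend on the -Xms/-Xmx settings in effect):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;

        public class MemoryReport {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                // Heap figures in bytes: the -Xmx bound, the committed size, and the unused part of it
                System.out.println("max heap   = " + rt.maxMemory());
                System.out.println("total heap = " + rt.totalMemory());
                System.out.println("free heap  = " + rt.freeMemory());

                // NON-HEAP (Code Cache + Metaspace) as reported by the platform MXBean
                MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
                System.out.println("non-heap   = " + mx.getNonHeapMemoryUsage());
            }
        }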

    Read the article

  • Try Oracle APEX for free: requesting a workspace on apex.oracle.com

    - by Yuichi.Hayashi
    apex.oracle.com is a hosted environment where you can try Oracle APEX web application development for free, without installing or running any server of your own. Requesting a workspace only takes a few minutes:
    (1) On apex.oracle.com, click Sign Up.
    (2) Click Next.
    (3) Enter your name and e-mail address, then click Next.
    (4) Enter a Workspace name and click Next. (Example) Workspace: OracleDirect
    (5) Enter the database schema details for the workspace and click Next. (Example) New schema to create: DIRECT, Initial Space Allocation (MB): 25
    (6) State the reason for the request (for a simple evaluation, something like "To Try" is enough) and click Next.
    (7) Type the verification Code shown on the screen and click Submit Request.
    (8) When the request is approved you receive a mail with a subject like "Approved: account request for ···· ···" saying that your request for an account has been approved, listing Workspace: ORACLEDIRECT and User ID: [email protected], and asking you to click a link of the form http://apex.oracle.com/pls/apex/f?·········· to complete the approval process and receive your credentials.
    (9) Following that link completes the sign-up, and a second mail delivers your workspace name (ORACLEDIRECT), user ID ([email protected]) and password, together with the login URL http://apex.oracle.com/pls/apex/.
    (10) Open http://apex.oracle.com/pls/apex/ in a browser.
    (11) Log in with the workspace name, user ID and password from the mail.
    (12) You now have your own workspace and can start building APEX applications right away.

    Read the article

  • C program runs in Cygwin but not Linux (Malloc)

    - by Shawn
    I have a heap allocation error that I can't spot in my code; it is picked up by valgrind/gdb on Linux but the program runs perfectly in a Windows Cygwin environment. I understand that Linux could be tighter with its heap allocation than Windows, but I would really like a response that discovers the issue and a possible fix. I'm also aware that I shouldn't typecast malloc in C, but it's a force of habit and doesn't change my problem. My program actually compiles without error on both Linux & Windows, but when I run it on Linux I get a scary looking result:

        malloc.c:3074: sYSMALLOc: Assertion `(old_top == (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) - __builtin_offsetof (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long) (old_size) >= (unsigned long)((((__builtin_offsetof (struct malloc_chunk, fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 * (sizeof(size_t))) - 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end & pagemask) == 0)' failed.
        Aborted

    Attached is the snippet from my code that is being pointed to as the error, for review:

        /* Main */
        int main(int argc, char * argv[]) {
            FILE *pFile;
            unsigned char *buffer;
            long int lSize;

            pFile = fopen(argv[1], "r");
            if (pFile == NULL) { fputs("File error on arg[1]", stderr); return 1; }

            fseek(pFile, 0, SEEK_END);
            lSize = ftell(pFile);
            rewind(pFile);

            buffer = (char*) malloc(sizeof(char) * lSize+1);
            if (buffer == NULL) { fputs("Memory error", stderr); return 2; }

            bitpair * ppairs = (bitpair *) malloc(sizeof(bitpair) * (lSize+1)); // line 51 below
            calcpair(ppairs, (lSize+1));

            /* irrelevant stuff */

            fclose(pFile);
            free(buffer);
            free(ppairs);
        }

        typedef struct {
            long unsigned int a; // not actual variable names... Yes I need them to be long unsigned
            long unsigned int b;
            long unsigned int c;
            long unsigned int d;
            long unsigned int e;
        } bitpair;

        void calcpair(bitpair * ppairs, long int bits);

        void calcPairs(bitpair * ppairs, long int bits) {
            long int i, top, bot, var_1, var_2;
            int count = 0;
            for (i = 0; i < cs; i++) {
                top = 0;
                ppairs[top].e = 1;
                do {
                    bot = count;
                    count++;
                } while (ppairs[bot].e != 0);
                ppairs[bot].e = 1;
                var_1 = bot;
                var_2 = top;
                calcpair * bp = &ppairs[var_2];
                bp->a = var_2; bp->b = var_1; bp->c = i;
                bp = &ppairs[var_1];
                bp->a = var_2; bp->b = var_1; bp->c = i;
            }
            return;
        }

    gdb reports:

        free(): invalid pointer: 0x0000000000603290

    valgrind reports the following message 5 times before exiting due to "VALGRIND INTERNAL ERROR" signal 11 (SIGSEGV):

        Invalid read of size 8
        ==2727==    at 0x401043: calcPairs (in /home/user/Documents/5-3/ubuntu test/main)
        ==2727==    by 0x400C9A: main (main.c:51)
        ==2727==  Address 0x5a607a0 is not stack'd, malloc'd or (recently) free'd

    Read the article

  • new and delete operator overloading

    - by Angus
    I am writing a simple program to understand the new and delete operator overloading. How is the size parameter passed into the new operator? For reference, here is my code:

        #include <iostream>
        #include <stdlib.h>
        #include <malloc.h>
        using namespace std;

        class loc {
        private:
            int longitude, latitude;
        public:
            loc() { longitude = latitude = 0; }
            loc(int lg, int lt) {
                longitude -= lg;
                latitude -= lt;
            }
            void show() {
                cout << "longitude" << endl;
                cout << "latitude" << endl;
            }
            void* operator new(size_t size);
            void operator delete(void* p);
            void* operator new[](size_t size);
            void operator delete[](void* p);
        };

        void* loc :: operator new(size_t size) {
            void* p;
            cout << "In overloaded new" << endl;
            p = malloc(size);
            cout << "size :" << size << endl;
            if (!p) { bad_alloc ba; throw ba; }
            return p;
        }

        void loc :: operator delete(void* p) {
            cout << "In delete operator" << endl;
            free(p);
        }

        void* loc :: operator new[](size_t size) {
            void* p;
            cout << "In overloaded new[]" << endl;
            p = malloc(size);
            cout << "size :" << size << endl;
            if (!p) { bad_alloc ba; throw ba; }
            return p;
        }

        void loc :: operator delete[](void* p) {
            cout << "In delete operator - array" << endl;
            free(p);
        }

        int main() {
            loc *p1, *p2;
            int i;
            cout << "sizeof(loc)" << sizeof(loc) << endl;
            try { p1 = new loc(10, 20); }
            catch (bad_alloc ba) {
                cout << "Allocation error for p1" << endl;
                return 1;
            }
            try { p2 = new loc[10]; }
            catch (bad_alloc ba) {
                cout << "Allocation error for p2" << endl;
                return 1;
            }
            p1->show();
            for (i = 0; i < 10; i++) { p2[i].show(); }
            delete p1;
            delete[] p2;
            return 0;
        }
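    The size argument is supplied by the compiler at each new-expression, so the values printed above follow directly from sizeof. A minimal sketch of that arithmetic (loc2 is a hypothetical stand-in with the same members as loc; the exact array overhead is implementation-defined):

        #include <cstdio>

        struct loc2 { int longitude, latitude; };  // same layout as the loc class above

        int main() {
            // "new loc2" calls operator new(sizeof(loc2)).
            // "new loc2[10]" calls operator new[](10 * sizeof(loc2) + k), where k is an
            // implementation-defined number of bookkeeping bytes used to remember the
            // element count so that delete[] can run the right number of destructors.
            std::printf("single: %zu  array: %zu (+ possible overhead)\n",
                        sizeof(loc2), 10 * sizeof(loc2));
            return 0;
        }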

    Read the article

  • Var null and not an object when using document.getElementById

    - by Dean
    Hi, I'm doing some work in HTML and jQuery. My problem is that the textarea and submit button do not appear after a radio button is selected. My HTML looks like this:

        <html>
        <head>
          <title>Publications Database | Which spotlight for Publications</title>
          <script type="text/javascript" src="./jquery.js"></script>
          <script src="./addSpotlight.js" type="text/javascript" charset="utf-8"></script>
        </head>
        <body>
          <div class="wrapper">
            <div class="header">
              <div class="headerText">Which spotlight for Publications</div>
            </div>
            <div class="mainContent">
              <p>Please select the publication that you would like to make the spotlight of this month:</p>
              <form action="addSpotlight" method="POST" id="form" name="form">
                <div class="div29" id="div29"><input type="radio" value="29" name="publicationIDs">A System For Dynamic Server Allocation in Application Server Clusters, IEEE International Symposium on Parallel and Distributed Processsing with Applications, 2008</div>
                <div class="div30" id="div30"><input type="radio" value="30" name="publicationIDs">Analysing BitTorrent's Seeding Strategies, 7th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC-09), 2009</div>
                <div class="div31" id="div31"><input type="radio" value="31" name="publicationIDs">The Effect of Server Reallocation Time in Dynamic Resource Allocation, UK Performance Engineering Workshop 2009, 2009</div>
                <div class="div32" id="div32"><input type="radio" value="32" name="publicationIDs">idk, hello, 1992</div>
                <div class="div33" id="div33"><input type="radio" value="33" name="publicationIDs">sad, safg, 1992</div>
                <div class="abstractWriteup" id="abstractWriteup"><textarea name="abstract"></textarea>
                  <input type="submit" value="Add Spotlight"></div>
              </form>
            </div>
          </div>
        </body>
        </html>

    My JavaScript looks like this:

        $(document).ready(function() {
            $('.abstractWriteup').hide();
            addAbstract();
        });

        function addAbstract() {
            var abstractWU = document.getElementById('.abstractWriteup');
            $("input[name='publicationIDs']").change(function() {
                var abstractWU = document.getElementById('.abstractWriteup');
                var classNameOfSelected = $("input[name='publicationIDs']").val();
                var radioSelected = document.getElementById("div" + classNameOfSelected);
                var parentDiv = radioSelected.parentNode;
                parentDiv.insertBefore(radioSelected, abstractWU.nextSibling);
                $('.abstractWriteup').show();
            });
        };

    I developed this using Node#insertBefore. When I have had it working, it has been rearranging the radio buttons. Thanks in advance, Dean
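    The "null and not an object" error usually comes from document.getElementById being handed a CSS-style selector: it expects the bare id, so '.abstractWriteup' returns null and the later abstractWU.nextSibling blows up. A sketch of the lookup with the selector trimmed to the id used in the markup above (or, equivalently, staying in jQuery):

        // getElementById takes the raw id, not a CSS selector
        var abstractWU = document.getElementById('abstractWriteup');

        // jQuery alternative, unwrapping to the underlying DOM node
        var abstractWU2 = $('.abstractWriteup').get(0);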

    Read the article

  • Is there a potential for resource leak/double free here?

    - by nhed
    The following sample (not compiled, so I won't vouch for the syntax) pulls two resources from resource pools (not allocated with new), then "binds" them together with MyClass for the duration of a certain transaction. The transaction, implemented here by myFunc, attempts to protect against leakage of these resources by tracking their "ownership". The local resource pointers are cleared when it is obvious that instantiation of MyClass was successful. The local catch, as well as the destructor ~MyClass, returns the resources to their pool (double-frees are protected against by the above-mentioned clearing of the local pointers). Instantiation of MyClass can fail and result in an exception at two steps: (1) the actual memory allocation, or (2) the constructor body itself. I do not have a problem with #1, but in case #2, if the exception is thrown AFTER m_resA & m_resB were set, both ~MyClass and the cleanup code of myFunc assume responsibility for returning these resources to their pools. Is this a reasonable concern?
    Options I have considered, but didn't like:
    - Smart pointers (like boost's shared_ptr). I didn't see how to apply them to a resource pool (aside from wrapping in yet another instance).
    - Allowing the double-free to occur at this level but protecting against it in the resource pools.
    - Trying to use the exception type, deducing that if bad_alloc was caught then MyClass did not take ownership. This would require a try-catch in the constructor to make sure that any allocation failures in ABC() ...more code here... won't be confused with failures to allocate MyClass.
    Is there a clean, simple solution that I have overlooked?

        class SomeExtResourceA;
        class SomeExtResourceB;

        class MyClass {
        private:
            // These resources come out of a resource pool not allocated with "new" for each use by MyClass
            SomeResourceA* m_resA;
            SomeResourceB* m_resB;
        public:
            MyClass(SomeResourceA* resA, SomeResourceB* resB) :
                m_resA(resA), m_resB(resB) {
                ABC(); // ... more code here, could throw exceptions
            }
            ~MyClass() {
                if (m_resA) { m_resA->Release(); }
                if (m_resB) { m_resB->Release(); }
            }
        };

        void myFunc(void) {
            SomeResourceA* resA = NULL;
            SomeResourceB* resB = NULL;
            MyClass* pMyInst = NULL;
            try {
                resA = g_pPoolA->Allocate();
                resB = g_pPoolB->Allocate();
                pMyInst = new MyClass(resA, resB);
                resA = NULL; // ownership successfully transferred to pMyInst
                resB = NULL; // ownership successfully transferred to pMyInst

                // Do some work with pMyInst; ...;

                delete pMyInst;
            } catch (...) {
                // cleanup
                // need to check if resA or resB were allocated prior
                // to construction of pMyInst.
                if (resA) resA->Release();
                if (resB) resB->Release();
                delete pMyInst;
                throw; // rethrow caught exception
            }
        }
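    On the shared_ptr option: a custom deleter lets the smart pointer hand the resource back to its pool instead of calling delete, so no extra wrapper class is needed and the resource is released exactly once regardless of where an exception is thrown. A sketch under the same assumed names as above (g_pPoolA, Allocate(), Release()), using boost::shared_ptr's (pointer, deleter) constructor:

        #include <boost/shared_ptr.hpp>

        struct ReleaseToPoolA {
            // Deleter: return the resource to its pool rather than deleting it.
            void operator()(SomeResourceA* p) const { if (p) p->Release(); }
        };

        void myFuncWithSharedPtr(void) {
            boost::shared_ptr<SomeResourceA> resA(g_pPoolA->Allocate(), ReleaseToPoolA());
            // From here on resA is released exactly once, whichever statement throws,
            // and MyClass could hold shared_ptr members instead of raw pointers so the
            // constructor-body ownership question disappears.
        }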

    Read the article

  • How to call virtual function of an object in C++

    - by SoonDead
    I'm struggling with calling a virtual function in C++. I'm not experienced in C++; I mainly use C# and Java, so I might have some delusions, but bear with me. I have to write a program where I have to avoid dynamic memory allocation if possible. I have made a class called List:

        template <class T>
        class List {
        public:
            T items[maxListLength];
            int length;

            List() { length = 0; }

            T get(int i) const {
                if (i >= 0 && i < length) {
                    return items[i];
                } else {
                    throw "Out of range!";
                }
            };

            // set the value of an already existing element
            void set(int i, T p) {
                if (i >= 0 && i < length) {
                    items[i] = p;
                } else {
                    throw "Out of range!";
                }
            }

            // returns the index of the element
            int add(T p) {
                if (length >= maxListLength) {
                    throw "Too many points!";
                }
                items[length] = p;
                return length++;
            }

            // removes and returns the last element;
            T pop() {
                if (length > 0) {
                    return items[--length];
                } else {
                    throw "There is no element to remove!";
                }
            }
        };

    It just makes an array of the given type and manages the length of it. There is no need for dynamic memory allocation; I can just write:

        List<Object> objects;
        MyObject obj;
        objects.add(obj);

    MyObject inherits from Object. Object has a virtual function which is supposed to be overridden in MyObject:

        struct Object {
            virtual float method(const Input& input) { return 0.0f; }
        };

        struct MyObject : public Object {
            virtual float method(const Input& input) { return 1.0f; }
        };

    I get the elements as:

        objects.get(0).method(asdf);

    The problem is that even though the first element is a MyObject, Object's method function is called. I'm guessing there is something wrong with storing the object in an array of Objects without dynamically allocating memory for the MyObject, but I'm not sure. Is there a way to call MyObject's method function? How? It's supposed to be a heterogeneous collection, by the way, so that's why the inheritance is there in the first place. If there is no way to call MyObject's method function, then how should I make my list in the first place?
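    What is described here is object slicing: copying a MyObject into a T items[maxListLength] array of Object keeps only the Object part, so the base implementation is what runs; virtual dispatch only works through pointers or references. A sketch of one way to keep the heterogeneous collection with no dynamic allocation, reusing the List template above but storing pointers to objects that live elsewhere (Input and asdf stand in for the question's own types):

        List<Object*> objects;    // the list holds pointers, so nothing gets sliced
        MyObject obj;             // the object itself still has automatic storage, no new needed
        objects.add(&obj);

        float r = objects.get(0)->method(asdf);   // now dispatches to MyObject::method

    The only care needed is that the pointed-to objects must outlive the list entries that refer to them.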

    Read the article

  • Visual Studio 2010 64-bit COM Interop Issue

    - by Adam Driscoll
    I am trying to add a VC6 COM DLL to our VS2010RC C# solution. The DLL was compiled with the VC6 tools to create an x86 version and was compiled with the VC7 Cross-platform tools to generate a VC7 DLL. The x86 version of the assembly works fine as long as the consuming C# project's platform is set to x86. It doesn't matter whether the x64 or the x86 version of the DLL is actually registered. It works with both. If the platform is set to 'Any CPU' I receive a BadImageFormatException on the load of the Interop.<name>.dll. As for the x64 version, I cannot even get the project to build. I receive the tlbimp error: TlbImp : error TI0000: A single valid machine type compatible with the input type library must be specified. Has anyone seen this issue? EDIT: I've done a lot more digging into this issue and think this may be a Visual Studio bug. I have a clean solution. I bring in my COM assembly with language agnostic 'Any CPU' selected. The process architecture of the resulting Interop DLL is x86 rather than MSIL. May have to make the Interop by hand for now to get this to work. If anyone has another suggestion let me know.
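    One possible angle on the TI0000 error, assuming the Windows SDK's tlbimp.exe is being run directly against the type library (file names below are placeholders): the tool accepts a /machine option, so a separate interop assembly can be generated per platform and referenced from the matching x86/x64 configuration instead of relying on the IDE's automatic import.

        rem Build an interop assembly whose machine type matches the x64 registration of the COM DLL
        tlbimp MyComLibrary.tlb /machine:X64 /out:x64\Interop.MyComLibrary.dll

        rem And a second one for the x86 configuration
        tlbimp MyComLibrary.tlb /machine:X86 /out:x86\Interop.MyComLibrary.dll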

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME the response (from making ListBox selections) alternates from 30K to 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU bound while it is presumably processing this data. So irrespective of whether we should be using an UpdatePanel at all, we'd like to figure out why every other async postback response is larger by a factor of more than 10 than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?

    Read the article

  • Cast exception when trying to create a new Task Scheduler task

    - by seaneshbaugh
    I'm attempting to create a new task in the Windows Task Scheduler in C#. What I've got so far is pretty much a copy and paste of http://bartdesmet.net/blogs/bart/archive/2008/02/23/calling-the-task-scheduler-in-windows-vista-and-windows-server-2008-from-managed-code.aspx Everything compiles just fine but come run time I get the following exception:

        Unable to cast COM object of type 'System.__ComObject' to interface type 'TaskScheduler.ITimeTrigger'.
        This operation failed because the QueryInterface call on the COM component for the interface with
        IID '{B45747E0-EBA7-4276-9F29-85C5BB300006}' failed due to the following error: No such interface
        supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

    Here's all the code so you can see what I'm doing here without following the above link.

        TaskSchedulerClass Scheduler = new TaskSchedulerClass();
        Scheduler.Connect(null, null, null, null);

        ITaskDefinition Task = Scheduler.NewTask(0);
        Task.RegistrationInfo.Author = "Test Task";
        Task.RegistrationInfo.Description = "Just testing this out.";

        ITimeTrigger Trigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY);
        Trigger.Id = "TestTrigger";
        Trigger.StartBoundary = "2010-05-12T06:15:00";

        IShowMessageAction Action = (IShowMessageAction)Task.Actions.Create(_TASK_ACTION_TYPE.TASK_ACTION_SHOW_MESSAGE);
        Action.Id = "TestAction";
        Action.Title = "Test Task";
        Action.MessageBody = "This is a test.";

        ITaskFolder Root = Scheduler.GetFolder("\\");
        IRegisteredTask RegisteredTask = Root.RegisterTaskDefinition("Background Backup", Task,
            (int)_TASK_CREATION.TASK_CREATE_OR_UPDATE, null, null,
            _TASK_LOGON_TYPE.TASK_LOGON_INTERACTIVE_TOKEN, "");

    The line that is throwing the exception is this one:

        ITimeTrigger Trigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY);

    The exception message kinda makes sense to me, but I'm afraid I don't know enough about COM to really know where to begin with this. Also, I should add that I'm using VS 2010 and I had to set the project to either be for x86 or x64 CPUs instead of the usual "Any CPU" because it kept giving me a BadImageFormatException. I doubt that's related to my current problem, but just in case I thought I might as well mention it.

    Read the article

  • Nonblocking TCP server

    - by hoodoos
    It's not a question really, I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as small a number of threads as it can. Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. The worker threads do all the read/write/processing work on the clients' sockets. So my problem is in building an efficient worker process, and I came to a problem I can't really solve yet. The worker code is something like this (the code is really simple, just to show where I have my problem):

        List<Socket> readSockets = new List<Socket>();
        List<Socket> writeSockets = new List<Socket>();
        List<Socket> errorSockets = new List<Socket>();

        while( true ){
            Socket.Select( readSockets, writeSockets, errorSockets, 10 );

            foreach( readSocket in readSockets ){
                // do reading here
            }

            foreach( writeSocket in writeSockets ){
                // do writing here
            }

            // POINT2 and here's the problem i will describe below
        }

    It all works smoothly except for 100% CPU utilization, because the while loop keeps cycling over and over again. If my clients do a send-receive-disconnect routine it's not that painful, but if I try to keep them alive doing send-receive-send-receive all over again it really eats up all the CPU. So my first idea was to put a sleep there: I check if all sockets have their data sent and then put a Thread.Sleep at POINT2, just for 10ms, but that 10ms later produces a huge delay of the same 10ms when I want to receive the next command from the client socket. For example, if I don't try to "keep alive", commands are executed within 10-15ms, and with keep-alive it becomes worse by at least 10ms :( Maybe it's just a poor architecture? What can be done so that my processor doesn't sit at 100% utilization and my server still reacts to something appearing on a client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
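    One way to stop the spin without adding latency, sketched under the assumption that the real code re-populates the three lists before every call (Socket.Select trims them down to the sockets that are actually ready): give Select a long or infinite timeout, so the worker sleeps in the kernel until a socket becomes ready instead of polling every 10 microseconds.

        // Sketch only: allSockets and pendingWrites are assumed to track the connected clients
        while (true)
        {
            var readSockets = new List<Socket>(allSockets);
            var writeSockets = new List<Socket>(pendingWrites);
            var errorSockets = new List<Socket>(allSockets);

            // A negative timeout means "block until something is ready": no busy loop, no added delay
            Socket.Select(readSockets, writeSockets, errorSockets, -1);

            foreach (Socket s in readSockets) { /* do reading here */ }
            foreach (Socket s in writeSockets) { /* do writing here */ }
        }

    The other common route in .NET is to drop Select entirely and use the asynchronous BeginReceive/BeginSend callbacks, which also keeps worker threads idle between events.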

    Read the article

  • Linux Kernel programming: trying to get vm_area_struct->vm_start crashes kernel

    - by confusedKid
    Hi, this is for an assignment at school, where I need to determine the size of the processes on the system using a system call. My code is as follows:

        ...
        struct task_struct *p;
        struct vm_area_struct *v;
        struct mm_struct *m;

        read_lock(&tasklist_lock);
        for_each_process(p) {
            printk("%ld\n", p->pid);
            m = p->mm;
            v = m->mmap;
            long start = v->vm_start;
            printk("vm_start is %ld\n", start);
        }
        read_unlock(&tasklist_lock);
        ...

    When I run a user level program that calls this system call, the output that I get is:

        1
        vm_start is 134512640
        2
        EIP: 0073:[<0806e352>] CPU: 0 Not tainted ESP: 007b:0f7ecf04 EFLAGS: 00010246 Not tainted
        EAX: 00000000 EBX: 0fc587c0 ECX: 081fbb58 EDX: 00000000
        ESI: bf88efe0 EDI: 0f482284 EBP: 0f7ecf10 DS: 007b ES: 007b
        081f9bc0: [<08069ae8>] show_regs+0xb4/0xb9
        081f9bec: [<080587ac>] segv+0x225/0x23d
        081f9c8c: [<08058582>] segv_handler+0x4f/0x54
        081f9cac: [<08067453>] sig_handler_common_skas+0xb7/0xd4
        081f9cd4: [<08064748>] sig_handler+0x34/0x44
        081f9cec: [<080648b5>] handle_signal+0x4c/0x7a
        081f9d0c: [<08066227>] hard_handler+0xf/0x14
        081f9d1c: [<00776420>] 0x776420
        Kernel panic - not syncing: Kernel mode fault at addr 0x0, ip 0x806e352
        EIP: 0073:[<400ea0f2>] CPU: 0 Not tainted ESP: 007b:bf88ef9c EFLAGS: 00000246 Not tainted
        EAX: ffffffda EBX: 00000000 ECX: bf88efc8 EDX: 080483c8
        ESI: 00000000 EDI: bf88efe0 EBP: bf88f038 DS: 007b ES: 007b
        081f9b28: [<08069ae8>] show_regs+0xb4/0xb9
        081f9b54: [<08058a1a>] panic_exit+0x25/0x3f
        081f9b68: [<08084f54>] notifier_call_chain+0x21/0x46
        081f9b88: [<08084fef>] __atomic_notifier_call_chain+0x17/0x19
        081f9ba4: [<08085006>] atomic_notifier_call_chain+0x15/0x17
        081f9bc0: [<0807039a>] panic+0x52/0xd8
        081f9be0: [<080587ba>] segv+0x233/0x23d
        081f9c8c: [<08058582>] segv_handler+0x4f/0x54
        081f9cac: [<08067453>] sig_handler_common_skas+0xb7/0xd4
        081f9cd4: [<08064748>] sig_handler+0x34/0x44
        081f9cec: [<080648b5>] handle_signal+0x4c/0x7a
        081f9d0c: [<08066227>] hard_handler+0xf/0x14
        081f9d1c: [<00776420>] 0x776420

    The first process (pid = 1) gave me the vm_start without any problems, but when I try to access the second process, the kernel crashes. Can anyone tell me what's wrong, and maybe how to fix it as well? Thanks a lot! (sorry for the bad formatting....)
    edit: This is done in a Fedora 2.6 core in a UML environment.
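    The crash pattern (pid 1 works, pid 2 faults at address 0x0) is what kernel threads produce: they have no user address space, so p->mm is NULL and m->mmap dereferences a null pointer. A sketch of the loop with that case skipped (same fields as above; kernel threads simply print no vm_start line):

        for_each_process(p) {
            printk("%d\n", p->pid);
            m = p->mm;
            if (m == NULL || m->mmap == NULL)   /* kernel thread, or no mappings yet: skip */
                continue;
            printk("vm_start is %lu\n", m->mmap->vm_start);
        }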

    Read the article

  • Fast screen capture and lost Vsync

    - by user338759
    Hi, I'd like to generate a movie in real time with a self-made application doing fast screen captures, with part of the screen occupied by a running 3D application. I'm aware that several applications already exist for this (like FRAPS or Taksi), and even dedicated DirectShow filters (like UScreenCapture), but I really need to do this with my own external application. When correctly set up (UScreenCapture + ffdshow), capturing and compressing a full screen does not consume as much CPU as you would expect (about 15%), and does not impair the performance of the 3D app. The problem with doing a capture from an external application is that the 3D application loses its Vsync and becomes a choppy, difficult to use 3D application (the 3D app is only presented on a small part of the screen, the rest being GDI and DirectX). FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the Vsync and does its capture (with glReadPixels, ...) without perturbing it. Doing this does not solve my problem, since I want the full composed screen image (including 3D and the rest) AND a smooth 3D app. UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app is still out of sync. Doing a BitBlt is too slow and CPU consuming for real time 30 fps acquisition (at least under Windows XP, not sure about 7). My question is whether there is a way to achieve my goal with Windows 7 and its brand new DirectX compositing engine? Windows 7 manages to show live VSynced duplicated previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app? Any other suggestion or technology? Thank you

    Read the article

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my Genetic Algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4GHz each) and 7 gigs of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running:

        matlabpool open 2

    On the EC2 instance, though, MATLAB is only capable of detecting 1 out of the 8 processors, and if I try running:

        matlabpool open 8

    I get an error saying that the ClusterSize is 1 since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that I have my 2 cores on a single processor locally, while the EC2 instance has 8 distinct processors. My question is, how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB with multiple EC2 instances (not to multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!

    Read the article

  • How do I test OpenCL on GPU when logged in remotely on Mac?

    - by Christopher Bruns
    My OpenCL program can find the GPU device when I am logged in at the console, but not when I am logged in remotely with ssh. Further, if I run the program as root in the ssh session, the program can find the GPU. The computer is a Snow Leopard Mac with a GeForce 9400 GPU. If I run the program (see below) from the console or as root, the output is as follows (notice the "GeForce 9400" line):

        2 devices found
        Device #0 name = GeForce 9400
        Device #1 name = Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz

    but if it is just me, over ssh, there is no GeForce 9400 entry:

        1 devices found
        Device #0 name = Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz

    I would like to test my code on the GPU without having to be root. Is that possible? Simplified GPU finding program below:

        #include <stdio.h>
        #include <OpenCL/opencl.h>

        int main(int argc, char** argv)
        {
            char dname[500];
            size_t namesize;
            cl_device_id devices[10];
            cl_uint num_devices;
            int d;

            clGetDeviceIDs(0, CL_DEVICE_TYPE_ALL, 10, devices, &num_devices);
            printf("%d devices found\n", num_devices);
            for (d = 0; d < num_devices; ++d) {
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, 500, dname, &namesize);
                printf("Device #%d name = %s\n", d, dname);
            }
            return 0;
        }

    EDIT: I found essentially the same question being asked on nvidia's forums. Unfortunately, the only answer was of the form "this is the wrong forum".

    Read the article

  • Performance problems writing to a log4net FileAppender from multiple threads

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple O/S threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a LogAppender from separate O/S threads. The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock gets disabled, log4net reports errors about the file already being locked by another process (thread). A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender, with any other threads simply adding their messages to a queue. In that way, MinimalLock could be disabled to greatly improve the performance of logging. Additionally, the application does a lot of CPU intensive work, so it would also improve performance to use a separate thread for writing to the file so the CPU never waits on the I/O to complete. So the question is, does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, that makes it possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
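    As far as I can tell, the log4net releases of that era ship no dedicated asynchronous file appender, so a common pattern is a thin queue in front of the logger: callers enqueue messages and one dedicated thread drains the queue and makes the log4net calls, which lets MinimalLock be turned off. A sketch of the idea (the logger name is a placeholder; BlockingCollection assumes .NET 4):

        using System.Collections.Concurrent;
        using System.Threading;
        using log4net;

        public static class AsyncLog
        {
            private static readonly ILog Log = LogManager.GetLogger("TickZoom");
            private static readonly BlockingCollection<string> Queue = new BlockingCollection<string>();

            static AsyncLog()
            {
                var writer = new Thread(() =>
                {
                    // Only this thread ever touches the appender, so file locking stops being an issue.
                    foreach (string message in Queue.GetConsumingEnumerable())
                        Log.Info(message);
                });
                writer.IsBackground = true;
                writer.Start();
            }

            // Called from any worker thread; never blocks on file I/O.
            public static void Info(string message)
            {
                Queue.Add(message);
            }
        }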

    Read the article

  • Datastore performance, my code or the datastore latency

    - by fredrik
    For the last month I have had a bit of a problem with a quite basic datastore query. It involves 2 db.Models, with one referring to the other via a db.ReferenceProperty. The problem is that, according to the admin logs, the request takes about 2-4 seconds to complete. I stripped it down to a bare form and a list to display the results. The put works fine, but the get accumulates (in my opinion) way too much CPU time.

        # The get looks like this:
        outputData['items'] = {}
        labelsData = Label.all()
        for label in labelsData:
            labelItem = label.item.name
            if labelItem not in outputData['items']:
                outputData['items'][labelItem] = {
                    'item': labelItem,
                    'labels': []
                }
            outputData['items'][labelItem]['labels'].append(label.text)

        path = os.path.join(os.path.dirname(__file__), 'index.html')
        self.response.out.write(template.render(path, outputData))

        # And the models:
        class Item(db.Model):
            name = db.StringProperty()

        class Label(db.Model):
            text = db.StringProperty()
            lang = db.StringProperty()
            item = db.ReferenceProperty(Item)

    I've tried to structure it a number of different ways, i.e. instead of a ReferenceProperty, storing all Label keys in the Item model as a db.ListProperty. My test data is just 10 rows in Item and 40 in Label. So my question: is it a fool's errand to try to optimize this because the high CPU usage is due to problems with the datastore, or have I just screwed up somewhere in the code? ..fredrik
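    One thing worth measuring here: label.item.name silently issues an extra datastore get for every Label, because dereferencing a ReferenceProperty fetches the referenced entity. A sketch of the same loop with the Items fetched in a single batch instead (same model names as above; get_value_for_datastore returns the raw key without a fetch):

        item_keys = []
        labels = Label.all().fetch(1000)
        for label in labels:
            item_keys.append(Label.item.get_value_for_datastore(label))

        # One batch get for all referenced Items instead of one get per Label
        items_by_key = dict(zip(item_keys, db.get(item_keys)))

        outputData['items'] = {}
        for label, key in zip(labels, item_keys):
            name = items_by_key[key].name
            entry = outputData['items'].setdefault(name, {'item': name, 'labels': []})
            entry['labels'].append(label.text)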

    Read the article

  • Submitting R jobs using PBS

    - by Tony
    I am submitting a job using qsub that runs parallelized R. My intention is to have the R program run on 4 cores rather than 8 cores. Here are some of my settings in the PBS file:

        #PBS -l nodes=1:ppn=4
        ....
        time R --no-save < program1.R > program1.log

    I am issuing the command "ta job_id" and I'm seeing that 4 cores are listed. However, the job occupies a large amount of memory (31944900k used vs 32949628k total). If I were to use 8 cores, the job would hang due to the memory limitation.

        top - 21:03:53 up 77 days, 11:54,  0 users,  load average: 3.99, 3.75, 3.37
        Tasks: 207 total,   5 running, 202 sleeping,   0 stopped,   0 zombie
        Cpu(s): 30.4%us,  1.6%sy,  0.0%ni, 66.8%id,  0.0%wa,  0.0%hi,  1.2%si,  0.0%st
        Mem:  32949628k total, 31944900k used,  1004728k free,   269812k buffers
        Swap:  2097136k total,     8360k used,  2088776k free,  6030856k cached

    Here is a snapshot when issuing the command ta job_id:

        PID  USER PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        1794 x    25 0  6247m 6.0g 1780 R 99.2 19.1 8:14.37 R
        1795 x    25 0  6332m 6.1g 1780 R 99.2 19.4 8:14.37 R
        1796 x    25 0  6242m 6.0g 1784 R 99.2 19.1 8:14.37 R
        1797 x    25 0  6322m 6.1g 1780 R 99.2 19.4 8:14.33 R
        1714 x    18 0  65932 1504 1248 S  0.0  0.0 0:00.00 bash
        1761 x    18 0  63840 1244 1052 S  0.0  0.0 0:00.00 20016.hpc
        1783 x    18 0  133m  7096 1128 S  0.0  0.0 0:00.00 python
        1786 x    18 0  137m  46m  2688 S  0.0  0.1 0:02.06 R

    How can I prevent other users from using the other 4 cores? I would like the scheduler to see my job as using all 8 cores, with 4 of them idling. Could anyone kindly help me out on this? Can this be solved using PBS? Many thanks
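    A common way to keep the remaining cores (and the node's memory) away from other jobs, assuming the cluster allows whole-node requests, is to ask PBS for all 8 slots while still starting only 4 R workers:

        #PBS -l nodes=1:ppn=8
        # The scheduler now reserves the full node for this job;
        # the R script itself is still limited to 4 worker processes.
        time R --no-save < program1.R > program1.log

    Whether an exclusive-node or memory-based policy is also available depends on how the site's scheduler is configured, so it is worth checking with the cluster administrators.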

    Read the article

  • Problem: Vectorizing Code with Intel Visual FORTRAN for X64

    - by user313209
    I'm compiling my Fortran 90 code using Intel Visual Fortran on Windows Server 2003 Enterprise x64 Edition. When I compile the code for the 32-bit architecture using the automatic and manual vectorization options, the code compiles and is vectorized, and when I run it on an 8-core system it uses 70% of the CPU, which shows me that the vectorizing is working. But when I compile the code with the 64-bit compiler, it says that the code is vectorized, yet when I run it it only shows CPU usage of about 12%, which is full usage of one core out of 8; so while the compiler says the code is vectorized, the vectorization is not working. And it's strange to me, because it's on an x64 Edition of Windows and I was expecting to see the reverse result: I thought it should be better to run code compiled for the 64-bit architecture on 64-bit Windows. Does anyone have any idea why the compiled code is not able to use the full power of multiple cores in the 64-bit compiled version? Thanks in advance for your responses.

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file with several thousand requests (between 10000 and 20000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):
    - string.Split - splits the line values into an array of values
    - string.Contains - checking if the user agent contains a specific agent string (to determine the browser ID)
    - string.ToLower - various purposes
    - StreamReader.ReadLine - to read the log file line by line
    - string.StartsWith - determine if the line is a column definition line or a line with values
    There were some others that I was able to replace. For example, the dictionary getter was also taking lots of resources, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core and the total time it takes to load the file I mentioned is about 1 second. This is really bad. Imagine a site that has tens of thousands of visits a day: it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this is just a .NET limitation and I can't do much about it.
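    A large share of the cost in parsers like this is usually the temporary strings that Split and ToLower create for every line. A sketch of a leaner per-line pass (a space-separated, W3C-style log layout is assumed; System and System.IO are the only namespaces needed):

        static void ParseLog(string path)
        {
            using (var reader = new StreamReader(path))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    if (line.StartsWith("#", StringComparison.Ordinal))
                        continue;                                   // column-definition line

                    // Case-insensitive search without allocating a lowered copy of the line
                    if (line.IndexOf("firefox", StringComparison.OrdinalIgnoreCase) >= 0)
                    {
                        // ... tag the request as a Firefox hit ...
                    }

                    // Walk the fields with IndexOf/Substring instead of Split to cut allocations
                    int start = 0, end;
                    while ((end = line.IndexOf(' ', start)) >= 0)
                    {
                        string field = line.Substring(start, end - start);
                        // ... store or test the field ...
                        start = end + 1;
                    }
                }
            }
        }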

    Read the article

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages. The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but also to minimize CPU usage. However, one issue that I am having is that I can't figure out how to programmatically check when the version of the code uploaded to App Engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution, since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache expires). Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
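    On detecting deployments programmatically: the App Engine runtime exposes the deployed version string as the CURRENT_VERSION_ID environment variable, and its minor part changes on every upload, so it can be folded into the cache key and regeneration then happens only after a new deployment. A sketch combining that with the datastore-cache idea (model and function names are placeholders):

        import os
        from google.appengine.ext import db

        CODE_VERSION = os.environ.get('CURRENT_VERSION_ID', 'dev')

        class CachedPage(db.Model):
            html = db.TextProperty()
            version = db.StringProperty()

        def get_page(page_name, generate_html):
            key_name = '%s:%s' % (page_name, CODE_VERSION)
            page = CachedPage.get_by_key_name(key_name)
            if page is None:
                # First request since this version was deployed: generate once, store, reuse.
                page = CachedPage(key_name=key_name,
                                  html=generate_html(),
                                  version=CODE_VERSION)
                page.put()
            return page.html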

    Read the article

  • What does flushing thread local memory to global memory mean?

    - by Jack Griffith
    Hi, I am aware that the purpose of volatile variables in Java is that writes to such variables are immediately visible to other threads. I am also aware that one of the effects of a synchronized block is to flush thread-local memory to global memory. I have never fully understood the references to 'thread-local' memory in this context. I understand that data which only exists on the stack is thread-local, but when talking about objects on the heap my understanding becomes hazy. I was hoping to get comments on the following points:
    - When executing on a machine with multiple processors, does flushing thread-local memory simply refer to the flushing of the CPU cache into RAM?
    - When executing on a uniprocessor machine, does this mean anything at all?
    - If it is possible for the heap to have the same variable at two different memory locations (each accessed by a different thread), under what circumstances would this arise? What implications does this have for garbage collection? How aggressively do VMs do this kind of thing?
    Overall, I think I am trying to understand whether thread-local means memory that is physically accessible by only one CPU, or whether there is logical thread-local heap partitioning done by the VM. Any links to presentations or documentation would be immensely helpful. I have spent time researching this, and although I have found lots of nice literature, I haven't been able to satisfy my curiosity regarding the different situations & definitions of thread-local memory. Thanks very much.
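    For the visibility half of the question, a minimal Java illustration of what the flush guarantees buy: without volatile on the flag, the reader thread is allowed to spin forever on a stale cached value; with it, the write must become visible, and the volatile write also publishes the ordinary write that precedes it.

        public class VisibilityDemo {
            private static volatile boolean ready = false;
            private static int payload = 0;

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(() -> {
                    while (!ready) { /* spin until the volatile write becomes visible */ }
                    System.out.println("payload = " + payload);   // guaranteed to print 42
                });
                reader.start();

                payload = 42;     // ordinary write...
                ready = true;     // ...published by the volatile write (happens-before the read that sees it)
                reader.join();
            }
        }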

    Read the article

  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi all, I am writing an application using boost threads and boost barriers to synchronize the threads. I have two machines on which to test the application. Machine 1 is a Core 2 Duo (T8300) machine (Windows XP Professional, 4GB RAM) where I am getting the following performance figures:
    - Number of threads: 1, TPS: 21
    - Number of threads: 2, TPS: 35 (66% improvement); further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores.
    Machine 2 is a dual quad-core (Xeon X5355) machine (Windows 2003 Server with 4GB RAM) and has 8 effective cores:
    - Number of threads: 1, TPS: 21
    - Number of threads: 2, TPS: 27 (28% improvement)
    - Number of threads: 4, TPS: 25
    - Number of threads: 8, TPS: 24
    As you can see, performance degrades after 2 threads (though the machine has 8 cores). If the program had some bottleneck, then it should have degraded for 2 threads as well. Any ideas or explanations? Does the OS have some role in performance? It seems like the Core 2 Duo (2.4GHz) scales better than the Xeon X5355 (2.66GHz) even though the Xeon has the higher clock speed. Thank you -Zoolii

    Read the article

  • Java runs out of memory, even though I give it plenty!

    - by spitzanator
    Hey, folks. So, I'm running a Java server (specifically Winstone: http://winstone.sourceforge.net/ ) like this:

        java -server -Xmx12288M -jar /usr/share/java/winstone-0.9.10.jar --useSavedSessions=false --webappsDir=/var/servlets --commonLibFolder=/usr/share/java

    This has worked fine in the past, but now it needs to load a bunch more stuff into memory than it has before. The odd part is that, according to 'top', it has 15.0g of VIRT(ual memory) and its RES(ident set) is 8.4g. Once it hits 8.4g, the CPU hangs at 100% (even though it's loading from disk), and eventually I get Java's OutOfMemoryError. Presumably, the CPU hanging at 100% is Java doing garbage collection. So, my question is, what gives? I gave it 12 gigs of memory! And it's only using 8.2 gigs before it throws in the towel. What am I doing wrong? Oh, and I'm using

        java version "1.6.0_07"
        Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
        Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)

    on Linux. Thanks, Matt

    Read the article
