Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • Software Center seems to freeze system when installing, syslog has "blocked for more than 120 seconds" errors

    - by nbm
    12.04 (precise) 64-bit, Kernel Linux 3.2.0-39, 3.6GB memory, Intel Core 2 Duo CPU @ 2.40GHz x2. WUBI-installed Ubuntu running on a MacBook Pro 7.1 with OSX, running Vista via Boot Camp (hey, I like lots of OS's, m'kay?).

    When installing from Ubuntu Software Center my system very frequently freezes. This has happened on 4 of the last 5 installs. Most recently I was installing the Google Earth .deb from Google's website: clicking the .deb file automatically opens Software Center (otherwise I would have used Synaptic, as I've grown to expect Software Center to freeze my system and I'm rather tired of it). By "freeze" I mean nothing works: no dash, no launcher, no mouse movement, no alt-tab, can't open a terminal (the keyboard does not work). Software Center does show the "installing" icon, but after that it greys out and I can't click anything. REISUB has no effect, but a cold power-down and restart is possible. Occasionally, after 5-10 minutes, I'll be able to move the mouse / use the keyboard and run a launcher command or two, although other open apps (Chrome and Software Center) will still be greyed-out/frozen. (I've never waited longer than that - if it's still unresponsive after 15 minutes I just power down and restart.)

    Most recently, which is why I am finally posting a question, I waited about 15 minutes and was finally able to open System Monitor while this was going on. Processes told me that System Monitor was using about 20% of CPU, and nothing else was using much (zeros mostly). In fact I didn't even see Software Center listed. However, at this point the system finally partially unfroze, the installation completed, and while I wasn't able to close Software Center, I was able to do a system shutdown and fresh restart, and then I went and took a look at the syslog.

    In /var/log/syslog I see a lot of "blocked for more than 120 seconds" messages. This is similar to the question "ubuntu hang out with this message: blocked for more than 120 seconds", which has not been answered, and I'm not running a virtual machine. My full syslog with stack traces looks very, very similar to the one in "Why do tasks on Amazon Xen instance block for over 120 seconds causing server to hang?" Note that that question was solved, but only because the problem was being caused by Amazon and Amazon fixed the bug. I'm not running anything Amazon-related; my syslog does look very similar, however. My question is also similar to "Troubleshooting server hang", but the referenced "duplicate" in that question is about how to kill processes/restart when the system freezes. I know how to kill processes and restart. I want to figure out what is causing the problem so I can try to fix it.

    I realize that I could just use Synaptic instead of Ubuntu Software Center, but I'd like to try to solve the problem if possible. I'm thinking I should perhaps submit a bug report, but I wanted to first see if anyone else was having similar problems, and if so what you all did to fix them. I see a number of questions about Software Center freezing, and others, including those I linked, about the "blocked for more than 120 seconds" log error, but I didn't see any question that links the two. I did save a copy of the syslog report if anyone wants to see it, but as mentioned it's quite similar to the one posted in the Amazon-related question - and I didn't want to take up even more space unnecessarily as, my apologies, this question has already become extremely verbose!
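    Since the message comes from the kernel's hung-task watchdog, the syslog entries themselves say which task stalled and where. A diagnostic sketch, not a definitive fix (the sysctl values are illustrative, and WUBI's loopback-file I/O is a plausible but unconfirmed culprit):

        # see which tasks the watchdog flagged and the I/O path in their stack traces
        grep -B2 -A10 "blocked for more than 120 seconds" /var/log/syslog

        # smaller writeback bursts can shorten I/O stalls on slow or loopback-backed disks
        sudo sysctl vm.dirty_ratio=5 vm.dirty_background_ratio=2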

    Read the article

  • Java VM memory usage - Java SE Embedded 8

    - by kshimizu-Oracle
    When you run Java on an OS, the memory the VM consumes can be split into roughly three areas. HEAP: the area where Java objects are allocated. NON-HEAP: memory the JVM manages outside the heap, consisting mainly of two regions, the Code Cache and the Metaspace. Code Cache: where JIT-compiled native code is stored. Metaspace: where class metadata is kept, outside the HEAP. JVM internal memory: working memory used by the VM itself.

    HEAP usage can be monitored with Java Mission Control, and the Java SE standard APIs expose the same numbers, so an application can also check them itself without attaching Mission Control. The maximum HEAP size is set with the "-Xmx" VM option and reported by java.lang.Runtime.maxMemory(). The initial HEAP size is set with "-Xms"; the currently committed size, which starts at "-Xms" and can grow up to "-Xmx", is returned by java.lang.Runtime.totalMemory(), and the unused portion by java.lang.Runtime.freeMemory(). NON-HEAP usage can likewise be read through the API or observed in Java Mission Control, where it appears under the "NON_HEAP" label.

    Note that HEAP plus NON-HEAP is not everything: the Java VM itself uses additional internal memory that these figures do not cover. To measure the real footprint of the whole process, use OS facilities - on Linux, procfs (VmHWM or VmRSS) - rather than adding up the HEAP/NON-HEAP numbers.

    Memory consumption also depends on which JVM you deploy. The Embedded editions of the Oracle JVM are built per target CPU architecture and come in Minimal/Client/Server variants with different footprints, and the compact profiles shrink the class library as well; chapters 2-3 of the Concept Guide (http://docs.oracle.com/javase/8/embedded/embedded-concepts/basic-concepts.htm) describe these trade-offs.

    The main VM options for tuning the footprint are: -Xms (initial HEAP size), -Xmx (maximum HEAP size), -XX:ReservedCodeCacheSize (Code Cache size - e.g., if the JIT compiler is disabled the Code Cache is unnecessary and this can be set to 0), -Xint (run interpreter-only with no JIT; slower, but no Code Cache is consumed), -Xss (per-thread stack size; smaller stacks save memory across many threads, at the risk of stack overflow), and -XX:CompileThreshold (the number of invocations before the JIT compiles a method; raising it means fewer methods get compiled, shrinking Code Cache usage at some cost in speed). In every case this is a trade-off between memory footprint and performance, so measure before and after.
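    Where the text mentions querying these figures from the standard API, a minimal sketch (plain java.lang and java.lang.management calls, nothing Embedded-specific):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;

        public class MemoryReport {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                System.out.println("heap max   (-Xmx): " + rt.maxMemory());
                System.out.println("heap total (committed, starts at -Xms): " + rt.totalMemory());
                System.out.println("heap free  : " + rt.freeMemory());

                // NON-HEAP (Code Cache + Metaspace) via the management API
                MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
                System.out.println("non-heap   : " + mx.getNonHeapMemoryUsage());
            }
        }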

    Read the article

  • Stateful vs. Stateless Webservices

    - by chrsk
    Imagine a more complex CRUD application with a three-tier architecture that communicates over webservices. The client starts a conversation with the server and walks through some wizard-like steps. To process the wizard, the client needs feedback from the server. We started a discussion about whether to use stateful or stateless webservices for this. I did some research, combined with my own experience, which leads me to the question below. Stateless webservices have the following properties (in our case):

    + high scalability
    + high availability
    + high speed
    + rapid testing
    - bloated contract
    - more logic implemented on the server side

    But we can cross out the first two points; our application doesn't need high scalability or availability. So we come to the stateful webservice. I've read a bunch of blogs and forum posts, and the most commonly cited points about implementing a stateful webservice were:

    + simplifies the contract (protocol)
    - harder testing
    - runs counter to the basic architecture of HTTP

    But don't almost all web applications have these bad points? Web applications use cookies, query strings, session ids, and all that stuff to work around the statelessness of HTTP. So why is it so bad for webservices?

    Read the article

  • Visual Studio 2010 64-bit COM Interop Issue

    - by Adam Driscoll
    I am trying to add a VC6 COM DLL to our VS2010RC C# solution. The DLL was compiled with the VC6 tools to create an x86 version, and with the VC7 cross-platform tools to generate an x64 version. The x86 version of the assembly works fine as long as the consuming C# project's platform is set to x86. It doesn't matter whether the x64 or the x86 version of the DLL is actually registered - it works with both. If the platform is set to 'Any CPU' I receive a BadImageFormatException on the load of the Interop.<name>.dll. As for the x64 version, I cannot even get the project to build. I receive the tlbimp error: TlbImp : error TI0000: A single valid machine type compatible with the input type library must be specified. Has anyone seen this issue? EDIT: I've done a lot more digging into this and think it may be a Visual Studio bug. I have a clean solution. I bring in my COM assembly with the language-agnostic 'Any CPU' selected. The processor architecture of the resulting interop DLL is x86 rather than MSIL. I may have to build the interop by hand for now to get this to work (see the sketch below). If anyone has another suggestion, let me know.
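    In case it helps, here is what building the interop assembly by hand might look like - a sketch, not a verified fix; the library name is hypothetical, and /machine is the tlbimp switch the TI0000 error is referring to:

        tlbimp MyComLib.tlb /machine:X64 /out:x64\Interop.MyComLib.dll
        tlbimp MyComLib.tlb /machine:X86 /out:x86\Interop.MyComLib.dll

    Referencing the matching interop assembly per platform configuration then sidesteps the converter's guess at the machine type.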

    Read the article

  • Rhino Mocks, Dependency Injection, and Separation of Concerns

    - by whatispunk
    I am new to mocking and dependency injection and need some guidance. My application uses a typical N-tier architecture where the BLL references the DAL, and the UI references the BLL but not the DAL. Pretty straightforward. Let's say, for example, I have the following classes: class MyDataAccess : IMyDataAccess {} class MyBusinessLogic {} Each exists in a separate assembly. I want to mock MyDataAccess in the tests for MyBusinessLogic. So I added a constructor to the MyBusinessLogic class that takes an IMyDataAccess parameter for the dependency injection. But now when I try to create an instance of MyBusinessLogic in the UI layer, it requires a reference to the DAL. I thought I could define a default constructor on MyBusinessLogic to set a default IMyDataAccess implementation, but not only does this seem like a code smell, it didn't actually solve the problem: I'd still have a public constructor with IMyDataAccess in the signature, so the UI layer would still require a reference to the DAL in order to compile. One possible solution I am toying with is to create an internal constructor for MyBusinessLogic with the IMyDataAccess parameter. Then I can use an Accessor from the test project to call the constructor. But there's still that smell. What is the common solution here? I must just be doing something wrong. How could I improve the architecture?
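    One common answer is to move the interface out of the DAL entirely, so only the composition root knows about concrete data access. A sketch under stated assumptions (the names are hypothetical; the test uses Rhino Mocks' static stub API):

        // Contracts assembly - referenced by BLL, DAL and the test project
        public interface IMyDataAccess
        {
            string GetCustomerName(int id);
        }

        // BLL assembly - depends only on the contract
        public class MyBusinessLogic
        {
            private readonly IMyDataAccess dataAccess;

            public MyBusinessLogic(IMyDataAccess dataAccess)
            {
                this.dataAccess = dataAccess;
            }

            public string DescribeCustomer(int id)
            {
                return "Customer: " + dataAccess.GetCustomerName(id);
            }
        }

        // Test project (Rhino Mocks)
        var stub = MockRepository.GenerateStub<IMyDataAccess>();
        stub.Stub(d => d.GetCustomerName(42)).Return("Ada");
        var logic = new MyBusinessLogic(stub);
        Assert.AreEqual("Customer: Ada", logic.DescribeCustomer(42));

    With the interface in its own small assembly, the UI compiles against BLL + contracts, and only the application entry point (or a DI container) references the DAL to supply the real implementation.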

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU-bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME the response (from making ListBox selections) alternates from 30K to 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU-bound while it is presumably processing this data. So irrespective of whether or not we should be using an UpdatePanel, we'd like to figure out why every other async postback response is more than 10 times larger than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?
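    One way to see where the extra 400K lives is to split the captured response into its sections: the partial-rendering payload is a series of length|type|id|content records, where "updatePanel" sections carry re-rendered HTML and "hiddenField" sections carry __VIEWSTATE. A rough parser sketch, assuming the standard ASP.NET AJAX delta format:

        using System;
        using System.Collections.Generic;

        static class DeltaParser
        {
            // splits an async postback body into (length, type, id, content) records
            public static IEnumerable<string[]> Parse(string body)
            {
                int pos = 0;
                while (pos < body.Length)
                {
                    string[] rec = new string[4];
                    for (int i = 0; i < 3; i++)           // length, type, id
                    {
                        int bar = body.IndexOf('|', pos);
                        rec[i] = body.Substring(pos, bar - pos);
                        pos = bar + 1;
                    }
                    int len = int.Parse(rec[0]);
                    rec[3] = body.Substring(pos, len);    // content is exactly len chars
                    pos += len + 1;                       // skip the trailing '|'
                    yield return rec;
                }
            }
        }

    Comparing the per-section sizes of a 30K response against a 430K one shows whether the growth is re-rendered markup or ViewState.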

    Read the article

  • running asp.net 3.5 and asp.net 2.0 in the same site

    - by cori
    We're running ASP.Net 2.0 on our corporate web site, and I'd like to get it up to ASP.Net 3.5 as smoothly as possible. The project/solution architecture in VS 2005 is an ASP.Net 2.0 web project and a .Net 2.0 data access layer project which is used by the site code. Upon opening the projects in a new VS 2008 solution, they seemed to be converted to .Net 3.5 with a minimum of fuss - they built correctly out of the box, deployed successfully, and seem to work just fine, which is exactly what I would expect given that .Net 2.0 and 3.5 share a common runtime. The major difference after the conversion is that the web.config file's referenced DLLs are now the 3.5 versions. What I would like to do is update the site piecemeal: as I make modifications to a given page, send the 3.5 version of that page over to our webserver, and not update the whole site at once. In testing on our dev box this approach seems to be working fine - the site code is interacting with the .Net 3.5 data access layer without difficulty, a handful of pages are running 3.5 code-behind (by this I mean they're running assemblies built in VS 2008 - the site is using single-page assemblies for code-behind), the 3.5 web.config is in place, and the bulk of the site is running code-behind assemblies built in VS 2005. Everything looks great - which makes me worried that I'm missing something. Is this architecture workable, or is there a problem lying in wait for me that I haven't considered?
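    For reference, the substantive change in the upgraded web.config is the set of 3.5 assembly references; a representative fragment (the version/token values are the standard ones for .NET 3.5):

        <compilation>
          <assemblies>
            <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </assemblies>
        </compilation>

    Because both builds target the same 2.0 CLR, assemblies compiled in VS 2005 and VS 2008 can coexist in one application as long as this config is in place.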

    Read the article

  • Cast exception when trying to create a new Task Scheduler task

    - by seaneshbaugh
    I'm attempting to create a new task in the Windows Task Scheduler in C#. What I've got so far is pretty much a copy and paste of http://bartdesmet.net/blogs/bart/archive/2008/02/23/calling-the-task-scheduler-in-windows-vista-and-windows-server-2008-from-managed-code.aspx Everything compiles just fine, but come run time I get the following exception:

        Unable to cast COM object of type 'System.__ComObject' to interface type 'TaskScheduler.ITimeTrigger'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{B45747E0-EBA7-4276-9F29-85C5BB300006}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

    Here's all the code, so you can see what I'm doing without following the above link:

        TaskSchedulerClass Scheduler = new TaskSchedulerClass();
        Scheduler.Connect(null, null, null, null);

        ITaskDefinition Task = Scheduler.NewTask(0);
        Task.RegistrationInfo.Author = "Test Task";
        Task.RegistrationInfo.Description = "Just testing this out.";

        ITimeTrigger Trigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY);
        Trigger.Id = "TestTrigger";
        Trigger.StartBoundary = "2010-05-12T06:15:00";

        IShowMessageAction Action = (IShowMessageAction)Task.Actions.Create(_TASK_ACTION_TYPE.TASK_ACTION_SHOW_MESSAGE);
        Action.Id = "TestAction";
        Action.Title = "Test Task";
        Action.MessageBody = "This is a test.";

        ITaskFolder Root = Scheduler.GetFolder("\\");
        IRegisteredTask RegisteredTask = Root.RegisterTaskDefinition("Background Backup", Task,
            (int)_TASK_CREATION.TASK_CREATE_OR_UPDATE, null, null,
            _TASK_LOGON_TYPE.TASK_LOGON_INTERACTIVE_TOKEN, "");

    The line that is throwing the exception is this one:

        ITimeTrigger Trigger = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY);

    The exception message kinda makes sense to me, but I'm afraid I don't know enough about COM to really know where to begin with this. Also, I should add that I'm using VS 2010 and I had to set the project to either x86 or x64 CPUs instead of the usual "Any CPU" because it kept giving me a BadImageFormatException. I doubt that's related to my current problem, but just in case I thought I might as well mention it.
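    A hunch worth checking (an assumption about the Task Scheduler 2.0 type library, not a confirmed fix): TASK_TRIGGER_DAILY creates a trigger that implements IDailyTrigger rather than ITimeTrigger, which would produce exactly this E_NOINTERFACE. Either cast to the interface that matches the trigger type, or request a plain time trigger:

        // cast to the interface matching the requested trigger type
        IDailyTrigger daily = (IDailyTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_DAILY);
        daily.DaysInterval = 1;

        // or, if a one-shot time trigger was the intent:
        ITimeTrigger timed = (ITimeTrigger)Task.Triggers.Create(_TASK_TRIGGER_TYPE2.TASK_TRIGGER_TIME);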

    Read the article

  • Install CLSQL on Mac OS X

    - by Ken
    I have SBCL installed (via macports/darwinports) on my Intel Core 2 Duo MacBook running 10.5.8. I've installed several libraries like this:

        (require 'asdf)
        (require 'asdf-install)
        (asdf-install:install 'cl-who)

    But when I tried to install CLSQL this way ('clsql), after it downloaded, I got this:

        ...
        ; registering #<SYSTEM CLSQL-UFFI {123D9E01}> as CLSQL-UFFI
        ; $ cd /Users/ken/.sbcl/site/clsql-5.0.5/uffi/; make
        cc -arch x86_64 -arch i386 -bundle /usr/lib/bundle1.o -flat_namespace -undefined suppress clsql_uffi.c -o clsql_uffi.dylib
        ld: duplicate symbol dyld_stub_binding_helper in /usr/lib/bundle1.o and /usr/lib/bundle1.o for architecture i386
        ld: duplicate symbol dyld_stub_binding_helper in /usr/lib/bundle1.o and /usr/lib/bundle1.o for architecture x86_64
        collect2: ld returned 1 exit status
        collect2: ld returned 1 exit status
        lipo: can't open input file: /var/folders/Nf/Nf4o5ArDFaWBH2OwtnWM3E+++TQ/-Tmp-//ccJyZxou.out (No such file or directory)
        make: *** [clsql_uffi.so] Error 1

    Is there something I forgot, or some trick to get it to build on Mac OS X? I know very little about C libraries on the Mac these days, so I don't even know where to start on this. Thanks!

    Read the article

  • ASP.NET MVC 1 and 2 on Mono 2.4 with Fluent NHibernate

    - by SztupY
    Hi! I'd like to create an application using ASP.NET MVC that should run under Mono 2.4 (compiling will be done on a Windows box). Has anyone had any luck with this? Here is what I've already tried:

    - ASP.NET MVC on Mono without any persistence model support, using NHaml as the view engine
    - S#arp Architecture, which is quite a good framework IMHO, but it depends too much on things that don't work well under Mono (like Windsor)

    The first part worked fine; I didn't encounter any major problems. But I couldn't get the second part working. It seems its dependency on Castle.Windsor breaks the whole Mono support (but there might be other parts too). Therefore I decided to create an alternative framework that borrows some of the ideas of S#arp Architecture, but is designed to work under Mono (and if I'm able to do this, I'll release it for the community, of course). The controller and view parts are working fine (not much magic here, though; they have always worked), but I have some questions before I start on the persistence part:

    - Which NHibernate versions work under Mono? I've heard 1.2 works fine. Does 2.0.1/2.1 beta work under Mono?
    - Do Fluent.NHibernate and NHibernate.Linq work under Mono? (For the latter, it seems it needs some dependencies that aren't available in Mono.)
    - Are there any good alternatives to NHibernate for persistence support under Mono?

    Alternative questions: Are there any frameworks that already have Mono + persistence + ASP.NET MVC support, or am I the first one to think about this? If you have already done this: what are your opinions on stability/usability?

    Thanks for the answers. EDIT: Updated the framework to support ASP.NET MVC 2: http://shaml.sztupy.hu/

    Read the article

  • M2Crypto doesn't install in venv, or swig doesn't define __x86_64__ which breaks compiling against OpenSSL

    - by Lorin Hochstein
    I'm trying to install the Python M2Crypto package into a virtualenv on an x86_64 RHEL 6.1 machine. This process invokes swig, which fails with the following error:

        $ virtualenv -q --no-site-packages venv
        $ pip install -E venv M2Crypto==0.20.2
        Downloading/unpacking M2Crypto==0.20.2
          Downloading M2Crypto-0.20.2.tar.gz (412Kb): 412Kb downloaded
          Running setup.py egg_info for package M2Crypto
        Installing collected packages: M2Crypto
          Running setup.py install for M2Crypto
            building 'M2Crypto.__m2crypto' extension
            swigging SWIG/_m2crypto.i to SWIG/_m2crypto_wrap.c
            swig -python -I/usr/include/python2.6 -I/usr/include -includeall -o SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i
            /usr/include/openssl/opensslconf.h:31: Error: CPP #error ""This openssl-devel package does not work your architecture?"". Use the -cpperraswarn option to continue swig processing.
            error: command 'swig' failed with exit status 1
        Complete output from command /home/lorin/venv/bin/python -c "import setuptools;__file__='/home/lorin/venv/build/M2Crypto/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-BFiNtU-record/install-record.txt --install-headers /home/lorin/venv/include/site/python2.6:

    I've got OpenSSL 1.0.0 installed via RPM packages from RedHat. The part of /usr/include/openssl/opensslconf.h that causes the error looks like this:

        #if defined(__i386__)
        #include "opensslconf-i386.h"
        #elif defined(__ia64__)
        #include "opensslconf-ia64.h"
        #elif defined(__powerpc64__)
        #include "opensslconf-ppc64.h"
        #elif defined(__powerpc__)
        #include "opensslconf-ppc.h"
        #elif defined(__s390x__)
        #include "opensslconf-s390x.h"
        #elif defined(__s390__)
        #include "opensslconf-s390.h"
        #elif defined(__sparc__) && defined(__arch64__)
        #include "opensslconf-sparc64.h"
        #elif defined(__sparc__)
        #include "opensslconf-sparc.h"
        #elif defined(__x86_64__)
        #include "opensslconf-x86_64.h"
        #else
        #error "This openssl-devel package does not work your architecture?"
        #endif

    gcc has the right variable defined:

        $ echo | gcc -E -dM - | grep x86_64
        #define __x86_64 1
        #define __x86_64__ 1

    But apparently swig doesn't, since this is the line that's failing:

        swig -python -I/usr/include/python2.6 -I/usr/include -includeall -o \
            SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i

    Is there a way to fix this by changing something in my system configuration? M2Crypto gets installed in a virtualenv as part of a larger script I don't control, so avoiding mucking around with the M2Crypto files would be a good thing.
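    For what it's worth, two workarounds have been reported for this exact failure - treat both as assumptions to verify rather than a definitive fix. M2Crypto tarballs of this era ship a wrapper script for RedHat-derived distros that passes the right swig flags, and alternatively the missing macro can be handed to swig through the environment:

        # option 1: the bundled wrapper (builds outside the larger script)
        tar xzf M2Crypto-0.20.2.tar.gz
        cd M2Crypto-0.20.2
        ./fedora_setup.sh build
        ./fedora_setup.sh install

        # option 2: define the macro swig is missing (assumes this M2Crypto
        # setup.py honors SWIG_FEATURES - verify before relying on it)
        SWIG_FEATURES="-D__x86_64__" pip install -E venv M2Crypto==0.20.2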

    Read the article

  • Nonblocking Tcp server

    - by hoodoos
    It's not really a question; I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as low a number of threads as it can. Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. Worker threads do all the read/write/processing work on the clients' sockets. So my problem is in building an efficient worker process, and I came to a problem I can't really solve yet. The worker code is something like this (the code is really simplified, just to show the place where I have my problem):

        List<Socket> readSockets = new List<Socket>();
        List<Socket> writeSockets = new List<Socket>();
        List<Socket> errorSockets = new List<Socket>();

        while (true)
        {
            Socket.Select(readSockets, writeSockets, errorSockets, 10);

            foreach (Socket readSocket in readSockets)
            {
                // do reading here
            }

            foreach (Socket writeSocket in writeSockets)
            {
                // do writing here
            }

            // POINT2 and here's the problem I will describe below
        }

    It all works smoothly except for 100% CPU utilization, because of the while loop cycling over and over again. If I have my clients doing a send-receive-disconnect routine it's not that painful, but if I try to keep them alive, doing send-receive-send-receive over and over again, it really eats up all the CPU. So my first idea was to put a sleep there: I check whether all sockets have sent their data and then put Thread.Sleep at POINT2, just for 10ms. But this 10ms later produces a huge delay of that same 10ms when I want to receive the next command from the client socket. For example, if I don't try to "keep alive", commands are executed within 10-15ms, and with keep-alive it becomes worse by at least 10ms :( Maybe it's just a poor architecture? What can be done so my processor won't hit 100% utilization and my server reacts to something appearing in the client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
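    One observation worth checking (a sketch based on Socket.Select's documented semantics, not a guaranteed fix): the last argument is a time-out in MICROseconds, so 10 returns almost immediately and the loop spins hot, while a negative value makes Select block until one of the listed sockets is actually ready. Select also prunes the lists in place, so they must be rebuilt each pass - and the write list should only contain sockets with data pending, or Select returns immediately because idle sockets are almost always writable:

        while (true)
        {
            List<Socket> read = new List<Socket>(allClients);                 // rebuilt each pass
            List<Socket> write = new List<Socket>(clientsWithPendingWrites);  // hypothetical bookkeeping
            List<Socket> error = new List<Socket>(allClients);

            Socket.Select(read, write, error, -1);  // -1: block until something is ready

            // after Select returns, the lists contain only the ready sockets
        }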

    Read the article

  • Linux Kernel programming: trying to get vm_area_struct->vm_start crashes kernel

    - by confusedKid
    Hi, this is for an assignment at school, where I need to determine the size of the processes on the system using a system call. My code is as follows:

        ...
        struct task_struct *p;
        struct vm_area_struct *v;
        struct mm_struct *m;

        read_lock(&tasklist_lock);
        for_each_process(p) {
            printk("%ld\n", p->pid);
            m = p->mm;
            v = m->mmap;
            long start = v->vm_start;
            printk("vm_start is %ld\n", start);
        }
        read_unlock(&tasklist_lock);
        ...

    When I run a user level program that calls this system call, the output that I get is:

        1
        vm_start is 134512640
        2
        EIP: 0073:[<0806e352] CPU: 0 Not tainted ESP: 007b:0f7ecf04 EFLAGS: 00010246
        Not tainted
        EAX: 00000000 EBX: 0fc587c0 ECX: 081fbb58 EDX: 00000000
        ESI: bf88efe0 EDI: 0f482284 EBP: 0f7ecf10 DS: 007b ES: 007b
        081f9bc0: [<08069ae8] show_regs+0xb4/0xb9
        081f9bec: [<080587ac] segv+0x225/0x23d
        081f9c8c: [<08058582] segv_handler+0x4f/0x54
        081f9cac: [<08067453] sig_handler_common_skas+0xb7/0xd4
        081f9cd4: [<08064748] sig_handler+0x34/0x44
        081f9cec: [<080648b5] handle_signal+0x4c/0x7a
        081f9d0c: [<08066227] hard_handler+0xf/0x14
        081f9d1c: [<00776420] 0x776420
        Kernel panic - not syncing: Kernel mode fault at addr 0x0, ip 0x806e352
        EIP: 0073:[<400ea0f2] CPU: 0 Not tainted ESP: 007b:bf88ef9c EFLAGS: 00000246
        Not tainted
        EAX: ffffffda EBX: 00000000 ECX: bf88efc8 EDX: 080483c8
        ESI: 00000000 EDI: bf88efe0 EBP: bf88f038 DS: 007b ES: 007b
        081f9b28: [<08069ae8] show_regs+0xb4/0xb9
        081f9b54: [<08058a1a] panic_exit+0x25/0x3f
        081f9b68: [<08084f54] notifier_call_chain+0x21/0x46
        081f9b88: [<08084fef] __atomic_notifier_call_chain+0x17/0x19
        081f9ba4: [<08085006] atomic_notifier_call_chain+0x15/0x17
        081f9bc0: [<0807039a] panic+0x52/0xd8
        081f9be0: [<080587ba] segv+0x233/0x23d
        081f9c8c: [<08058582] segv_handler+0x4f/0x54
        081f9cac: [<08067453] sig_handler_common_skas+0xb7/0xd4
        081f9cd4: [<08064748] sig_handler+0x34/0x44
        081f9cec: [<080648b5] handle_signal+0x4c/0x7a
        081f9d0c: [<08066227] hard_handler+0xf/0x14
        081f9d1c: [<00776420] 0x776420

    The first process (pid = 1) gave me the vm_start without any problems, but when I try to access the second process, the kernel crashes. Can anyone tell me what's wrong, and maybe how to fix it as well? Thanks a lot! (Sorry for the bad formatting.) edit: This is done in a Fedora 2.6 core in a UML environment.
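    The crash is consistent with a well-known pitfall (stated here as an assumption to verify, not a graded answer): kernel threads have no user address space, so p->mm is NULL for them, and the second task on the list is typically a kernel thread. A minimal guard sketch:

        for_each_process(p) {
            m = p->mm;
            if (m == NULL)          /* kernel threads have no user mm */
                continue;
            printk("%d: vm_start is %lu\n", p->pid, m->mmap->vm_start);
        }

    (Production code would take the reference via get_task_mm()/mmput() rather than reading p->mm directly.)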

    Read the article

  • Fast screen capture and lost Vsync

    - by user338759
    Hi, I'd like to generate a movie in real time with a self-made application doing fast screen captures, with part of the screen occupied by a running 3D application. I'm aware that several applications already exist for this (like FRAPS or Taksi), and even dedicated DirectShow filters (like UScreenCapture), but I really need to do this with my own external application. When correctly set up (UScreenCapture + ffdshow), capturing and compressing a full screen does not consume as much CPU as you would expect (about 15%), and does not impair the performance of the 3D app. The problem with doing a capture from an external application is that the 3D application loses its VSync and becomes a choppy, difficult to use 3D application (the 3D app is only presented on a small part of the screen, the rest being GDI, DirectX). FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the VSync and does its capture (with glReadPixels, ...) without perturbing it. Doing this does not solve my problem, since I want the full composed screen image (including 3D and the rest) AND a smooth 3D app. UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app is still out of sync. Doing a BitBlt is too slow and CPU-consuming to do real-time 30 fps acquisition (at least under Windows XP; not sure about 7). My question is whether there is a way to achieve my goal with Windows 7 and its brand new DirectX compositing engine? Windows 7 manages to show live VSynced duplicated previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app? Any other suggestion, technology? Thank you.

    Read the article

  • How to setup matlabpool for multiple processors?

    - by JohnIdol
    I just set up an Extra Large Heavy Computation EC2 instance to throw at my genetic algorithms problem, hoping to speed things up. This instance has 8 Intel Xeon processors (around 2.4GHz each) and 7GB of RAM. On my machine I have an Intel Core Duo, and MATLAB is able to work with my two cores just fine by running:

        matlabpool open 2

    On the EC2 instance, though, MATLAB is only capable of detecting 1 out of 8 processors, and if I try running:

        matlabpool open 8

    I get an error saying that the ClusterSize is 1, since there's only 1 core on my CPU. True, there is only 1 core on each CPU, but I have 8 CPUs on the given EC2 instance! So the difference between my machine and the EC2 instance is that locally I have my 2 cores on a single processor, while the EC2 instance has 8 distinct processors. My question is, how do I get MATLAB to work with those 8 processors? I found this paper, but it seems related to setting up MATLAB across multiple EC2 instances (not to multiple processors on the same instance, EC2 or not), which is not my problem. Any help appreciated!
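    One avenue that has been reported for this era of the Parallel Computing Toolbox (an unverified sketch - the property and function names are from the pre-R2010 local scheduler API, so check them against your release): raise the local scheduler's ClusterSize before opening the pool.

        % assumption: the local scheduler caps ClusterSize at the detected core count
        sched = findResource('scheduler', 'type', 'local');
        set(sched, 'ClusterSize', 8);
        matlabpool open local 8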

    Read the article

  • Odd difference between Python 2.5 and Python 2.6 on MacOS 10.6 using ctypes and libproc proc_pidinfo

    - by cemasoniv
    I'm trying to determine the current working directory of a process given its PID. The command-line utility lsof does something similar. Here's the source to the Python script:

        import ctypes
        from ctypes import util
        import sys

        PROC_PIDVNODEPATHINFO = 9

        proc = ctypes.cdll.LoadLibrary(util.find_library("libproc"))
        print(proc.proc_pidinfo)

        class vnode_info(ctypes.Structure):
            _fields_ = [('data', ctypes.c_ubyte * 152)]

        class vnode_info_path(ctypes.Structure):
            _fields_ = [('vip_vi', vnode_info), ('vip_path', ctypes.c_char * 1024)]

        class proc_vnodepathinfo(ctypes.Structure):
            _fields_ = [('pvi_cdir', vnode_info_path), ('pvi_rdir', vnode_info_path)]

        inst = proc_vnodepathinfo()
        pid = int(sys.argv[1])
        ret = proc.proc_pidinfo(
            pid, PROC_PIDVNODEPATHINFO, 0, ctypes.byref(inst), ctypes.sizeof(inst)
        )
        print(ret, inst.pvi_cdir.vip_path)

    However, even though this script behaves as expected on Python 2.6, it does not work in Python 2.5:

        host:dir user$ sudo /usr/bin/python2.6 script.py 2698
        <_FuncPtr object at 0x100419ae0>
        (2352, '/')
        host:dir user$ sudo /usr/bin/python2.5 script.py 2698
        <_FuncPtr object at 0x19fdc0>
        (0, '')

    (PID 2698 is "Activity Monitor.app".) Note the different return values. Since this program is strongly based on ctypes, I can't imagine any difference in Python itself that would cause this. The same behavior (as Python 2.5) occurs with my self-built Python 3.2. I'm not sure what versioning information I can give to help track down the weirdness - or even come up with a solution for 2.5 - but here's some stuff:

        host:dir user$ otool -L /usr/bin/python2.6
        /usr/bin/python2.6:
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ otool -L /usr/bin/python2.5
        /usr/bin/python2.5 (architecture i386):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        /usr/bin/python2.5 (architecture ppc7400):
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.0)
        host:dir user$ uname -a
        Darwin host.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

    Thanks to anyone who has a clue about what's going on here :)

    Read the article

  • How do I test OpenCL on GPU when logged in remotely on Mac?

    - by Christopher Bruns
    My OpenCL program can find the GPU device when I am logged in at the console, but not when I am logged in remotely with ssh. Further, if I run the program as root in the ssh session, the program can find the GPU. The computer is a Snow Leopard Mac with a GeForce 9400 GPU. If I run the program (see below) from the console or as root, the output is as follows (notice the "GeForce 9400" line):

        2 devices found
        Device #0 name = GeForce 9400
        Device #1 name = Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz

    but if it is just me, over ssh, there is no GeForce 9400 entry:

        1 devices found
        Device #0 name = Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz

    I would like to test my code on the GPU without having to be root. Is that possible? Simplified GPU-finding program below:

        #include <stdio.h>
        #include <OpenCL/opencl.h>

        int main(int argc, char** argv)
        {
            char dname[500];
            size_t namesize;
            cl_device_id devices[10];
            cl_uint num_devices;
            int d;

            clGetDeviceIDs(0, CL_DEVICE_TYPE_ALL, 10, devices, &num_devices);
            printf("%d devices found\n", num_devices);
            for (d = 0; d < num_devices; ++d) {
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, 500, dname, &namesize);
                printf("Device #%d name = %s\n", d, dname);
            }
            return 0;
        }

    EDIT: I found essentially the same question being asked on nvidia's forums. Unfortunately, the only answer was of the form "this is the wrong forum".

    Read the article

  • Does replace into have a where clause?

    - by Lajos Arpad
    I'm writing an application and I'm using MySQL as the DBMS. We are downloading property offers, and there were some performance issues. The old architecture looked like this:

    1. A property is updated. If the number of affected rows is not 1, the update is not considered successful; otherwise the update query solves our problem.
    2. If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them.
    3. After we deleted duplicates (if needed), if the update was not successful, an insert happens.

    This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties). Our host told me to use REPLACE INTO instead of deleting properties, and all deprecated properties should be considered DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find anywhere an example of REPLACE INTO with a WHERE clause (I'd like to replace a DEAD property with the new property, instead of deleting the old property and inserting a new one, to assure optimization). My query looked like this:

        replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue

    Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know whether I'm wrong and there is a WHERE clause for REPLACE INTO. I've found an alternative solution, which is even better than REPLACE INTO (simply using an UPDATE query), because deletes happen behind the curtains if I use REPLACE INTO, but I would like to know if I'm wrong when I say that REPLACE INTO doesn't have a WHERE clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html

    Thank you for your answers in advance, Lajos Árpád
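    For the record, the linked manual page bears this out: REPLACE accepts no WHERE clause; it decides which row to replace purely by the PRIMARY KEY or a UNIQUE index. A short sketch, assuming ID is the primary key of table_name:

        -- REPLACE keys off the primary key; behind the scenes it is DELETE + INSERT
        REPLACE INTO table_name (ID, column1) VALUES (idValue, value1);

        -- an upsert that avoids the hidden DELETE entirely
        INSERT INTO table_name (ID, column1) VALUES (idValue, value1)
            ON DUPLICATE KEY UPDATE column1 = VALUES(column1);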

    Read the article

  • Writing to a log4net FileAppender with multiple threads performance problems

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple O/S threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a LogAppender from separate O/S threads. The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock is disabled, log4net reports errors about the file being already locked by another process (thread). A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender, with any other threads simply adding their messages to a queue. That way, MinimalLock could be disabled to greatly improve logging performance. Additionally, the application does a lot of CPU-intensive work, so it would also improve performance to use a separate thread for writing to the file so the CPU never waits on the I/O to complete. So the question is, does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, that makes it possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
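    If it comes to the roll-your-own route, a minimal producer-consumer sketch is below - assuming log4net 1.2.x, which to my knowledge ships no built-in asynchronous file appender, and targeting .NET 3.5 (hence Queue + Monitor rather than BlockingCollection):

        using System.Collections.Generic;
        using System.Threading;
        using log4net;

        public class AsyncLogWriter
        {
            private readonly Queue<string> queue = new Queue<string>();
            private readonly ILog log = LogManager.GetLogger(typeof(AsyncLogWriter));

            public AsyncLogWriter()
            {
                Thread writer = new Thread(Drain) { IsBackground = true };
                writer.Start();
            }

            // called from any worker thread
            public void Enqueue(string message)
            {
                lock (queue)
                {
                    queue.Enqueue(message);
                    Monitor.Pulse(queue);
                }
            }

            // single consumer: only this thread ever touches the appender,
            // so MinimalLock can stay disabled
            private void Drain()
            {
                while (true)
                {
                    string message;
                    lock (queue)
                    {
                        while (queue.Count == 0)
                            Monitor.Wait(queue);
                        message = queue.Dequeue();
                    }
                    log.Info(message);
                }
            }
        }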

    Read the article

  • WCF Certificates without Certificate Store

    - by Kane
    My team is developing a number of WPF plug-ins for a 3rd party thick client application. The WPF plug-ins use WCF to consume web services published by a number of TIBCO services. The thick client application maintains a separate central data store and uses a proprietary API to access it. The thick client and WPF plug-ins are due to be deployed onto 10,000 workstations. Our customer wants to keep the certificate used by the thick client in the central data store so that they don't need to worry about re-issuing the certificate (the current re-issue cycle takes about 3 months) and also have the opportunity to authorise its use. The proposed architecture offers a form of shared secret / authentication between the central data store and the TIBCO services. While I don't necessarily agree with the proposed architecture, our team is not able to change it and must work with what's been provided. Basically, our client wants us to build into our WPF plug-ins a mechanism which retrieves the certificate from the central data store (which will be allowed or denied based on roles in that data store) into memory, and then uses that certificate to create the SSL connection to the TIBCO services. No use of the local machine's certificate store is allowed, and the in-memory version is to be discarded at the end of each session. So the question is: does anyone know if it is possible to pass an in-memory certificate to a WCF (.NET 3.5) service for SSL transport-level encryption? Note: I had asked a similar question (here) but have since deleted it and re-asked it with more information.
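    As far as the WCF side goes, client credentials will accept an X509Certificate2 constructed straight from bytes, so nothing forces a round-trip through the Windows certificate store. A sketch under stated assumptions (the service contract, endpoint URL, and data-store call are hypothetical):

        using System.Security.Cryptography.X509Certificates;
        using System.ServiceModel;

        byte[] rawCert = centralStore.FetchClientCertificate();   // hypothetical data-store API
        X509Certificate2 cert = new X509Certificate2(rawCert);    // add a password argument for PFX data

        BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate;

        var factory = new ChannelFactory<ITibcoService>(binding,
            new EndpointAddress("https://tibco.example.org/service"));
        factory.Credentials.ClientCertificate.Certificate = cert;

        ITibcoService client = factory.CreateChannel();

    Letting the X509Certificate2 go out of scope at the end of the session satisfies the "in memory only" constraint, since nothing is ever persisted to a store.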

    Read the article

  • AStar in a specific case in C#

    - by KiTe
    Hello. For an internship, I have to use the A* algorithm in the following case: the unit shape is a square with height and width of 1; we can travel within zones represented by rectangles, but we can't travel outside these predefined areas; and we can go from one rectangle to another through a door, represented by a segment on the corresponding square edge. Here are the 2 things I already did, neither of which satisfied my boss:

    1. I created the following classes: a Door class which contains the location of the 2 separated squares and the door's orientation (top, left, bottom, right); a Map class which contains a door list, a rectangle list representing the walkable areas, and a 2D array representing the ground's squares (for additional information, through an enumeration); and classes for the A* algorithm (Node, AStar).

    2. A MapCase class, which contains information about the case effect and doors through an enumeration (with the [Flags] attribute set, to be able to accumulate several pieces of information on each case); a Map class which only contains a 2D array of MapCase classes; and the classes for the A* algorithm (still Node and AStar).

    Although the second version is better than the first (less useless calculation, better map class architecture), my boss is still not satisfied with my mapping class architecture. The A* and Node classes are good and easily maintainable, so I don't think I have to explain them more deeply for now. So here is my question: does somebody have a good idea for implementing A* with this problem's specification (rectangles walkable, but with a square unit area, travelling through doors)? He said that a grid vision of the problem (so a 2D array) shouldn't be the correct way to solve it. I hope I've been clear while explaining my problem. Thanks, KiTe
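    If a grid is out, one alternative reading of the hint (an assumption, not your boss's confirmed intent) is to run A* over a graph whose nodes are the doors themselves: two doors are neighbours when they open onto the same rectangle, and the edge cost is the straight-line distance between them, which is always walkable inside a shared rectangle. A minimal C# sketch with hypothetical names:

        using System;
        using System.Collections.Generic;

        public class Door
        {
            public int RoomA, RoomB;   // ids of the two rectangles this door joins
            public double X, Y;        // centre of the door segment

            public bool SharesRoom(Door other)
            {
                return RoomA == other.RoomA || RoomA == other.RoomB
                    || RoomB == other.RoomA || RoomB == other.RoomB;
            }

            // admissible A* heuristic, and the exact edge cost within a shared rectangle
            public double DistanceTo(Door other)
            {
                double dx = X - other.X, dy = Y - other.Y;
                return Math.Sqrt(dx * dx + dy * dy);
            }

            public IEnumerable<Door> Neighbours(IEnumerable<Door> allDoors)
            {
                foreach (Door d in allDoors)
                    if (d != this && SharesRoom(d))
                        yield return d;
            }
        }

    The start and goal positions become temporary nodes connected to every door of the rectangle they sit in, and the grid disappears entirely.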

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stock trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to manuals or books which describe high-load database development. Specifics of the project are as follows:

    - OS: Windows NT family (Server 2008 / 7)
    - Primary platform: .NET with C#
    - Database structure: one table to hold primary items, and two or three tables with foreign keys to the first table to hold additional information
    - Database SELECT requirements: need super-fast selection by foreign keys and by a combination of a foreign key and one of the columns (presumably DATETIME)
    - Database INSERT requirements: the faster the better :)
    - If there'll be a significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system

    So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows into 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECT queries with 0-2 WHERE clauses at a similar rate.

    Read the article

  • Best practices on using URIs as parameter value in REST calls.

    - by dafmetal
    I am designing a REST API where some resources can be filtered through query parameters. In some cases, these filter values would be resources from the same REST API. This makes for longish and pretty unreadable URIs. While this is not too much of a problem in itself, because the URIs are meant to be created and manipulated programmatically, it makes for some painful debugging. I was thinking of allowing shortcuts to URIs used as filter values, and I wonder if this is allowed according to the REST architecture, and whether there are any best practices. For example: I have a resource that gets me Java classes. The following request would give me all Java classes:

        GET http://example.org/api/v1/class

    Suppose I want all subclasses of the Collection Java class; then I would use the following request:

        GET http://example.org/api/v1/class?has-supertype=http://example.org/api/v1/class/collection

    That request would return Vector, ArrayList, and all other subclasses of the Collection Java class. That URI is quite long, though. I could already shorten it by allowing hs as an alias for has-supertype. This would give me:

        GET http://example.org/api/v1/class?hs=http://example.org/api/v1/class/collection

    Another way to allow shorter URIs would be to allow aliases for URI prefixes. For example, I could define class as an alias for the URI prefix http://example.org/api/v1/class/. This would give me the following possibility:

        GET http://example.org/api/v1/class?hs=class:collection

    Another possibility would be to remove the class alias entirely and always prefix the parameter value with http://example.org/api/v1/class/, as this is the only thing I would support. This would turn the request for all subtypes of Collection into:

        GET http://example.org/api/v1/class?hs=collection

    Do these "simplifications" of the original request URI still conform to the principles of a REST architecture? Or did I just go off the deep end?

    Read the article

  • Moving a ASP.NET application to the cloud

    - by user102533
    I am new to cloud computing, so please bear with me here. I have an existing ASP.NET application with SQL Server 2008 hosted on a Virtual Private Server. Here's what it briefly does: The front end accepts user's requests and adds them to a DB table A Windows Service running in the background picks up the request, processes it and sets a flag. The Windows Services also creates a file for the user to download. User downloads file I'd like to move this web application with the service to the cloud. The architecture I envision is that I'll have 1 Web server in which I will install the front end and the windows service. I'll also have a cloud files server for file storage. The windows service should somehow create a file and transfer it to the cloud file server (I assume this is possible?) My questions: Does the architecture look like I am going in the right direction? I know Amazon has been providing cloud services for a long time. If I want to do minimal changes to my application, should I go with Amazon, Rackspace, Azure or some other provider? I understand that I would not only pay for file storage and web server but also for the bandwidth of users downloading the file and the windows servic uploading the file to the cloud server. Can I assume these costs are negligible? Should I go with VPS + Cloud Files combination to begin with? Any other thoughts/suggestions?

    Read the article

  • Triggering PHP from ActiveMQ

    - by scompt.com
    Background: Our current system involves two services (one written in Java, the other in PHP) that communicate with each other using HTTP callbacks. We would like to migrate from HTTP callbacks to a message-based architecture using ActiveMQ (or another broker, if necessary). We'll probably use STOMP to communicate between them. Eventually, the PHP service will be rewritten in Java, but that's not part of this project. Question: How can the ActiveMQ system notify PHP that a new message has been posted to the queue that the PHP system is subscribed to? In the current system, the callback inherently calls into the PHP and triggers it. This goes away with a message-based architecture. Possible solutions:

    - Cron regularly calls a PHP script that checks for new messages. (yuck)
    - A long-running PHP process that loops, sleeps, and checks for new messages. (less yuck?)
    - ActiveMQ calls a PHP script when a new message is posted. (good - how?)
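    For the second option, the loop doesn't actually have to sleep and poll: a STOMP read can block (up to the configured read timeout) until the broker pushes a frame, so the process wakes only when there is work. A sketch assuming the PECL stomp extension and a hypothetical queue name and handler:

        <?php
        // long-running consumer: readFrame() waits for the broker to deliver a message
        $stomp = new Stomp('tcp://localhost:61613');
        $stomp->subscribe('/queue/php-events');

        while (true) {
            $frame = $stomp->readFrame();
            if ($frame !== false) {
                process_message($frame->body);   // hypothetical handler
                $stomp->ack($frame);
            }
        }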

    Read the article
