Search Results

Search found 10285 results on 412 pages for 'cpu architecture'.


  • "STI", in the protected mode,CPU will restart.

    - by user299668
    Intel x86 platform. My program starts running at the 2 MB absolute address in protected mode, and everything seems OK, but when I enable interrupts with "sti", the CPU restarts. Why? Is there any necessary initialization before enabling interrupts? I have set up the IDT pointer, but it seems not to work.
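
    A reboot right after "sti" is usually a triple fault: a pending IRQ fires, the CPU finds no usable gate for it in the IDT (or the PIC is still delivering IRQs on vectors 0x08-0x0F, which collide with CPU exceptions), the resulting fault has no handler either, and the processor resets. A minimal sketch of the setup typically needed before "sti" (hypothetical names; assumes GCC-style inline assembly and an isr_stub routine that ends in iret):

        #include <stdint.h>

        struct idt_entry { uint16_t off_lo, sel; uint8_t zero, flags; uint16_t off_hi; } __attribute__((packed));
        struct idt_ptr   { uint16_t limit; uint32_t base; } __attribute__((packed));

        static struct idt_entry idt[256];

        static inline void outb(uint16_t port, uint8_t val) {
            __asm__ volatile ("outb %0, %1" :: "a"(val), "Nd"(port));
        }

        extern void isr_stub(void);  /* assembly stub that ends with iret */

        static void set_gate(int n, void (*h)(void)) {
            uint32_t a = (uint32_t)h;
            idt[n].off_lo = a & 0xFFFF;
            idt[n].sel    = 0x08;    /* kernel code segment selector */
            idt[n].zero   = 0;
            idt[n].flags  = 0x8E;    /* present, ring 0, 32-bit interrupt gate */
            idt[n].off_hi = a >> 16;
        }

        void before_sti(void) {
            /* 1. Point every vector at a real handler so none is empty. */
            for (int i = 0; i < 256; i++) set_gate(i, isr_stub);

            /* 2. Remap the PIC so IRQ0-15 arrive on vectors 0x20-0x2F
                  instead of colliding with CPU exceptions 0x00-0x0F. */
            outb(0x20, 0x11); outb(0xA0, 0x11);  /* begin init sequence  */
            outb(0x21, 0x20); outb(0xA1, 0x28);  /* vector offsets       */
            outb(0x21, 0x04); outb(0xA1, 0x02);  /* master/slave wiring  */
            outb(0x21, 0x01); outb(0xA1, 0x01);  /* 8086 mode            */

            /* 3. Load the IDT register, then enable interrupts. */
            struct idt_ptr p = { sizeof(idt) - 1, (uint32_t)idt };
            __asm__ volatile ("lidt %0" :: "m"(p));
            __asm__ volatile ("sti");
        }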

    Read the article

  • Per-process CPU usage on Win95 / Win98 / WinME

    - by Hugh Allen
    How can you programmatically measure per-process (or better, per-thread) CPU usage under Windows 95, Windows 98 and Windows ME? If it requires the DDK, where can you obtain that? Please note the Win9x requirement; it's easy on NT. EDIT: I tried installing the Win95/98 version of WMI, but Win32_Process.KernelModeTime and Win32_Process.UserModeTime return Null (as do most Win32_Process properties under Win9x).

    Read the article

  • How to reduce CPU and RAM usage?

    - by Hellboy
    I am going to "read" (big video) files from a server (shared environment) to clients (web browsers) via PHP, and would like to know first if there is a way to reduce CPU and RAM usage somehow (as both are limited). Thanks.
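
    Reading the file in fixed-size chunks keeps PHP's memory use flat no matter how large the video is (readfile() achieves the same in one call); a minimal sketch, with a hypothetical path and content type:

        <?php
        // Stream a large file in 8 KB chunks so memory stays flat
        // regardless of file size (file_get_contents would load it all).
        $path = '/var/media/video.mp4';   // hypothetical path

        header('Content-Type: video/mp4');
        header('Content-Length: ' . filesize($path));

        $fp = fopen($path, 'rb');
        while (!feof($fp)) {
            echo fread($fp, 8192);   // read and emit one chunk
            flush();                 // push it to the client immediately
        }
        fclose($fp);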

    Read the article

  • MSSQL Server high CPU and I/O activity database tuning

    - by zapping
    Our application has been running very slowly recently. Debugging and tracing show that the process consumes a lot of CPU cycles and that SQL Server shows high I/O activity. Can you please give guidance on how it can be optimised? The application is now about a year old and the database files are not very big. The database is set to auto-shrink. It runs on Windows 2003 with SQL Server 2005, and the application is a web application coded in C# (VS2005).
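
    Auto-shrink is a common culprit here: repeated shrink/grow cycles fragment the data files and produce exactly this pattern of high I/O. A hedged first step, plus a SQL Server 2005 DMV query that lists the most CPU-expensive cached statements (the database name is hypothetical):

        -- Auto-shrink causes repeated grow/shrink cycles and heavy fragmentation;
        -- turning it off is usually the first fix (database name is hypothetical).
        ALTER DATABASE MyAppDb SET AUTO_SHRINK OFF;

        -- Find the most CPU-expensive cached queries (SQL Server 2005+ DMVs).
        SELECT TOP 10
               qs.total_worker_time / qs.execution_count AS avg_cpu_microsec,
               qs.execution_count,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                         (CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(st.text)
                               ELSE qs.statement_end_offset END
                          - qs.statement_start_offset) / 2 + 1) AS statement_text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_worker_time DESC;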

    Read the article

  • How to create a CPU spike with a bash command

    - by User1
    I want to create a near-100% load on a Linux machine. It's a quad-core system and I want all cores going full speed. Ideally, the CPU load would last a designated amount of time and then stop. I'm hoping there's some trick in bash. I'm thinking of some sort of infinite loop.
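
    One busy shell loop per core does it; a minimal sketch for a quad-core box that loads all four cores for 60 seconds and then cleans up:

        # Spin one busy loop per core (4 here), let them run for 60 seconds,
        # then kill all background jobs. Adjust the count and duration as needed.
        for i in 1 2 3 4; do
            while :; do :; done &
        done
        sleep 60
        kill $(jobs -p)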

    Read the article

  • CPU ordering in Linux (with hyper threading)

    - by Jason
    I'm curious what the CPU ordering is in Linux. Say I bind a thread to cpu0 and another to cpu1 on a hyperthreaded system; are they both going to be on the same physical core? Given a Core i7 920 with 4 cores and hyperthreading, the output of /proc/cpuinfo has me thinking that cpu0 and cpu1 are different physical cores, and that cpu0 and cpu4 are on the same physical core. Thanks.
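
    The mapping is exposed directly by the kernel, so there is no need to guess; on many Nehalem-era systems cpu0 and cpu4 do indeed report each other as hyperthread siblings. A quick way to check:

        # Each logical CPU reports its physical core and hyperthread sibling
        # via sysfs; "physical id"/"core id" in /proc/cpuinfo carry the same data.
        grep -E 'processor|physical id|core id' /proc/cpuinfo

        # Which logical CPUs share cpu0's physical core, and that core's id:
        cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
        cat /sys/devices/system/cpu/cpu0/topology/core_id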

    Read the article

  • 75 to 100% CPU usage in WPF?

    - by Khan
    Hi, whenever the application loads, and whenever any other user control loads inside it, CPU usage touches 80-100% during loading and rendering. How should I resolve this? Thanks and regards, Ershad

    Read the article

  • Do the acos and atan functions in the STL use lots of CPU cycles?

    - by jan
    Hi all, I wanted to calculate the angle between two vectors, but I have seen that inverse trig operations such as acos and atan use lots of CPU cycles. Is there a way to get this calculation done without using these functions? Also, do these really hurt you during optimization?
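
    If the angle itself is needed, one inverse-trig call is hard to avoid, but atan2 of the cross and dot products is the numerically robust form; and if the angle is only being compared against a threshold, the inverse-trig call can be skipped entirely by comparing dot products. A sketch for 2D vectors (the Vec2 type is an assumption):

        #include <cmath>

        struct Vec2 { double x, y; };

        // Angle between a and b in radians. atan2(cross, dot) avoids the
        // domain problems of acos(dot/(|a||b|)) when rounding pushes the
        // cosine slightly outside [-1, 1].
        double angleBetween(const Vec2& a, const Vec2& b) {
            double dot   = a.x * b.x + a.y * b.y;
            double cross = a.x * b.y - a.y * b.x;
            return std::atan2(cross, dot);
        }

        // If you only need "is the angle under some limit below 90 degrees?",
        // compare cosines and call no inverse-trig function at all:
        // theta < limit  <=>  dot > 0 and dot^2 > cos(limit)^2 * |a|^2 |b|^2.
        bool angleLessThan(const Vec2& a, const Vec2& b, double cosLimit) {
            double dot = a.x * b.x + a.y * b.y;
            double n2  = (a.x * a.x + a.y * a.y) * (b.x * b.x + b.y * b.y);
            return dot > 0 && dot * dot > cosLimit * cosLimit * n2;
        }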

    Read the article

  • High-accuracy CPU timers

    - by John Robertson
    An expert in highly optimized code once told me that an important part of his strategy was the availability of extremely high-performance timers on the CPU. Does anyone know what those are, and how one can access them to test various code optimizations? While I am interested regardless, I also wanted to ask whether it is possible to access them from something higher-level than assembly (or with only a little assembly) via Visual Studio C++?
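
    On x86 the timer in question is usually the time-stamp counter (the RDTSC instruction), and it is reachable from Visual C++ without hand-written assembly via the __rdtsc() intrinsic; QueryPerformanceCounter is the OS-level wrapper with stable units. A small sketch of both (assumes MSVC on Windows):

        #include <windows.h>
        #include <intrin.h>   // __rdtsc intrinsic (MSVC)
        #include <cstdio>

        int main() {
            // Raw cycle counter: finest resolution, but affected by frequency
            // scaling and core migration on older CPUs.
            unsigned __int64 t0 = __rdtsc();
            volatile double x = 0;
            for (int i = 0; i < 1000000; ++i) x += i * 0.5;
            unsigned __int64 t1 = __rdtsc();
            printf("cycles: %llu\n", t1 - t0);

            // OS-level high-resolution timer: stable units, the usual choice.
            LARGE_INTEGER freq, a, b;
            QueryPerformanceFrequency(&freq);
            QueryPerformanceCounter(&a);
            for (int i = 0; i < 1000000; ++i) x += i * 0.5;
            QueryPerformanceCounter(&b);
            printf("seconds: %f\n", (double)(b.QuadPart - a.QuadPart) / freq.QuadPart);
            return 0;
        }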

    Read the article

  • Manually Increasing the Amount of CPU a Java Application Uses

    - by SkylineAddict
    I've just made a program with Eclipse that takes a really long time to execute. It's taking even longer because it loads my CPU to only 25% (I'm assuming that is because I'm using a quad-core and the program only uses one core). Is there any way to make the program use all 4 cores to max them out? Java is supposed to be natively multi-threaded, so I don't understand why it would only use 25%.
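
    The JVM will happily use all cores, but only if the program itself creates threads; a single-threaded main() can never exceed one core (25% on a quad-core). A minimal sketch that splits independent work across every available core (doWork is a hypothetical stand-in):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class UseAllCores {
            public static void main(String[] args) throws InterruptedException {
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);

                for (int i = 0; i < cores; i++) {
                    final int slice = i;
                    pool.submit(() -> doWork(slice, cores)); // one task per core
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }

            // Hypothetical stand-in: process the i-th of n independent slices.
            static void doWork(int slice, int slices) {
                /* ... CPU-bound work on this slice ... */
            }
        }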

    Read the article

  • Windows Game Loop 50% CPU on Dual Core

    - by Dave18
    The game loop alone is using 50% of the CPU (one full core on this dual-core machine), and I haven't done any rendering work yet. What am I doing wrong here?

        while(true)
        {
            if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if(msg.message == WM_QUIT || msg.message == WM_CLOSE || msg.message == WM_DESTROY)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else
            {
                //Run game code, break out of loop when the game is over
            }
        }
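
    PeekMessage returns immediately whether or not a message is waiting, so when the queue is empty this loop spins flat out and pins one core (50% on a dual-core). A common fix is to run game work when there is some, and otherwise block until the next message; a sketch (GameHasWorkToDo and RunOneFrame are hypothetical helpers):

        #include <windows.h>

        bool GameHasWorkToDo();   // hypothetical: any simulation pending?
        void RunOneFrame();       // hypothetical: advance the game once

        int RunMessageLoop()
        {
            MSG msg;
            bool running = true;
            while (running)
            {
                // Drain everything that is queued right now.
                while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
                {
                    if (msg.message == WM_QUIT)
                        running = false;
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
                if (GameHasWorkToDo())
                    RunOneFrame();    // busy only while there is real work
                else
                    WaitMessage();    // otherwise sleep until the next message
            }
            return (int)msg.wParam;
        }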

    Read the article

  • Named pipe is using 100% CPU

    - by willwill
    I'm starting the script with ./file.py < pipe >> logfile, and the script is:

        while True:
            try:
                I = raw_input().strip().split()
            except EOFError:
                continue
            doSomething()

    How could I better handle the named pipe? This script always runs at 100% CPU, and it needs to be real-time, so I cannot use time.sleep.
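
    Once the writer closes the pipe, stdin stays at end-of-file forever, so "except EOFError: continue" becomes a pure busy loop. If the script opens the FIFO itself, open() blocks in the kernel until a writer appears, which stays real-time without sleeping; a sketch (the pipe name is from the command line above; doSomething stands in for the question's handler):

        PIPE_PATH = 'pipe'  # the named pipe from the command line

        def doSomething():
            pass            # stand-in for the original handler

        while True:
            with open(PIPE_PATH) as fifo:    # blocks until a writer opens the pipe
                for line in fifo:            # blocks while the writer keeps it open
                    I = line.strip().split()
                    doSomething()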

    Read the article

  • Does anybody have experience programming PCMEF architectures?

    - by Erkki
    PCMEF is an architectural style presented in the book Practical Software Engineering by Maciaszek and Liong. The layers are: P: Presentation, C: Controller, M: Mediator, E: Entity, F: Foundation. It is a kind of enhancement of the MVC architecture. I recommend it for interactive, data- and communication-oriented purposes. I have programmed with it using Visual Prolog. Foundation in my applications is the data model (domains) for the application. PCMEF is like a simulated computer: Presentation is the display, Controller the user interface and event handling, Mediator the internal logic and data highway, Entity the database or external interfaces, and Foundation defines the knowledge. This is a really nice small architecture. Does anyone else have experience with it?

    Read the article

  • Python: How do you find the CPU consumption of a piece of code?

    - by Yugal Jindle
    Background: I have a django application. It works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: profiling the application gives me the time taken by functions, and this time increases under high load. But the time consumed may be due to complex calculation or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and need to add more CPU power, or I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec [100 users]. I get an average time of 36 seconds on 100 requests vs 1.25 sec on 1 request.

    More info on the configuration: Nginx + uWSGI with 4 workers; no database used, the responses come from a REST API; on the 1st hit the response of the REST API gets cached, so it doesn't make a difference; using ujson for JSON parsing.

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
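
    To separate "burning CPU" from "waiting", measure process CPU time alongside wall time, and hand the profiler a CPU-time clock so its per-function numbers count computation rather than waiting. A minimal sketch (Python 3; work() is a hypothetical stand-in for the view being measured):

        import cProfile
        import pstats
        import time

        def work():
            # hypothetical stand-in for the view / calculation being measured
            return sum(i * i for i in range(10 ** 6))

        # Wall-clock vs CPU time: a large gap means the code is mostly
        # waiting (I/O, locks), not computing.
        w0, c0 = time.time(), time.process_time()
        work()
        print('wall: %.3fs  cpu: %.3fs' % (time.time() - w0,
                                           time.process_time() - c0))

        # Per-function CPU time: give cProfile a CPU-time clock instead of
        # its default wall clock (time.process_time is Python 3.3+).
        prof = cProfile.Profile(time.process_time)
        prof.enable()
        work()
        prof.disable()
        pstats.Stats(prof).sort_stats('cumulative').print_stats(10)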

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes the memory grant across the tasks executing parts of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort or Hash Match operations such as Hash Join, Hash Aggregation or Hash Union.

    In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot-selling items?

    One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results:

        MAXDOP 1:  CPU time = 1185 ms, elapsed time = 1188 ms
        MAXDOP 4:  CPU time = 1981 ms, elapsed time = 1568 ms
        MAXDOP 8:  CPU time = 1918 ms, elapsed time = 1619 ms
        MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
        MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
        MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
        MAXDOP 0:  CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)

    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than that of the serial query. Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    Well, you get the point; let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table so that 49 out of 50 employees are located in Zip code 2001:

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with all possible Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip from Employees
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --First serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:
        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The Sort properties show that the rows are unevenly distributed over the 4 threads, and there are Sort Warnings in SQL Server Profiler.

    Intermediate summary: the reason for the higher duration with the parallel plan was a sort spill, due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The Sort properties show that the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate summary: when employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel nor the serial sort spilled to tempdb. This shows that uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that a parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be faster, since it can use Bitmap Filtering.

    Let's again update the Employees table so that 49 out of 50 employees are located in Zip code 2001:
        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with a limited set of Zip codes:

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip
              from Employees where Zip between 1800 and 2001
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate summary: the reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was a sort spill, again due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes:

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 1)
        go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0:

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip
              from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
              option (maxdop 0)
        go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.
    Here you see that the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: one has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time, and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011: click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Mac 10.6 Universal Binary scipy: cephes/specfun "_aswfa_" symbol not found

    - by Markus
    Hi folks, I can't get scipy to function in 32-bit mode when compiled as an i386/x86_64 universal binary and executed on my 64-bit 10.6.2 MacPro1,1.

    My python setup: with the help of this answer, I built a 32/64-bit Intel universal binary of python 2.6.4 with the intention of using the arch command to select between the architectures. (I managed to make some universal binaries of a few libraries I wanted using lipo.) That all works. I then installed scipy according to the instructions in hyperjeff's article, only with a more up-to-date numpy (1.4.0), and skipping the bit about moving numpy aside briefly during the installation of scipy. Now everything except scipy seems to be working as far as I can tell, and I can indeed select between 32- and 64-bit mode using arch -i386 python and arch -x86_64 python.

    The error: scipy complains in 32-bit mode:

        $ arch -x86_64 python -c "import scipy.interpolate; print 'success'"
        success
        $ arch -i386 python -c "import scipy.interpolate; print 'success'"
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/__init__.py", line 7, in <module>
            from interpolate import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/interpolate/interpolate.py", line 13, in <module>
            import scipy.special as spec
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/__init__.py", line 8, in <module>
            from basic import *
          File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/basic.py", line 8, in <module>
            from _cephes import *
        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so, 2): Symbol not found: _aswfa_
          Referenced from: /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so
          Expected in: flat namespace in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so

    Attempt at tracking down the problem: it looks like scipy.interpolate imports something called _cephes, which looks for a symbol called _aswfa_ but can't find it in 32-bit mode. Browsing through scipy's source, I find an ASWFA subroutine in specfun.f. The only scipy product file with a similar name is specfun.so, but both that and _cephes.so appear to be universal binaries:

        $ cd /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/
        $ file _cephes.so specfun.so
        _cephes.so: Mach-O universal binary with 2 architectures
        _cephes.so (for architecture i386):    Mach-O bundle i386
        _cephes.so (for architecture x86_64):  Mach-O 64-bit bundle x86_64
        specfun.so: Mach-O universal binary with 2 architectures
        specfun.so (for architecture i386):    Mach-O bundle i386
        specfun.so (for architecture x86_64):  Mach-O 64-bit bundle x86_64

    Ho hum. I'm stuck. Things I may try but haven't figured out how yet include compiling specfun.so myself manually, somehow. I would imagine that scipy isn't broken for all 32-bit machines, so I guess something is wrong with the way I've installed it, but I can't figure out what. I don't really expect a full answer given my fairly unique (?) setup, but if anyone has any clues that might point me in the right direction, they'd be greatly appreciated.

    (edit) More details to address questions: I'm using gfortran (GNU Fortran from GCC 4.2.1, Apple Inc. build 5646). Python 2.6.4 was installed more-or-less like so:

        cd /tmp
        curl -O http://www.python.org/ftp/python/2.6.4/Python-2.6.4.tar.bz2
        tar xf Python-2.6.4.tar.bz2
        cd Python-2.6.4
        # Now replace buggy pythonw.c file with one that supports the "arch" command:
        curl http://bugs.python.org/file14949/pythonw.c | sed s/2.7/2.6/ > Mac/Tools/pythonw.c
        ./configure --enable-framework=/Library/Frameworks --enable-universalsdk=/ --with-universal-archs=intel
        make -j4
        sudo make frameworkinstall

    Scipy 0.7.1 was installed pretty much as described here, but it boils down to a simple sudo python setup.py install. It would indeed appear that the symbol is undefined in the i386 architecture if you look at the _cephes library with nm, as suggested by David Cournapeau:

        $ nm -arch x86_64 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
        00000000000d4950 T _aswfa_
        000000000011e4b0 d _oblate_aswfa_data
        000000000011e510 d _oblate_aswfa_nocv_data
        (snip)
        $ nm -arch i386 /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scipy/special/_cephes.so | grep _aswfa_
                 U _aswfa_
        0002e96c d _oblate_aswfa_data
        0002e99c d _oblate_aswfa_nocv_data
        (snip)

    However, I can't yet explain its absence.

    Read the article

  • Drools flow architecture, Drools flow events with AND join nodes

    - by Shoukry K
    I have been evaluating a number of frameworks, including jBPM and Drools Flow, for my application requirements. Lots of the opinions seem to be inclined towards Drools Flow, as it's more flexible, knowledge-oriented, easier to integrate with business rules, etc. The application is a sort of email campaign manager, where different customers can sign in, prepare (design) and launch email campaigns. The application should be able to do the following:

    1. Receive a list of email addresses, send emails to each of these addresses starting from a certain date and during a certain time interval of the day, do some custom actions, and then wait for reply emails.

    2. If a reply email is received, then depending on the response text of the email and on the time the email was received, certain actions need to happen, web service calls need to take place, and error handling for these calls is needed.

    3. The application will manage and run many different campaigns (different customers and different flows for each customer) at any point of time.

    The first question is: is Drools Flow the way to go about this? My main concerns are scalability, suspending and resuming flows, long waits, and flow management. As you can see from the requirements:

    There is a scheduling part: certain flows need to run at a certain point in time, and they need to get suspended and then resumed. For example, start sending emails on Dec 1st 2010 and send emails only in the time interval between 08:00 and 17:00 GMT. By then it might be that all subscribers have been sent emails, but it might not be the case; the process needs to resume on Dec 2nd and send a second batch. However, certain users have already received emails, and they should be able to continue at different stages of the flow.

    There are long wait states: days or even weeks. I need to persist, suspend/resume and terminate flows (manage flows).

    External events: this is where I got stuck first. I tried to put together a simple flow (see the attached screenshot: http://img46.imageshack.us/img46/9620/workflowwithevents.png). There is a start node connected to an action node, connected to a join. An event node is connected to a second action node, which is connected to the join. The join is an AND join; after the join there is an action and then the end node. Here is the sample code I am using to launch the flow:

        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("campaign.rf",
                CampaignsDroolsPoc.class), ResourceType.DRF);
        if (builder.hasErrors()) {
            KnowledgeBuilderErrors errors = builder.getErrors();
            Iterator<KnowledgeBuilderError> iterator = errors.iterator();
            while (iterator.hasNext()) {
                System.out.println(iterator.next().toString());
            }
        }
        KnowledgeBase base = KnowledgeBaseFactory.newKnowledgeBase();
        base.addKnowledgePackages(builder.getKnowledgePackages());
        final StatefulKnowledgeSession ksession = base.newStatefulKnowledgeSession();
        // KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession);
        ksession.getWorkItemManager().registerWorkItemHandler("Log",
                new SendSMSWorkItemHandler());
        ProcessInstance startProcess = ksession.startProcess("flow");
        System.out.println("Signaling event");
        startProcess.signalEvent("ev1", "ev1");
        System.out.println("Signaled");
        ksession.fireUntilHalt();

    I am noticing that the event gets triggered and the action node connected to the event gets triggered, however things seem to get stuck at the join: the flow does not continue past the AND join, and the action following the join does not get triggered. I also went through the Drools Flow documentation and all the example code, but I didn't find anything there. In addition, any hints about the way to go about architecting the solution and implementing it would be great.

    Read the article

  • Visual SourceSafe: Architecture/Management

    - by Nic
    I was looking for information on how other people with larger teams manage SourceSafe currently. I was looking for recommendations and advice for a new project I was setting up that will allow for a few key things:

      • Scalability
      • Managing multiple overlapping releases
      • Geared more around .NET, but allowing for legacy applications (VB, ASP and VBS)

    I am really looking for any lessons learned from other teams. I come from a StarTeam background, where we used view labels and release labels to manage multiple overlapping projects: view labels geared more towards compiled code and SQL, and revision labels used for VB/ASP projects. Thank you for any advice, and for sharing your experience and frustrations with other companies you might have worked with in the past.

    Read the article

  • How to represent "options" for my plugin architecture (C# .NET WinForms)

    - by Joshua
    Okay, basically here's where I'm at. I have a list of PropertyDescriptor objects. These describe the custom "Options" fields on my plugins, e.g.:

        public class MyPlugin : PluginAbstract, IPlugin
        {
            [PluginOption("This controls the color of blah blah blah")]
            [DefaultValue(Color.Red)]
            public Color TheColor { get; set; }

            [PluginOption("The number of blah blah blahs")]
            [DefaultValue(10)]
            public int BlahBlahBlahs { get; set; }
        }

    So I did all the hard parts: I have all the descriptions, default values, names and types of these custom "plugin options". MY QUESTION IS: when a user loads a plugin, how should I represent these options for them to configure? On the back end I'll be using XML for the config, so that's not what I'm asking. I'm asking on the front end: what kind of WinForms control should I use to let users configure the options of a plugin, when there will be an unknown number of options and different types used, etc.?
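
    For an unknown, typed set of options, the stock WinForms PropertyGrid is the usual answer: it reflects over whatever object you hand it (or over custom PropertyDescriptors exposed through a TypeDescriptor) and picks type-appropriate editors, such as a color picker for Color and a numeric field for int. A minimal sketch, assuming a plugin instance is at hand; note the PluginOption text would need to surface as a standard DescriptionAttribute (or via a custom PropertyDescriptor) to appear in the grid's help pane:

        using System.Windows.Forms;

        // Minimal sketch: PropertyGrid builds editors (color picker, numeric
        // field, ...) from the public properties of whatever object you hand
        // it, honoring DefaultValue and standard Description attributes.
        public class PluginOptionsForm : Form
        {
            public PluginOptionsForm(object pluginInstance)
            {
                var grid = new PropertyGrid
                {
                    Dock = DockStyle.Fill,
                    SelectedObject = pluginInstance   // e.g. a MyPlugin instance
                };
                Controls.Add(grid);
                Text = "Plugin Options";
            }
        }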

    Read the article

  • PowerBuilder Plug-in Architecture

    - by Adam Hawkes
    PowerBuilder seems to have some support for plug-ins since version 10. However, I can't find any documentation or tutorials about this. The only hints I can manage are from examining the COM objects inside the existing DLLs. That doesn't help much, as I'm a novice at COM development. A very cursory example of how to do something would be awesome.

    Read the article
