Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.


  • How can CopyOnWriteArrayList be thread-safe?

    - by Shooshpanchick
    I've taken a look at OpenJDK's source for CopyOnWriteArrayList, and it seems that all write operations are protected by the same lock while read operations are not protected at all. As I understand it, under the JMM all accesses to a variable (both reads and writes) should be protected by a lock, or reordering effects may occur. For example, the set(int, E) method contains these lines (under lock):

        /* 1 */ int len = elements.length;
        /* 2 */ Object[] newElements = Arrays.copyOf(elements, len);
        /* 3 */ newElements[index] = element;
        /* 4 */ setArray(newElements);

    The get(int) method, on the other hand, only does return get(getArray(), index);. In my understanding of the JMM, this means that get may observe the array in an inconsistent state if statements 1-4 are reordered like 1-2(new)-4-2(copyOf)-3. Do I understand the JMM incorrectly, or is there some other explanation of why CopyOnWriteArrayList is thread-safe?
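
    The detail the question skips over is that the array field behind getArray()/setArray() is declared volatile in the OpenJDK implementation, so the write in step 4 acts as a release and the read in get() as an acquire; steps 1-3 cannot be reordered past step 4. Below is a minimal sketch of that pattern - illustrative only, not the actual OpenJDK code:

        // Illustrative copy-on-write container; the volatile field supplies the
        // happens-before edge that makes unlocked reads safe.
        import java.util.Arrays;

        class CopyOnWriteBox<E> {
            private volatile Object[] array = new Object[1];

            Object[] getArray()       { return array; }
            void setArray(Object[] a) { array = a; }    // volatile write: publishes the new array

            @SuppressWarnings("unchecked")
            public E get(int index) {
                return (E) getArray()[index];            // volatile read: sees a fully built copy
            }

            public synchronized void set(int index, E element) {
                Object[] elements = getArray();
                Object[] newElements = Arrays.copyOf(elements, elements.length);
                newElements[index] = element;            // happens-before the volatile write below
                setArray(newElements);
            }
        }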

  • How do I do import hooks in IronPython/Silverlight?

    - by ahlatimer
    I'm extending TryPython to (along with various other things) allow users to save a file and subsequently import that file. TryPython overloads the built-in file operations, so I need to know which parts of import need to be hooked into in order for import to use the overloaded file operations. Really, a basic overview of IronPython's import machinery when used in Silverlight would be extremely helpful. I don't need a complete working solution (although I won't stop you from writing one! :). I'm a Python newbie, and I really have no idea where to even begin. Thanks!

  • Phantom activity on MySQL

    - by LoveMeSomeCode
    This is probably just my total lack of MySQL expertise, but is it typical to see lots of phantom activity on a MySQL instance via phpMyAdmin? I have a shared hosting plan through Lithium, and when I log in through the phpMyAdmin console and click on the 'Status' tab, it's showing crazy high numbers for queries. Within an hour of activating my account I had 1 million queries. At first I thought this was them setting things up, but the number is climbing constantly, averaging 170/second. I've got a support ticket in with Lithium, but I thought I'd ask here if this were a MySQL/shared host thing, because I had the same thing happen with a shared hosting plan through Joyent.

  • Use multiple inheritance to discriminate usage roles?

    - by Arne
    Hi fellows, it's my flight simulation application again. I am leaving the mere prototyping phase now and starting to flesh out the software design. At least I'm trying to. Each aircraft in the simulation has a flight plan associated with it; the exact nature of the flight plan is of no interest for this question. Suffice it to say that the operator may edit the flight plan while the simulation is running. Most of the time the aircraft model only needs read access to the flight plan object, which at first thought calls for simply passing a const reference. But occasionally the aircraft will need to call AdvanceActiveWayPoint() to indicate that a waypoint has been reached. This will affect the iterator returned by ActiveWayPoint(). This implies that the aircraft model indeed needs a non-const reference, which in turn would also expose functions like AppendWayPoint() to the aircraft model. I would like to avoid this because I want to enforce the usage rule described above at compile time. Note that class WayPointIter is equivalent to an STL const iterator; that is, the waypoint cannot be mutated through the iterator.

        class FlightPlan {
        public:
            void AppendWayPoint(const WayPointIter& at, WayPoint new_wp);
            void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp);
            void RemoveWayPoint(WayPointIter at);
            // (...)

            WayPointIter First() const;
            WayPointIter Last() const;
            WayPointIter Active() const;
            void AdvanceActiveWayPoint() const;
            // (...)
        };

    My idea to overcome the issue is this: define an abstract interface class for each usage role and inherit FlightPlan from both. Each user then only gets passed a reference to the appropriate usage role.

        class IFlightPlanActiveWayPoint {
        public:
            virtual WayPointIter Active() const = 0;
            virtual void AdvanceActiveWayPoint() const = 0;
        };

        class IFlightPlanEditable {
        public:
            virtual void AppendWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
            virtual void ReplaceWayPoint(const WayPointIter& at, WayPoint new_wp) = 0;
            virtual void RemoveWayPoint(WayPointIter at) = 0;
            // (...)
        };

    Thus the declaration of FlightPlan would only need to be changed to:

        class FlightPlan : public IFlightPlanActiveWayPoint, public IFlightPlanEditable {
            // (...)
        };

    What do you think? Are there any caveats I might be missing? Is this design clear, or should I come up with something different for the sake of clarity? Alternatively I could also define a special ActiveWayPoint class containing the function AdvanceActiveWayPoint(), but I feel that this might be unnecessary. Thanks in advance!

  • Android: How to set the contents of an EditText from a Button click?

    - by primal
    Hi, I am a rookie to Android. I am thinking of implementing a simple calculator in Android to get a hold of the basics. I want to display a keypad with numbers and mathematical operations, and when the user presses a key the corresponding number should be displayed in the EditText. I tried using getText() and updating the contents of the EditText, but it shows just the contents of the pressed button. Also, how do I read the contents of the button so as to do the mathematical operations in code? Any help would be much appreciated. Regards, Primal
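
    A minimal sketch of one common approach (the widget IDs and field names below are made up): give every digit button the same listener and append the button's own label to the EditText rather than replacing its contents, then parse the EditText back into a number when an operator key is pressed.

        // Sketch only - assumes a layout with an EditText (R.id.display)
        // and a digit button (R.id.button7); the IDs are hypothetical.
        final EditText display = (EditText) findViewById(R.id.display);
        Button seven = (Button) findViewById(R.id.button7);

        seven.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // append() keeps what is already shown instead of overwriting it
                display.append(((Button) v).getText());
            }
        });

        // When an operator key is pressed, read the typed number back out:
        double operand = Double.parseDouble(display.getText().toString());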

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above

            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test { }   # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated); I'm not sure whether that is any better, idiomatically.

  • Adding more OR searches with CONTAINS Brings Query to a Crawl

    - by scolja
    I have a simple query that relies on two full-text indexed tables, but it runs extremely slowly when the CONTAINS is combined with any additional OR search. As seen in the execution plan, the two full-text searches crush the performance. If I query with just one of the CONTAINS clauses, or neither, the query is sub-second, but the moment you add OR into the mix the query becomes ill-fated. The two tables are nothing special; they're not overly wide (42 cols in one, 21 in the other; maybe 10 cols are FT-indexed in each), nor do they contain very many records (36k rows in the bigger of the two). I was able to fix the performance by splitting the two CONTAINS searches into their own SELECT queries and then UNIONing the three together. Is this UNION workaround my only hope? Thanks.

        SELECT a.CollectionID
        FROM collections a
        INNER JOIN determinations b ON a.CollectionID = b.CollectionID
        WHERE a.CollrTeam_Text LIKE '%fa%'
           OR CONTAINS(a.*, '"*fa*"')
           OR CONTAINS(b.*, '"*fa*"')

    Execution plan (guess I need more reputation before I can post the image):

  • How to hide and disable cursor globally?

    - by trudger
    I have two questions:

    1. How do I hide the cursor for all programs? I tried to hide the cursor using ShowCursor, but it only works in my program; the cursor still appears when it is moved outside of my program.

    2. How do I disable mouse operations for all programs? I use SetWindowsHookEx to hook the mouse and prevent other programs from processing mouse operations. I can hook the clicks, but the problem is that I can't hook the "move": when I move the mouse over a menu or the system buttons (minimize/restore/close) they are highlighted, which means they can still "see" the mouse.

    Can anyone help me, please?

  • How can I write faster JavaScript?

    - by a paid nerd
    I'm writing an HTML5 canvas visualization. According to the Chrome Developer Tools profiler, 90% of the work is being done in (program), which I assume is the V8 interpreter at work calling functions and switching contexts and whatnot. Other than logic optimizations (e.g., only redrawing parts of the visualization that have changed), what can I do to optimize the CPU usage of my JavaScript? I'm willing to sacrifice some amount of readability and extensibility for performance. Is there a big list I'm missing because my Google skills suck? I have some ideas but I'm not sure if they're worth it:

    - Limit function calls
    - When possible, use arrays instead of objects and properties
    - Use variables for math operation results as much as possible
    - Cache common math operations such as Math.PI / 180
    - Use sin and cos approximation functions instead of Math.sin() and Math.cos()
    - Reuse objects when passing around data instead of creating new ones
    - Replace Math.abs() with ~~
    - Study jsperf.com until my eyes bleed
    - Use a preprocessor on my JavaScript to do some of the above operations

  • How to configure a specific service operation to be accessible through a different endpoint

    - by pradeeptp
    I have a single service contract that has two service operations; call them X1 and X2. How do I configure X1 to be accessible through HTTP and X2 to be accessible through TCP/IP? If I configure the service contract with a TCP/IP endpoint then both X1 and X2 will be accessible through TCP/IP, and the same is the case if I configure the contract with an HTTP endpoint. I could have two different service contracts to achieve what I want, but I want to know if I could achieve the same through a single service contract.

  • SharePoint as a replacement for N-Tier Applications and OLTP Databases

    - by user264892
    All, at my current company we are looking to replace all ASP.NET applications and OLTP databases with SharePoint 2007. Our applications and databases deal with 10,000+ rows, and we have 5,000+ clients actively using the system. Our implementation of SharePoint would replace all n-tier applications. Does anyone have any experience implementing this? My current viewpoint is that SharePoint is not built for, or adequate enough to handle, this type of application. Can it really replace applications with hundreds of pages and hundreds of tables? Support data warehousing operations? Support high-performance OLTP operations? Provide a robust development environment? Any and all input is greatly appreciated. Thanks, S.O. Community.

  • Dump Hibernate activity to an SQL script file

    - by zeven
    Hi, I'm trying to log Hibernate activity (only DML operations) to an SQL script file. My goal is to have a way to reconstruct the database from a given starting point to the current state by executing the generated script. I can get the SQL queries from the log4j logs, but they contain more information than the raw SQL statements, and I would need to parse them and extract only the useful parts. So I'm looking for a programmatic way, maybe by listening to the persist/merge/delete operations and accessing the Hibernate-generated SQL statements. I don't like to reinvent the wheel, so if anybody knows a way of doing this I would appreciate it very much. Thanks in advance.
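
    A hedged sketch of the "listening" approach mentioned above, assuming a Hibernate 3.x style API: an EmptyInterceptor registered on the Session sees the entity-level insert/update/delete events. Note that this exposes the entities and their state, not the exact SQL Hibernate generates, so the script lines would have to be rendered from that state; the rendering is omitted and the class name is made up.

        // Sketch only - not a drop-in solution; building real INSERT/UPDATE/DELETE
        // statements from the entity state is left out for brevity.
        import java.io.Serializable;
        import org.hibernate.EmptyInterceptor;
        import org.hibernate.type.Type;

        public class DmlScriptInterceptor extends EmptyInterceptor {

            private void log(String line) {
                System.out.println(line);   // replace with an append to the script file
            }

            @Override
            public boolean onSave(Object entity, Serializable id, Object[] state,
                                  String[] propertyNames, Type[] types) {
                log("-- INSERT " + entity.getClass().getSimpleName() + " id=" + id);
                return false;   // entity state was not modified
            }

            @Override
            public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                        Object[] previousState, String[] propertyNames, Type[] types) {
                log("-- UPDATE " + entity.getClass().getSimpleName() + " id=" + id);
                return false;
            }

            @Override
            public void onDelete(Object entity, Serializable id, Object[] state,
                                 String[] propertyNames, Type[] types) {
                log("-- DELETE " + entity.getClass().getSimpleName() + " id=" + id);
            }
        }

    The interceptor would be attached when the session is opened, for example sessionFactory.openSession(new DmlScriptInterceptor()).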

  • WorkFlow and WCF dynamically launching WorkFlows

    - by Raj73
    I have a WF workflow which will be hosted in WCF. The service contract will contain a single operation with two parameters: parameter 1 will be a string containing the name of the workflow to invoke, and parameter 2 will contain the input for the invoked workflow. All operations will take the same parameter, and all operations will return the same return value. I have created the service implementation, and I would like to start executing the appropriate workflow depending on the value of parameter 1 and return its value (there can be a number of workflow classes, say Operation1, Operation2, ..., which will be passed in as the value in parameter 1). How can I instantiate the different workflow classes, pass parameters to them, and get the return values from them, which I should then pass back to the calling client? (Also, should I be using ReceiveActivities in all of my launchable workflow classes?) Any code samples or pointers would help.

  • Why is my numpy C extension slow?

    - by Bitwise
    I am working on large numpy arrays, and some native numpy operations are too slow for my needs (for example simple operations such as "bitwise" A&B). I started looking into writing C extensions to try and improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000,1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C extension overhead that bad? Any ideas how to speed things up?

  • Optimizing an Oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking almost 200+ seconds to execute. I've pasted the execution plan as well.

        SELECT user_id, ROLE_ID, effective_from_date, effective_to_date, participant_code, ACTIVE
        FROM CMP_USER_ROLE E
        WHERE ACTIVE = 0
          AND (SYSDATE BETWEEN effective_from_date AND effective_to_date
               OR TO_CHAR(effective_to_date, 'YYYY-Q') = '2010-2')
          AND participant_code = 'NY005'
          AND NOT EXISTS (
                SELECT 1
                FROM CMP_USER_ROLE r
                WHERE r.USER_ID = E.USER_ID
                  AND r.role_id = E.role_id
                  AND r.ACTIVE = 4
                  AND E.effective_to_date <= (SELECT MAX(last_update_date)
                                              FROM CMP_USER_ROLE S
                                              WHERE S.role_id = r.role_id
                                                AND S.role_id = r.role_id
                                                AND S.ACTIVE = 4))

    Explain plan:

        -----------------------------------------------------------------------------------------------------
        | Id  | Operation                         | Name             | Rows | Bytes | Cost (%CPU)| Time     |
        -----------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT                  |                  |    1 |    37 |    154  (2)| 00:00:02 |
        |*  1 |  FILTER                           |                  |      |       |            |          |
        |*  2 |   TABLE ACCESS BY INDEX ROWID     | USER_ROLE        |    1 |    37 |     30  (0)| 00:00:01 |
        |*  3 |    INDEX RANGE SCAN               | N_USER_ROLE_IDX6 |   27 |       |      3  (0)| 00:00:01 |
        |*  4 |   FILTER                          |                  |      |       |            |          |
        |   5 |    HASH GROUP BY                  |                  |    1 |    47 |    124  (2)| 00:00:02 |
        |*  6 |     TABLE ACCESS BY INDEX ROWID   | USER_ROLE        |  159 |  3339 |    119  (1)| 00:00:02 |
        |   7 |      NESTED LOOPS                 |                  |   11 |   517 |    123  (1)| 00:00:02 |
        |*  8 |       TABLE ACCESS BY INDEX ROWID | USER_ROLE        |    1 |    26 |      4  (0)| 00:00:01 |
        |*  9 |        INDEX RANGE SCAN           | N_USER_ROLE_IDX5 |    1 |       |      3  (0)| 00:00:01 |
        |* 10 |       INDEX RANGE SCAN            | N_USER_ROLE_IDX2 |  957 |       |     74  (2)| 00:00:01 |
        -----------------------------------------------------------------------------------------------------

  • SQL Server 2000 Stored Procedure Prevents Parallelism or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp with WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither of those helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speeds? Thank you, any ideas would be great... I'm stuck.

  • Dynamically changing databases in SQL Server 2000

    - by spuppett
    At work we have a number of databases that we need to run the same operations on. I would like to write one SP that would loop over the operations and set the database at the beginning of the loop (example to follow). I've tried sp_executesql('USE ' + @db_id), but that only sets the DB for the scope of that stored procedure. I don't really want to loop with hard-coded database names, because we need to do similar things in many different places and it's tough to remember where things need to change if we add another DB. Any thoughts? Example:

        DECLARE zdb_loop CURSOR FAST_FORWARD FOR
            SELECT DISTINCT db_id FROM DBS ORDER BY db_id
        OPEN zdb_loop
        FETCH NEXT FROM zdb_loop INTO @db_id
        WHILE @@FETCH_STATUS = 0
        BEGIN
            USE @db_id
            -- Do stuff against 3 or 4 different DBs
            FETCH NEXT FROM zdb_loop INTO @db_id
        END
        CLOSE zdb_loop
        DEALLOCATE zdb_loop

  • Getting a query to index seek (rather than scan)

    - by PaulB
    Running the following query (SQL Server 2000), the execution plan shows that it used an index seek, and Profiler shows it's doing 71 reads with a duration of 0.

        select top 1 id from table where name = '0010000546163' order by id desc

    Contrast that with the following, which uses an index scan with 8,500 reads and a duration of about a second.

        declare @p varchar(20)
        select @p = '0010000546163'
        select top 1 id from table where name = @p order by id desc

    Why is the execution plan different? Is there a way to change the second method to seek? Thanks.

    EDIT: The table looks like

        CREATE TABLE [table] (
            [Id] [int] IDENTITY (1, 1) NOT NULL,
            [Name] [varchar] (13) COLLATE Latin1_General_CI_AS NOT NULL
        )

    Id is the primary clustered key. There is a non-unique index on Name and a unique composite index on Id/Name. There are other columns - left them out for brevity.

  • Minimizing distance to a weighted grid

    - by Andrew Tomazos - Fathomling
    Let's suppose you have a 1000x1000 grid of positive integer weights W. We want to find the cell that minimizes the average weighted distance to each cell. The brute-force way to do this would be to loop over each candidate cell and calculate the distance:

        int best_x, best_y, best_dist;
        for x0 = 1:1000,
            for y0 = 1:1000,
                int total_dist = 0;
                for x1 = 1:1000,
                    for y1 = 1:1000,
                        total_dist += W[x1,y1] * sqrt((x0-x1)^2 + (y0-y1)^2);
                if (total_dist < best_dist)
                    best_x = x0;
                    best_y = y0;
                    best_dist = total_dist;

    This takes ~10^12 operations, which is too long. Is there a way to do this in or near ~10^8 or so operations?

  • web service data type (contract)

    - by cyberguest
    Hi, I have a general design question. We have a fairly big data model that represents a clinical object; the object itself has 200+ child attributes in the hierarchy. We have a SetObject operation and a GetObject operation. My question is: best-practice-wise, would it make sense to use that single data model in both operations, or a different data model for each? The Get operation will return much more detail than what's needed for Set. An example of what I mean: the data model has, say, ProviderId and ProviderName attributes. In the Get operation, both ProviderId and ProviderName would need to be returned. However, in the Set operation only the ProviderId is needed, and ProviderName is ignored by the service since the system has that information already. In this case, if the Get and Set operations use the same data model, ProviderName is exposed even for the Set operation. Does that confuse the consuming developer?
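
    One way to see the trade-off is to sketch the two-model option. The classes below are purely illustrative (hypothetical names, shown as plain classes rather than real WCF data contracts): the write model carries only what the service actually consumes, so nothing misleading is exposed to the caller.

        // Illustrative only - hypothetical names, not the real contracts.
        // Read model: everything the caller may want to display.
        class ClinicalObjectDetails {
            long   providerId;
            String providerName;   // server-owned, returned for convenience
            // ... the remaining ~200 attributes
        }

        // Write model: only what the service needs to apply a change.
        class ClinicalObjectUpdate {
            long providerId;       // ProviderName deliberately absent - the service ignores it anyway
            // ... only the writable attributes
        }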

  • Delete from empty table taking forever

    - by Will
    Hello, I have an empty table that previously had a large number of rows. The table has about 10 columns and indexes on many of them, as well as indexes on multiple columns.

        DELETE FROM item WHERE 1=1

    This takes approximately 40 seconds to complete.

        SELECT * FROM item

    This takes 4 seconds. The execution plan of SELECT * FROM item shows the following:

        SQL> select * from midas_item;

        no rows selected

        Elapsed: 00:00:04.29

        Execution Plan
        ----------------------------------------------------------
           0      SELECT STATEMENT Optimizer=CHOOSE (Cost=19 Card=123 Bytes=7380)
           1    0   TABLE ACCESS (FULL) OF 'MIDAS_ITEM' (Cost=19 Card=123 Bytes=7380)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
               5263  consistent gets
               5252  physical reads
                  0  redo size
               1030  bytes sent via SQL*Net to client
                372  bytes received via SQL*Net from client
                  1  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  0  rows processed

    Any idea why these would be taking so long, and how to fix it, would be greatly appreciated!!

  • get function address from name [.debug_info ??]

    - by user361190
    Hi, I was trying to write a small debug utility, and for this I need to get the address of a function/global variable given its name. This is a built-in debug utility, which means that the debug utility will run from within the code to be debugged - or, in plain words, I cannot parse the executable file. Now, is there a well-known way to do that? The plan I have is to cause the .debug_* sections to be loaded into memory, which I plan to do with a cheap trick like this in the ld script:

        .data :
        {
            *(.data)
            __sym_start = .;
            *(.debug_*);
            __sym_end = .;
        }

    Now I have to parse the section to get the information I need, but I am not sure this is doable, or whether there are issues with it - this is all just theory. But it also seems like too much work :-) Is there a simpler way? Or if someone can tell me upfront why my scheme will not work, that will also be helpful. Thanks in advance, Alex.

  • Why does using set -e cause my script to fail when called in crontab

    - by SDGuero
    I have a bash script that performs several file operations. When any user runs this script, it executes successfully and outputs a few lines of text but when I try to cron it there are problems. It seems to run (I see an entry in cron log showing it was kicked off) but nothing happens, it doesn't output anything and doesn't do any of its file operations. It also doesn't appear in the running processes anywhere so it appears to be exiting out immediately. After some troubleshooting I found that removing "set -e" resolved the issue, it now runs from the system cron without a problem. So it works, but I'd rather have set -e enabled so the script exits if there is an error. Does anyone know why "set -e" is causing my script to exit? Thanks for the help, Ryan

  • How to work with images (PNGs) of size 2-4 MB

    - by Sam
    I am working with images of size 2 to 4 MB. I want to edit an image of resolution 1200x1600, performing scaling, translation and rotation operations. I want to draw other images on top of that and save the result to the photo album. My app is crashing (giving a memory warning) after I successfully edit one image and save it to the album. I am releasing some images when I get the memory warning, but it still crashes, since I am working with two images of 3 MB each and a context of size 1200x1600 while also getting an image from the context at the same time. Is there any way to compress the images and work with them while performing scaling, translation and rotation operations?
