Search Results

Search found 3521 results on 141 pages for 'parallel computing'.

  • Help me understand this "Programming pearls" bitsort program

    - by ardsrk
    Jon Bentley, in Column 1 of his book Programming Pearls, introduces a technique for sorting a sequence of distinct non-negative integers using bit vectors. I have taken the program bitsort.c from here and pasted it below:

        /* Copyright (C) 1999 Lucent Technologies */
        /* From 'Programming Pearls' by Jon Bentley */
        /* bitsort.c -- bitmap sort from Column 1
         * Sort distinct integers in the range [0..N-1] */
        #include <stdio.h>

        #define BITSPERWORD 32
        #define SHIFT 5
        #define MASK 0x1F
        #define N 10000000

        int a[1 + N/BITSPERWORD];

        void set(int i)  { a[i>>SHIFT] |=  (1<<(i & MASK)); }
        void clr(int i)  { a[i>>SHIFT] &= ~(1<<(i & MASK)); }
        int  test(int i) { return a[i>>SHIFT] & (1<<(i & MASK)); }

        int main()
        {
            int i;
            for (i = 0; i < N; i++)
                clr(i);
            /* Replace above 2 lines with below 3 for word-parallel init
            int top = 1 + N/BITSPERWORD;
            for (i = 0; i < top; i++)
                a[i] = 0;
            */
            while (scanf("%d", &i) != EOF)
                set(i);
            for (i = 0; i < N; i++)
                if (test(i))
                    printf("%d\n", i);
            return 0;
        }

    I understand what the functions clr, set and test are doing and explain them below (please correct me if I am wrong here): clr clears the ith bit, set sets the ith bit, and test returns the value at the ith bit. Now, I don't understand how the functions do what they do. I am unable to figure out all the bit manipulation happening in those three functions. Please help.
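
    A worked example may make the indexing concrete. The sketch below (Python, purely illustrative, with names of my choosing) shows the two halves of each operation: i >> SHIFT picks which 32-bit word holds bit i, and 1 << (i & MASK) builds a mask selecting bit i % 32 inside that word.

        # Minimal sketch of the bitmap addressing used by set/clr/test.
        # For i = 100: word index = 100 >> 5   = 3 (i.e. 100 // 32),
        #              bit offset = 100 & 0x1F = 4 (i.e. 100 %  32).
        BITSPERWORD, SHIFT, MASK, N = 32, 5, 0x1F, 10000000
        a = [0] * (1 + N // BITSPERWORD)

        def set_bit(i):   # a[i>>SHIFT] |= (1 << (i & MASK))
            a[i >> SHIFT] |= 1 << (i & MASK)

        def clr_bit(i):   # ANDing with the complement turns just that bit off
            a[i >> SHIFT] &= ~(1 << (i & MASK))

        def test_bit(i):  # nonzero exactly when bit i is set
            return a[i >> SHIFT] & (1 << (i & MASK))

        set_bit(100)
        assert test_bit(100) and not test_bit(101)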

    Read the article

  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas to approach this problem. I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (it doesn't ever change). Now I want to compute the mean and covariance matrix of all the groups. So on the CPU it's roughly the same as:

        for each point p {
            mean[p.group]       += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }
        for each group g {
            mean[g]       /= count[g];
            covariance[g]  = covariance[g]/count[g] - mean[g]*mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around. So the first idea I came up with was to first sort the points by their groups. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that they're sorted by groups and I could possibly come up with an adaptation of the segmented scan algorithms presented here.

    But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using a shared memory element per thread and a thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more on compute capability 1.x than 2.x, but still). Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or some segmented reduction algorithm minimizing shared memory/register usage.

    I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, as the general concepts of GPU computing don't really differ across the major frameworks.
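
    For reference, here is a minimal CPU-side sketch of the reduction described above, in Python with NumPy (illustrative only: the names are mine, it reads p.pos * p.pos as the outer product so the full covariance matrix comes out, and it assumes every group is non-empty).

        import numpy as np

        NUM_GROUPS = 15

        def group_stats(positions, groups):
            """positions: (n, 3) float array; groups: (n,) ints in [0, NUM_GROUPS)."""
            dim = positions.shape[1]
            mean = np.zeros((NUM_GROUPS, dim))
            cov = np.zeros((NUM_GROUPS, dim, dim))
            count = np.zeros(NUM_GROUPS)
            for p, g in zip(positions, groups):   # the segmented reduction, serially
                mean[g] += p
                cov[g] += np.outer(p, p)          # running sum of p * p^T
                count[g] += 1
            mean /= count[:, None]
            cov = cov / count[:, None, None] - mean[:, None, :] * mean[:, :, None]
            return mean, cov, count

    A bit-exact serial version like this is handy for validating whatever sorted/segmented GPU formulation you end up with.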

    Read the article

  • MacPorts 1.8.2 fails to build db46 on Mac OS X 10.6.3

    - by themoch
    I'm trying to put a development environment on my Mac, and to do so I need to install several packages which require db46. When running sudo port install db46 I get the following error:

        --->  Computing dependencies for db46
        --->  Fetching db46
        --->  Attempting to fetch patch.4.6.21.1 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch patch.4.6.21.2 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch patch.4.6.21.3 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch patch.4.6.21.4 from http://www.oracle.com/technology/products/berkeley-db/db/update/4.6.21/
        --->  Attempting to fetch db-4.6.21.tar.gz from http://distfiles.macports.org/db4/4.6.21_6
        --->  Verifying checksum(s) for db46
        --->  Extracting db46
        --->  Applying patches to db46
        --->  Configuring db46
        --->  Building db46
        Error: Target org.macports.build returned: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_databases_db46/work/db-4.6.21/build_unix" && /usr/bin/make -j2 all " returned error 2
        Command output:
        ../dist/../libdb_java/db_java_wrap.c:9464: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9487: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9509: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9532: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9563: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
        ../dist/../libdb_java/db_java_wrap.c:9588: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9613: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
        ../dist/../libdb_java/db_java_wrap.c:9638: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9666: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9691: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jlong'
        ../dist/../libdb_java/db_java_wrap.c:9716: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9739: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9771: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9796: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9819: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9842: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9867: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jobject'
        ../dist/../libdb_java/db_java_wrap.c:9899: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9920: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9943: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:9966: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jstring'
        ../dist/../libdb_java/db_java_wrap.c:9991: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'jint'
        ../dist/../libdb_java/db_java_wrap.c:10010: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:10046: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        ../dist/../libdb_java/db_java_wrap.c:10071: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'void'
        make: *** [db_java_wrap.lo] Error 1
        make: *** Waiting for unfinished jobs....
        Note: Some input files use unchecked or unsafe operations.
        Note: Recompile with -Xlint:unchecked for details.
        cd ./classes && jar cf ../db.jar ./com/sleepycat
        Error: Status 1 encountered during processing.

    I have removed my /usr/local folder completely and it does not seem to help.

    Read the article

  • multithreading with database

    - by Darsin
    I am looking for a strategy to utilize multithreading (probably asynchronous delegates) to do a synchronous operation. I am new to multithreading, so I will outline my scenario first. This synchronous operation is currently done for one set of data (a portfolio) based on the parameters provided. The (pseudo-code) implementation is given below:

        public DataSet DoTests(int fundId, DateTime portfolioDate)
        {
            // Get test results for the portfolio.
            // Call the database adapter method, which in turn is a stored procedure,
            // which in turn runs a series of "rule" stored procs and fills a local temp table and returns it back.
            DataSet resultsDataSet = GetTestResults(fundId, portfolioDate);
            try
            {
                // Do some local processing on the results
                DoSomeProcessing(resultsDataSet);

                // Save the results in the Test, TestResults and TestAllocations tables in a transaction.
                // Sets a global transaction which is provided to all the adapter methods called below.
                // It is defined in the Base class.
                StartTransaction("TestTransaction");

                // Save Test and get a testId
                int testId = UpdateTest(resultsDataSet); // Adapter method, uses the same transaction

                // Update testId in the other tables in the dataset
                UpdateTestId(resultsDataSet, testId);

                // Update TestResults
                UpdateTestResults(resultsDataSet); // Adapter method, uses the same transaction

                // Update TestAllocations
                UpdateTestAllocations(resultsDataSet); // Adapter method, uses the same transaction

                // It is defined in the base class
                CommitTransaction("TestTransaction");
            }
            catch
            {
                RollbackTransaction("TestTransaction");
            }
            return resultsDataSet;
        }

    Now the requirement is to do it for multiple sets of data. One way would be to call the above DoTests() method in a loop and get the data; I would prefer doing it in parallel. But there are certain catches:

    * StartTransaction() creates a connection (and transaction) every time it is called.
    * All the underlying database tables and procedures are the same for each call of DoTests() (obviously).

    Thus my questions are:

    * Will using multithreading improve performance at all?
    * What are the chances of deadlock, especially when new TestIds are being created and the Tests, TestResults and TestAllocations rows are being saved? How can these deadlocks be handled?
    * Is there any other, more efficient way of doing the above operation apart from looping over the DoTests() method repeatedly?
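
    One general pattern worth considering (sketched in Python rather than C#, purely to show the shape; the helper names are hypothetical) is to treat each DoTests call as an independent unit of work with its own connection and short transaction, and run those units on a bounded pool. Whether it actually helps depends on where the time goes: if the rule stored procedures dominate, the database rather than the client threads is the bottleneck.

        from concurrent.futures import ThreadPoolExecutor

        def run_tests(fund_id, portfolio_date):
            # Hypothetical helpers throughout: each worker opens its own
            # connection, so transactions never cross threads (the analogue
            # of StartTransaction being per-call).
            conn = open_connection()
            try:
                with conn.begin():                 # one short transaction per unit of work
                    results = get_test_results(conn, fund_id, portfolio_date)
                    save_results(conn, results)
                return results
            finally:
                conn.close()

        def run_all(requests, workers=4):
            # A bounded pool limits concurrent connections and, with it,
            # lock contention on the shared Test* tables.
            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(run_tests, f, d) for f, d in requests]
                return [f.result() for f in futures]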

    Read the article

  • Several clients waiting for the same event

    - by ff8mania
    I'm developing a communication API to be used by a lot of generic clients to communicate with a proprietary system. This proprietary system exposes an API, and I use particular classes to send and wait for messages from this system: the system alerts me that a message is ready using an event named OnMessageArrived. My idea is to expose a simple SendSyncMessage(message) method that lets the user/client simply send a message, with the method returning the response. The client:

        using (Communicator c = new Communicator())
        {
            response = c.SendSync(message);
        }

    The Communicator class is done this way:

        public class Communicator : IDisposable
        {
            // Proprietary system object
            ExternalSystem c;
            String currentRespone;
            Guid currentGUID;
            private readonly ManualResetEvent _manualResetEvent;
            private ManualResetEvent _manualResetEvent2;
            String systemName = "system";
            String ServerName = "server";

            public Communicator()
            {
                _manualResetEvent = new ManualResetEvent(false);
                // These methods are from the proprietary system API
                c = SystemInstance.CreateInstance();
                c.Connect(systemName, ServerName);
            }

            private void ConnectionStarter(object data)
            {
                c.OnMessageArrivedEvent += c_OnMessageArrivedEvent;
                _manualResetEvent.WaitOne();
                c.OnMessageArrivedEvent -= c_OnMessageArrivedEvent;
            }

            public String SendSync(String Message)
            {
                Thread _internalThread = new Thread(ConnectionStarter);
                _internalThread.Start(c);
                _manualResetEvent2 = new ManualResetEvent(false);
                String toRet;
                int messageID;
                currentGUID = Guid.NewGuid();
                c.SendMessage(Message, "Request", currentGUID.ToString());
                _manualResetEvent2.WaitOne();
                toRet = currentRespone;
                return toRet;
            }

            void c_OnMessageArrivedEvent(int Id, string root, string guid, int TimeOut, out int ReturnCode)
            {
                if (!guid.Equals(currentGUID.ToString()))
                {
                    _manualResetEvent2.Set();
                    ReturnCode = 0;
                    return;
                }
                object newMessage;
                c.FetchMessage(Id, 7, out newMessage);
                currentRespone = newMessage.ToString();
                ReturnCode = 0;
                _manualResetEvent2.Set();
            }
        }

    I'm really a noob at using wait handles, but my idea was to create an instance that sends the message and waits for an event. When an event arrives, it checks whether the message is the one I expect (checking the unique GUID); otherwise it continues to wait for the next event. This matters because there could be (and usually are) a lot of clients working concurrently, and I want them to work in parallel. As implemented at the moment, if I run client 1, client 2 and client 3, client 2 starts sending messages only once client 1 has finished, and client 3 only once client 2 has finished: not what I'm trying to do. Can you help me fix my code and reach my target? Thanks!
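
    A common way to let many callers wait concurrently is to give each outstanding request its own wait handle, stored in a map keyed by the request GUID, so an arriving message wakes only the caller it belongs to. A minimal sketch of that correlation pattern (Python rather than C#, with a caller-supplied transport function; all names are mine, purely illustrative):

        import threading, uuid

        class Correlator:
            def __init__(self):
                self._pending = {}            # guid -> (Event, response slot)
                self._lock = threading.Lock()

            def send_sync(self, send, message, timeout=30):
                guid = str(uuid.uuid4())
                event, slot = threading.Event(), {}
                with self._lock:
                    self._pending[guid] = (event, slot)
                send(message, guid)           # caller-supplied transport function
                if not event.wait(timeout):
                    raise TimeoutError(guid)
                return slot["response"]

            def on_message_arrived(self, guid, response):
                # Called from the transport's event callback; wakes only the
                # caller whose GUID matches, so concurrent waiters never
                # consume each other's notifications.
                with self._lock:
                    entry = self._pending.pop(guid, None)
                if entry:
                    event, slot = entry
                    slot["response"] = response
                    event.set()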

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: what is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        >header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

    with output:

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: it turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
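
    For what it's worth, here is a sketch of the whole-file approach the post-answer conclusion describes (read once, strip newlines, then slice), which avoids rebuilding the buffer one character at a time; the single-header assumption and the names are mine:

        def parse_whole(path, size=36):
            with open(path) as f:
                f.readline()                      # assume a single FASTA header line
                data = f.read().replace('\n', '').upper()
            # Slicing an in-memory string costs O(size) per read and skips
            # the per-character buffer shuffling of the generator above.
            for i in xrange(len(data) - size + 1):
                yield data[i:i + size]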

    Read the article

  • Log4j: Issues about the FallbackErrorHandler

    - by rdogpink
    I am working on a client-server application and wanted to implement a flexible logging framework, so I chose log4j, which doesn't really evolve anymore but is still a handy framework. Because the logging happens across the network, I wanted a solution for the case where the network drive isn't available, so the logger has to change its destination file(s). So I used the FallbackErrorHandler (configured with an XML file) from the log4j library, and the implementation worked: when my network drive isn't available, it switches to a local logfile, so no logging should be lost. But I have had two problems since yesterday and couldn't figure out, or find out, how to solve them.

    No return to the initial logging configuration: when the network drive is back and the logger could write to the old destinations, log4j still logs to the local drive, and I can't figure out how to notify the original (primary) logger to start again. I also tried to attach a second appender to the ErrorHandler, which should mirror the failed primary logger, so that it keeps trying to write to the network destination and, once the network is back, logs to both files, the local one and the one on the network drive. But unfortunately it didn't work out; I only got a failure message that the ErrorHandler content doesn't fit:

        log4j:WARN The content of element type "errorHandler" must match "(param*,root-ref?,logger-ref*,appender-ref?)".

    This is the responsible code:

        <appender name="TraceAppender" class="org.apache.log4j.DailyRollingFileAppender">
          <!-- The second appender-ref "TestAppender" leads to the error. -->
          <errorHandler class="org.apache.log4j.varia.FallbackErrorHandler">
            <logger-ref ref="com.idoh"/>
            <appender-ref ref="TraceFallbackAppender"/>
            <appender-ref ref="TestAppender"/>
          </errorHandler>
          <param name="datePattern" value=".yyyy-MM-dd" />
          <param name="file" value="logs/Trace.txt" />
          <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%-6r %d{HH:mm:ss,SSS} [%t] %-5p - %m%n"/>
          </layout>
        </appender>

    So, how could I trigger log4j to reset to the initial configuration, or hold a second appender parallel to the "local logger"? My application should work by itself and shouldn't have to be restarted often.

    First error message is swallowed: I noticed that the first message, the one which triggers the switch from the primary logger to the FallbackErrorHandler (for example, a logging request to a read-only file), is swallowed, so neither does the primary logger log it (because it can't) nor does the backup logger know what it missed.

    Has anybody else run into this problem and solved it? Or do you have any suggestions?

    Read the article

  • objective C convert NSString to unsigned

    - by user1501354
    I have changed my question. I want to convert an NSString to an unsigned int. Why? Because I want to do a parallel payment in PayPal. Below I have given my code, in which I want to convert the NSString to an unsigned int:

        //optional, set shippingEnabled to TRUE if you want to display shipping
        //options to the user, default: TRUE
        [PayPal getPayPalInst].shippingEnabled = TRUE;

        //optional, set dynamicAmountUpdateEnabled to TRUE if you want to compute
        //shipping and tax based on the user's address choice, default: FALSE
        [PayPal getPayPalInst].dynamicAmountUpdateEnabled = TRUE;

        //optional, choose who pays the fee, default: FEEPAYER_EACHRECEIVER
        [PayPal getPayPalInst].feePayer = FEEPAYER_EACHRECEIVER;

        //for a payment with multiple recipients, use a PayPalAdvancedPayment object
        PayPalAdvancedPayment *payment = [[PayPalAdvancedPayment alloc] init];
        payment.paymentCurrency = @"USD";

        // A payment note applied to all recipients.
        payment.memo = @"A Note applied to all recipients";

        //receiverPaymentDetails is a list of PPReceiverPaymentDetails objects
        payment.receiverPaymentDetails = [NSMutableArray array];
        NSArray *emailArray = [NSArray arrayWithObjects:@"[email protected]", @"[email protected]", nil];

        for (int i = 1; i <= 2; i++) {
            PayPalReceiverPaymentDetails *details = [[PayPalReceiverPaymentDetails alloc] init];

            // Customize the payment notes for one of the recipients.
            if (i == 2) {
                details.description = [NSString stringWithFormat:@"Component %d", i];
            }
            details.recipient = [NSString stringWithFormat:@"%@", [emailArray objectAtIndex:i-1]];

            unsigned order;
            if (i == 1) {
                order = [[feeArray objectAtIndex:0] unsignedIntValue];
            }
            if (i == 2) {
                order = [[amountArray objectAtIndex:0] unsignedIntValue];
            }

            //subtotal of all items for this recipient, without tax and shipping
            details.subTotal = [NSDecimalNumber decimalNumberWithMantissa:order exponent:-4 isNegative:FALSE];

            //invoiceData is a PayPalInvoiceData object which contains tax, shipping,
            //and a list of PayPalInvoiceItem objects
            details.invoiceData = [[PayPalInvoiceData alloc] init];

            //invoiceItems is a list of PayPalInvoiceItem objects
            //NOTE: sum of totalPrice for all items must equal details.subTotal
            //NOTE: example only shows a single item, but you can have more than one
            details.invoiceData.invoiceItems = [NSMutableArray array];
            PayPalInvoiceItem *item = [[PayPalInvoiceItem alloc] init];
            item.totalPrice = details.subTotal;
            [details.invoiceData.invoiceItems addObject:item];

            [payment.receiverPaymentDetails addObject:details];
        }

        [[PayPal getPayPalInst] advancedCheckoutWithPayment:payment];

    Can anybody tell me how to do this conversion? Thanks and regards in advance.
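
    Since decimalNumberWithMantissa:exponent:isNegative: wants an integer mantissa, the conversion being asked for is really decimal string to scaled integer. A language-neutral sketch of that arithmetic in Python (illustrative only; it assumes amounts like "12.34" and the fixed exponent of -4 used above):

        from decimal import Decimal

        def to_mantissa(amount_str, exponent=-4):
            """'12.34' -> 123400, so that 123400 * 10**-4 == 12.34."""
            scaled = Decimal(amount_str) * (10 ** -exponent)
            if scaled != scaled.to_integral_value():
                raise ValueError("amount has more precision than the exponent allows")
            return int(scaled)

        assert to_mantissa("12.34") == 123400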

    Read the article

  • jQuery: Giving each matched element an unique ID

    - by AnGafraidh
    I am writing an 'inline translator' application to be used with a cloud computing platform to extend non-supported languages. The majority of this uses jQuery to find the text value, replace it with the translation, then append the element with a span tag that has a unique ID, to be used elsewhere within the application. The problem arises, however, when there is more than one element (say, two inputs) that has the exact same value to be translated (matched elements). What happens in the function in question is that it puts all matched elements in the same span, taking the second, third, fourth, etc. out of their parent tags. My code is pretty much like this example:

        <script src='jquery-1.4.2.js'></script>
        <script>
        jQuery.noConflict();
        var uniqueID = 'asdfjkl';
        jQuery(window).ready(function() {
            var myQ1 = jQuery("input[id~=test1]");
            myClone = myQ1.clone();
            myClone.val('Replaced this button');
            myQ1.replaceWith('<span id=' + uniqueID + '></span>');
            jQuery('#' + uniqueID).append(myClone);
        });
        </script>
        <table>
          <tr><td>
            <input id='test1' type='button' value="I'm a button!"></input>
            &nbsp;
            <input id='test2' type='button' value="And so am I"></input>
          </td></tr>
          <tr><td>
            <input id='test1' type='button' value="I'm a button!"></input>
          </td></tr>
        </table>

    As a workaround, I've experimented with using a loop to create a class for each span, rising in increment until jQuery("input[id~=test1]").length, but I can't seem to get anything I do to work. Is there any way to give each matched element a unique ID? My fluency in jQuery is being put to the test! Thanks for any help in advance. Aaron

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the computation requested previously has already completed and the result is available, I can cache it and re-use it. However, the time window in which duplicate computation can be requested may be arbitrarily small; e.g. I could get a thousand or a million messages requesting the same expensive computation at the same instant for all practical purposes. There is a commercial product called GigaSpaces that supposedly handles this situation. However, there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through the framework, it seems that a framework solution could make a lot of sense here. Here is what I am proposing for the Akka framework to do:

    1. Create a trait to indicate a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach.

    2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say, LRU) replacement, etc. Akka can also choose to cache only the results of messages that were expensive to process; the messages that took very little time to process can be re-processed again if needed, so there is no need to waste precious buffer space caching them and their results.

    3. When identical messages (received within that time window, possibly "at the same time instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically; essentially, the duplicate messages would never get received by a new actor for processing. They would silently vanish, and the result from processing the message once (whether that computation was already done in the past, or is ongoing right then) would get sent to all appropriate recipients: immediately if already available, and upon completion of the computation if not.

    Note that messages should be considered identical even if the "reply" fields are different, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side effects, for the suggested caching optimization to work without changing the program semantics at all. If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know. Thanks, Is Awesome, Scala
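
    Outside of framework support, the behaviour proposed above is essentially request coalescing: key each computation by a hash of its message, and let duplicates arriving while one is in flight share the same future. A small illustrative sketch in Python (not Akka; all names are mine, and a real version would also keep an LRU cache of completed results for the configured time window):

        import threading
        from concurrent.futures import ThreadPoolExecutor

        class Coalescer:
            def __init__(self, workers=8):
                self._pool = ThreadPoolExecutor(max_workers=workers)
                self._inflight = {}               # message key -> shared Future
                self._lock = threading.Lock()

            def submit(self, key, compute):
                """Duplicate keys submitted while one is in flight share its result."""
                with self._lock:
                    fut = self._inflight.get(key)
                    if fut is not None:
                        return fut                # coalesce: reuse the running work
                    fut = self._pool.submit(compute)
                    self._inflight[key] = fut
                # Register eviction outside the lock, since a finished future
                # runs its callback immediately in the calling thread.
                fut.add_done_callback(lambda f: self._evict(key))
                return fut

            def _evict(self, key):
                with self._lock:
                    self._inflight.pop(key, None)

        # Usage sketch: co.submit(hash(msg), lambda: expensive(msg)).result()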

    Read the article

  • Rendering of graphics different depending on position

    - by jedierikb
    When drawing parallel vertical lines with a fixed distance between them (1.75 pixels) and a non-integer x-value offset applied to both lines, the lines are drawn differently based on the offset. In the picture below are two pairs of very-close-together vertical lines. As you can see, they look very different. This is frustrating, especially when animating the sprite. Any ideas how to ensure that the graphics of sprites with non-integer positions will display the same?

        package {
            import flash.display.Sprite;
            import flash.display.StageAlign;
            import flash.display.StageScaleMode;
            import flash.events.Event;

            public class tmpa extends Sprite {
                private var _sp1:Sprite;
                private var _sp2:Sprite;
                private var _num:Number;

                public function tmpa():void {
                    stage.align = StageAlign.TOP_LEFT;
                    stage.scaleMode = StageScaleMode.NO_SCALE;
                    _sp1 = new Sprite();
                    drawButt(_sp1, 0);
                    _sp1.x = 100;
                    _sp1.y = 100;
                    _num = 0;
                    _sp2 = new Sprite();
                    drawButt(_sp2, _num);
                    _sp2.x = 100;
                    _sp2.y = 200;
                    addChild(_sp1);
                    addChild(_sp2);
                    addEventListener(Event.ENTER_FRAME, efCb, false, 0, true);
                }

                private function efCb(evt:Event):void {
                    _num += .1;
                    if (_num > 400) {
                        _num = 0;
                    }
                    drawButt(_sp2, _num);
                }

                private function drawButt(sp:Sprite, offset:Number):void {
                    var px1:Number = 1 + offset;
                    var px2:Number = 2.75 + offset;
                    sp.graphics.clear();
                    sp.graphics.lineStyle(1, 0, 1, true);
                    sp.graphics.moveTo(px1, 1);
                    sp.graphics.lineTo(px1, 100);
                    sp.graphics.lineStyle(1, 0, 1, true);
                    sp.graphics.moveTo(px2, 1);
                    sp.graphics.lineTo(px2, 100);
                }
            }
        }

    (Edited from the original post, which thought the problem was tied to the x-position of the sprite.)
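
    Anti-aliasing shades a 1-pixel stroke differently depending on the fractional part of its position, so one way to make animated sprites render consistently is to snap drawing offsets to a fixed subpixel phase before drawing. A sketch of the idea (Python for neutrality; whether a half-pixel phase is the right choice depends on how the renderer samples strokes):

        import math

        def snap_offset(x, phase=0.5):
            """Snap x so its fractional part is always `phase`.

            Two lines drawn at snap_offset(a) and snap_offset(b) cross pixel
            boundaries identically, so anti-aliasing shades them the same
            way wherever the sprite is animated to.
            """
            return math.floor(x) + phase

        assert snap_offset(3.14) == 3.5 and snap_offset(7.9) == 7.5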

    Read the article

  • Separation of domain and ui layer in a composite

    - by hansmaad
    Hi all, I'm wondering if there is a pattern for separating the domain logic of a class from the UI responsibilities of the objects in the domain layer. Example:

        // Domain classes
        interface MachinePart {
            CalculateX(in, out)
            // Where do we put these:
            // Draw(Screen) ??
            // ShowProperties(View) ??
            // ...
        }
        class Assembly : MachinePart {
            CalculateX(in, out)
            subParts
        }
        class Pipe : MachinePart {
            CalculateX(in, out)
            length, diameter, ...
        }

    There is an application that calculates the value X for machines assembled from many machine parts. The assembly is loaded from a file representation and is designed as a composite. Each concrete part class stores some data to implement the CalculateX(in, out) method to simulate the behaviour of the whole assembly. The application runs well but has no GUI. To increase usability, a GUI should be developed on top of the existing implementation (changes to the existing code are allowed). The GUI should show a schematic graphical representation of the assembly and provide part-specific dialogs to edit several parameters. To achieve these goals the application needs new functionality for each machine part: drawing a schematic representation on the screen, showing a property dialog, and other things not related to the domain of machine simulation.

    I can think of several solutions for implementing a Draw(Screen) functionality for each part, but I am not happy with any of them. First, I could add a Draw(Screen) method to the MachinePart interface, but this would mix domain code with UI code, and I would have to add a lot of functionality to each machine part class, which makes my domain model hard to read and hard to understand. Another "simple" solution is to make all parts visitable and implement the UI code in visitors, but Visitor does not belong to my favorite patterns. I could derive UI variants from each machine part class to add the UI implementation there, but I would have to check whether each part class is suited for inheritance and be careful about changes to the base classes.

    My current favorite design is to create a parallel composite hierarchy where each component stores data to define a machine part, has implementations for the UI methods, and has a factory method which creates instances of the corresponding domain classes, so that I can "convert" a UI assembly to a domain assembly. But there are problems going back from the created domain hierarchy to the UI hierarchy, for example for showing calculation results in the drawing (imagine some parts store some values during the calculation that I want to show in the schematic representation after the simulation). Maybe there are some proven patterns for such problems?
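
    One more alternative to Visitor and to a parallel hierarchy is an external renderer registry: the domain classes stay UI-free, and the UI layer maps each domain type to its own draw function. A sketch of the idea in Python (illustrative only; the screen API is made up):

        # Domain layer: no UI knowledge at all.
        class Pipe:
            def __init__(self, length, diameter):
                self.length, self.diameter = length, diameter

        class Assembly:
            def __init__(self, sub_parts):
                self.sub_parts = sub_parts

        # UI layer: one registry entry per domain type, registered from UI code.
        RENDERERS = {}

        def renders(domain_type):
            def register(fn):
                RENDERERS[domain_type] = fn
                return fn
            return register

        @renders(Pipe)
        def draw_pipe(pipe, screen):
            screen.draw_cylinder(pipe.length, pipe.diameter)   # hypothetical screen API

        @renders(Assembly)
        def draw_assembly(assembly, screen):
            for part in assembly.sub_parts:
                RENDERERS[type(part)](part, screen)            # dispatch on domain type

    Because there is only one object graph, values stored on domain parts during the calculation are directly available when drawing, which sidesteps the back-mapping problem of the parallel hierarchy.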

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors).

    First up, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this:

    * All JavaScript files are compressed and loaded at the bottom of the page.
    * All JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time.
    * Pages only load the JavaScript files that they require, rather than loading one monolithic file, most of which will not be required.

    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, then the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the unneeded JS means that the browser doesn't have to interpret or execute all this additional code; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, each file would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached.

    So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS: is monolithic better?

    Read the article

  • Collision Detection problem (intersection with plane)

    - by Demi
    I'm doing a scene using OpenGL (a house). I want to do some collision detection, mainly with the walls in the house. I have tried the following code:

        // a plane is represented with a normal and a position in space
        Vector planeNor(0,0,1);
        Vector position(0,0,-10);
        Plane p(planeNor, position);

        Vector vel(0,0,-1);
        double lamda;    // this is the intersection point
        Vector pNormal;  // the normal of the intersection

        // this method is from Nehe's Lesson 30
        coll = p.TestIntersionPlane(vel, Z, lamda, pNormal);

        glPushMatrix();
        glBegin(GL_QUADS);
        if (coll)
            glColor3f(1,0,0);
        else
            glColor3f(1,1,1);
        glVertex3d(0,0,-10);
        glVertex3d(3,0,-10);
        glVertex3d(3,3,-10);
        glVertex3d(0,3,-10);
        glEnd();
        glPopMatrix();

    Nehe's method:

        #define EPSILON 1.0e-8
        #define ZERO EPSILON

        bool Plane::TestIntersionPlane(const Vector3& position, const Vector3& direction,
                                       double& lamda, Vector3& pNormal)
        {
            // Dot Product Between Plane Normal And Ray Direction
            double DotProduct = direction.scalarProduct(normal);
            double l2;

            // Determine If Ray Parallel To Plane
            if ((DotProduct < ZERO) && (DotProduct > -ZERO))
                return false;

            // Find Distance To Collision Point
            l2 = (normal.scalarProduct(position)) / DotProduct;

            // Test If Collision Behind Start
            if (l2 < -ZERO)
                return false;

            pNormal = normal;
            lamda = l2;
            return true;
        }

    Z is initially (0,0,0), and every time I move the camera towards the plane I reduce its z component by 0.1 (i.e. Z.z -= 0.1). I know that the problem is with the vel vector, but I can't figure out what the right value should be. Can anyone please help me?
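
    For reference, the standard ray-plane intersection is t = n.(p0 - o) / (n.d), where o is the ray origin, d its direction, p0 any point on the plane and n the plane normal. Two things stand out against that formula: the quoted method never uses the plane's own position member, and the call passes vel and Z into the (position, direction) slots. A minimal sketch of the standard test in Python:

        def ray_plane_t(origin, direction, plane_point, normal, eps=1e-8):
            """Return t >= 0 such that origin + t*direction lies on the plane,
            or None if the ray is parallel to the plane or points away from it."""
            dot = sum(d * n for d, n in zip(direction, normal))   # n . d
            if abs(dot) < eps:
                return None                                       # parallel
            diff = [p - o for p, o in zip(plane_point, origin)]   # p0 - o
            t = sum(f * n for f, n in zip(diff, normal)) / dot    # n.(p0-o) / n.d
            return t if t >= 0 else None

        # Camera at the origin moving along (0,0,-1) toward the plane z = -10:
        print(ray_plane_t((0, 0, 0), (0, 0, -1), (0, 0, -10), (0, 0, 1)))  # 10.0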

    Read the article

  • Asynchronous While Loop?

    - by o7th Web Design
    I have a pretty great SqlDataReader wrapper in which I can map the output into a strongly typed list. What I am finding now is that on larger datasets with larger numbers of columns, performance could probably be a bit better if I can optimize my mapping. In thinking about this, there is one section in particular that I am concerned about, as it seems to be the heaviest hitter:

        while (_Rdr.Read())
        {
            T newObject = new T();
            for (int i = 0; i <= _Rdr.FieldCount - 1; ++i)
            {
                PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()];
                if ((info != null) && info.CanWrite)
                {
                    info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null);
                }
            }
            _en.Add(newObject);
        }
        _Rdr.Close();

    What I would really like to know is whether there is a way I can make this loop asynchronous. I feel that will make all the difference in the world with this beast :) Here is the entire Map method, in case anyone can see where I can make further improvements on it:

        // Map our datareader object to a strongly typed list
        private static IList<T> Map<T>(IDataReader _Rdr) where T : new()
        {
            try
            {
                Type _t = typeof(T);
                List<T> _en = new List<T>();
                Hashtable _ht = new Hashtable();
                PropertyInfo[] _props = _t.GetProperties();
                Parallel.ForEach(_props, info =>
                {
                    _ht[info.Name.ToUpper()] = info;
                });
                while (_Rdr.Read())
                {
                    T newObject = new T();
                    for (int i = 0; i <= _Rdr.FieldCount - 1; ++i)
                    {
                        PropertyInfo info = (PropertyInfo)_ht[_Rdr.GetName(i).ToUpper()];
                        if ((info != null) && info.CanWrite)
                        {
                            info.SetValue(newObject, (_Rdr.GetValue(i) is DBNull) ? default(T) : _Rdr.GetValue(i), null);
                        }
                    }
                    _en.Add(newObject);
                }
                _Rdr.Close();
                return _en;
            }
            catch (Exception ex)
            {
                _Msg += "Wrapper.Map Exception: " + ex.Message;
                ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.Wrapper.Map", _Msg);
                return default(IList<T>);
            }
        }
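
    Independent of making the loop asynchronous, the per-row cost can usually be cut by hoisting the reflection lookups out of the loop: resolve each column ordinal to its target property once, then apply that prebuilt plan to every row. Sketched in Python for neutrality (names are mine; the same idea applies directly to the reader loop above):

        def build_plan(column_names, settable_fields):
            # Resolve "which attribute does column i map to" exactly once,
            # instead of repeating the name lookup for every row * column.
            plan = []
            for i, name in enumerate(column_names):
                field = settable_fields.get(name.upper())
                if field is not None:
                    plan.append((i, field))
            return plan

        def map_rows(rows, column_names, make_object, settable_fields):
            plan = build_plan(column_names, settable_fields)
            out = []
            for row in rows:                      # the hot loop does no name lookups
                obj = make_object()
                for i, field in plan:
                    if row[i] is not None:        # analogue of the DBNull check
                        setattr(obj, field, row[i])
                out.append(obj)
            return out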

    Read the article

  • iproute2 not functioning ("RTNETLINK answers: Operation not supported")

    - by James Watt
    The command and error message:

        gtwy ~ # ip rule add from 64.251.23.186 table t1
        RTNETLINK answers: Operation not supported

    An older article about the same problem, which did not help me: http://forums.gentoo.org/viewtopic-t-696982-start-0-postdays-0-postorder-asc-highlight-.html

    I have looked on Google at great length to try to find a solution. It seems that my kernel configuration is missing something? Any help or ideas would be appreciated. My system/kernel is: 2.6.36-gentoo-r5 #3 SMP Thu Jan 13 10:49:06 EST 2011 x86_64 Intel(R) Xeon(R) CPU X3220 @ 2.40GHz GenuineIntel GNU/Linux. I am posting this on SuperUser since this system is used as a workstation, and this problem is unrelated to specific tasks that are handled exclusively by servers.

    iproute2 is installed:

        gtwy etc # emerge --search iproute2
        Searching...
        [ Results for search key : iproute2 ]
        [ Applications found : 1 ]

        *  sys-apps/iproute2
              Latest version available: 2.6.35-r2
              Latest version installed: 2.6.35-r2
              Size of files: 378 kB
              Homepage:      http://www.linuxfoundation.org/collaborate/workgroups/networking/iproute2
              Description:   kernel routing and traffic control utilities
              License:       GPL-2

    A small snippet of my kernel .config (view entire .config):

        gtwy linux # cat .config | grep NETLINK
        CONFIG_NETFILTER_NETLINK=y
        CONFIG_NETFILTER_NETLINK_QUEUE=y
        CONFIG_NETFILTER_NETLINK_LOG=y
        CONFIG_NF_CT_NETLINK=y
        CONFIG_SCSI_NETLINK=y
        gtwy linux # cat .config | grep IP_ADVANCED_ROUTER
        CONFIG_IP_ADVANCED_ROUTER=y
        gtwy linux # cat .config | grep INGRESS
        CONFIG_NET_SCH_INGRESS=y
        gtwy linux # cat .config | grep NET_SCHED
        CONFIG_NET_SCHED=y

    emerge --info:

        Portage 2.1.9.25 (default/linux/amd64/10.0, gcc-4.1.2, glibc-2.10.1-r1, 2.6.36-gentoo-r5 x86_64)
        =================================================================
        System uname: Linux-2.6.36-gentoo-r5-x86_64-Intel-R-_Xeon-R-_CPU_X3220_@_2.40GHz-with-gentoo-1.12.13
        Timestamp of tree: Thu, 13 Jan 2011 01:15:01 +0000
        app-shells/bash:     4.0_p37
        dev-java/java-config: 1.3.7-r1, 2.1.10
        dev-lang/python:     2.4.6, 2.5.4-r4, 2.6.5-r2, 3.1.2-r3
        sys-apps/baselayout: 1.12.13
        sys-apps/sandbox:    1.6-r2
        sys-devel/autoconf:  2.13, 2.65
        sys-devel/automake:  1.9.6-r2::<unknown repository>, 1.10.2, 1.11.1
        sys-devel/binutils:  2.20.1-r1
        sys-devel/gcc:       4.1.2, 4.3.4, 4.4.3-r2
        sys-devel/gcc-config: 1.4.1
        sys-devel/libtool:   2.2.6b
        sys-devel/make:      3.81
        virtual/os-headers:  2.6.30-r1 (sys-kernel/linux-headers)
        ACCEPT_KEYWORDS="amd64"
        ACCEPT_LICENSE="*"
        CBUILD="x86_64-pc-linux-gnu"
        CFLAGS="-march=nocona -O2 -pipe"
        CHOST="x86_64-pc-linux-gnu"
        CONFIG_PROTECT="/etc /var/bind"
        CONFIG_PROTECT_MASK="/etc/ca-certificates.conf /etc/env.d /etc/env.d/java/ /etc/fonts/fonts.conf /etc/gconf /etc/php/apache2-php5/ext-active/ /etc/php/cgi-php5/ext-active/ /etc/php/cli-php5/ext-active/ /etc/revdep-rebuild /etc/sandbox.d /etc/terminfo"
        CXXFLAGS="-march=nocona -O2 -pipe"
        DISTDIR="/usr/portage/distfiles"
        FEATURES="assume-digests binpkg-logs distlocks fixlafiles fixpackages news parallel-fetch protect-owned sandbox sfperms strict unknown-features-warn unmerge-logs unmerge-orphans userfetch"
        GENTOO_MIRRORS="http://gentoo.chem.wisc.edu/gentoo"
        LC_ALL="en_US.UTF-8"
        LDFLAGS="-Wl,-O1 -Wl,--as-needed"
        LINGUAS="en"
        MAKEOPTS="-j5"
        PKGDIR="/usr/portage/packages"
        PORTAGE_CONFIGROOT="/"
        PORTAGE_RSYNC_OPTS="--recursive --links --safe-links --perms --times --compress --force --whole-file --delete --stats --timeout=180 --exclude=/distfiles --exclude=/local --exclude=/packages"
        PORTAGE_TMPDIR="/var/tmp"
        PORTDIR="/usr/portage"
        PORTDIR_OVERLAY="/usr/local/portage"
        SYNC="rsync://rsync.namerica.gentoo.org/gentoo-portage"
        USE="acl amd64 apache2 berkdb bzip2 cli cracklib crypt ctype cups curl cxx dri fortran gdbm gpm iconv jpeg jpeg2k libwww mmx modules mudflap multilib mysql ncurses nls nptl nptlonly openmp pam pcre perl php png pppd python readline session sockets sse sse2 ssl symlink sysfs tcpd threads unicode vhosts xml xorg xsl zlib"
        ALSA_CARDS="ali5451 als4000 atiixp atiixp-modem bt87x ca0106 cmipci emu10k1x ens1370 ens1371 es1938 es1968 fm801 hda-intel intel8x0 intel8x0m maestro3 trident usb-audio via82xx via82xx-modem ymfpci"
        ALSA_PCM_PLUGINS="adpcm alaw asym copy dmix dshare dsnoop empty extplug file hooks iec958 ioplug ladspa lfloat linear meter mmap_emul mulaw multi null plug rate route share shm softvol"
        APACHE2_MODULES="actions alias auth_basic authn_alias authn_anon authn_dbm authn_default authn_file authz_dbm authz_default authz_groupfile authz_host authz_owner authz_user autoindex cache cgi cgid dav dav_fs dav_lock deflate dir disk_cache env expires ext_filter file_cache filter headers include info log_config logio mem_cache mime mime_magic negotiation rewrite setenvif speling status unique_id userdir usertrack vhost_alias"
        COLLECTD_PLUGINS="df interface irq load memory rrdtool swap syslog"
        ELIBC="glibc"
        GPSD_PROTOCOLS="ashtech aivdm earthmate evermore fv18 garmin garmintxt gpsclock itrax mtk3301 nmea ntrip navcom oceanserver oldstyle oncore rtcm104v2 rtcm104v3 sirf superstar2 timing tsip tripmate tnt ubx"
        INPUT_DEVICES="keyboard mouse evdev"
        KERNEL="linux"
        LCD_DEVICES="bayrad cfontz cfontz633 glk hd44780 lb216 lcdm001 mtxorb ncurses text"
        LINGUAS="en"
        PHP_TARGETS="php5-3"
        RUBY_TARGETS="ruby18"
        USERLAND="GNU"
        VIDEO_CARDS="fbdev glint intel mach64 mga neomagic nouveau nv r128 radeon savage sis tdfx trident vesa via vmware dummy v4l"
        XTABLES_ADDONS="quota2 psd pknock lscan length2 ipv4options ipset ipp2p iface geoip fuzzy condition tee tarpit sysrq steal rawnat logmark ipmark dhcpmac delude chaos account"
        Unset: CPPFLAGS, CTARGET, EMERGE_DEFAULT_OPTS, FFLAGS, INSTALL_MASK, LANG, PORTAGE_BUNZIP2_COMMAND, PORTAGE_COMPRESS, PORTAGE_COMPRESS_FLAGS, PORTAGE_RSYNC_EXTRA_OPTS

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using:

    * Zend Framework 1.11.5 (complete MVC)
    * PHP 5.3.6
    * Apache 2.2.19
    * CentOS 5.6 i686 virtuozzo on VPS
    * cPanel WHM 11.30.1 (build 4)
    * MySQL 5.1.56-log
    * Mysqli API 5.1.56

    The issue started here: http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, PHP is giving me random segmentation faults:

        [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11
        [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php

    About extensions: when I compile PHP with the "--enable-debug" flag, I have to disable this line:

        zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so"

    Otherwise, the server doesn't accept requests and I get "The connection with the server was reset". It is possible that I have to disable eAccelerator too, for the same reason. I still don't get why Apache runs with it sometimes and sometimes not:

        extension="eaccelerator.so"

    Anyway, after I get httpd running, segfaults can occur randomly. If I don't compile PHP with the "--enable-debug" flag, I can DETERMINISTICALLY get a PHP crash:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $row = $db->fetchRow("SHOW CREATE TABLE 222AFI");
            }
        }
        ?>

    BUT if I compile PHP with the "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash; I have to be making many parallel requests for a few seconds to get a crash:

        <?php
        class Admin_DbController extends Controller_BaseController
        {
            public function updateSqlDefinitionsAction()
            {
                $db = Zend_Registry::get('db');
                $tableList = $db->listTables();
                foreach ($tableList as $tableName) {
                    $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName));
                    file_put_contents(
                        DB_DEFINITIONS_PATH . '/' . $tableName . '.sql',
                        $row['Create Table'] . ';'
                    );
                }
            }
        }
        ?>

    Please notice this is the same script, but creating DDL for all tables in the database rather than for one. It seems that when PHP is heavily loaded (with extensions and with me making many parallel requests) is when I get PHP to crash.

    About starting httpd with "-X": I've tried. The thing is, it is already hard to make PHP crash with --enable-debug, and with the "-X" option (which enables only one child process) I can't do parallel requests. So I haven't been able to create a proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php

    My concrete question is: what do I do to get a coredump?

        root@GWT4 [~]# httpd -V
        Server version: Apache/2.2.19 (Unix)
        Server built:   Jul 20 2011 19:18:58
        Cpanel::Easy::Apache v3.4.2 rev9999
        Server's Module Magic Number: 20051115:28
        Server loaded:  APR 1.4.5, APR-Util 1.3.12
        Compiled using: APR 1.4.5, APR-Util 1.3.12
        Architecture:   32-bit
        Server MPM:     Prefork
          threaded:     no
            forked:     yes (variable process count)
        Server compiled with....
         -D APACHE_MPM_DIR="server/mpm/prefork"
         -D APR_HAS_SENDFILE
         -D APR_HAS_MMAP
         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
         -D APR_USE_SYSVSEM_SERIALIZE
         -D APR_USE_PTHREAD_SERIALIZE
         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
         -D APR_HAS_OTHER_CHILD
         -D AP_HAVE_RELIABLE_PIPED_LOGS
         -D DYNAMIC_MODULE_LIMIT=128
         -D HTTPD_ROOT="/usr/local/apache"
         -D SUEXEC_BIN="/usr/local/apache/bin/suexec"
         -D DEFAULT_PIDLOG="logs/httpd.pid"
         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
         -D DEFAULT_LOCKFILE="logs/accept.lock"
         -D DEFAULT_ERRORLOG="logs/error_log"
         -D AP_TYPES_CONFIG_FILE="conf/mime.types"
         -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • Bypass BIOS password set by faulty Toshiba firmware on Satellite A55 laptop?

    - by Brian
    How can the CMOS be cleared on the Toshiba Satellite A55-S1065? I have this 7-year-old laptop that has been crippled by a glitch in its BIOS:

    'A "Password =" prompt may be displayed when the computer is turned on, even though no power-on password has been set. If this happens, there is no password that will satisfy the password request. The computer will be unusable until this problem is resolved. [..] The occurrence of this problem on any particular computer is unpredictable -- it may never happen, but it could happen any time that the computer is turned on. [..] Toshiba will cover the cost of this repair under warranty until Dec 31, 2010.' -Toshiba

    As they stated, this machine is "unusable." The escape key does not bypass the prompt (nor does any other key), thus no operating system can be booted and no firmware updates can be installed. After doing some research, I found the following suggested solutions for various Toshiba Satellite models afflicted by this glitch:

    1. "Make arrangements with a Toshiba Authorized Service Provider to have this problem resolved." -Toshiba (same link). Even prior to the expiration of Toshiba's support ("repair under warranty until Dec 31, 2010"), there were reports that this solution is prohibitively expensive, with labor charges accruing even when the laptop is still under warranty, and other reports that are generally discouraging: "They were unable to fix it and the guy who worked on it said he couldn't find the jumpers on the motherboard to clear the BIOS. I paid $39 for my troubles and still have the password problem." -Steve. Since the cost of the repairs can now exceed the value of the hardware, it would seem this is a DIY solution, or a non-solution (i.e. the hardware is trash).

    2. Build a Toshiba parallel loopback by stripping and soldering the wires on a DB25 plug to connect these pins: 1-5-10, 2-11, 3-17, 4-12, 6-16, 7-13, 8-14, 9-15, 18-25. -CGSecurity. According to a list of supported models on pwcrack, this will likely not work for my Satellite A55-1065 (or for many other models of similar age). -pwcrack

    3. Disconnect the laptop battery for an extended period of time. Doesn't work; the laptop sat in a closet for several years without the battery connected, and I forgot about the whole thing for a while. The poor thing.

    4. Clear the CMOS by setting the proper jumper, by removing the CMOS (RTC) battery, or by short-circuiting a (hidden?) jumper that looks like a pair of solder marks. Various sources for various Satellite models:

       * Satellite A105: "you will see C88 clearly labeled right next the jack that the wireless card plugs into. There are two little solder squares (approx 1/16") at this location" -kerneltrap
       * Satellite 1800: "Underneath the RAM there is black sticker, peel off the black sticker and you will reveal two little solder marks which are actually 'jumpers'. Very carefully hold a flat-head screwdriver touching both points and power on the unit briefly, effectively 'shorting' this circuit." -shadowfax2020
       * Satellite L300: "Short the B500 solder pads on the system board." -Lester Escobar
       * Satellite A215: "Short the B500 solder pads on the system board." -fixya

    Clearing the CMOS could resolve the issue, but I cannot locate a jumper or a battery on this board. Nothing that looks remotely like a battery can be removed (everything is soldered). I have looked closely at the area around the memory and do not see any obvious solder pads that could be a secret jumper. Here are pictures (click for full resolution):

    [pictures of the system board]

    Where is the jumper (or solder pads) to short circuit and wipe the CMOS on this board?

    Possibly related questions:

    * Remove Toshiba laptop BIOS password?
    * Password Problem Toshiba Satellite..

    Read the article

  • Error installing pkgconfig via macports

    - by Greg K
    I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs, as I want to bind-mount some directories in my OS X file system (like you can with mount --bind on Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to install just pkgconfig. Here's the debug output from sudo port -d install pkgconfig:

        $ sudo port -d install pkgconfig
        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port pkgconfig.
        DEBUG: Requested variant i386 is not provided by port pkgconfig.
        DEBUG: Requested variant macosx is not provided by port pkgconfig.
        --->  Computing dependencies for pkgconfig
        DEBUG: Executing org.macports.main (pkgconfig)
        DEBUG: Skipping completed org.macports.fetch (pkgconfig)
        DEBUG: Skipping completed org.macports.checksum (pkgconfig)
        DEBUG: Skipping completed org.macports.extract (pkgconfig)
        DEBUG: Skipping completed org.macports.patch (pkgconfig)
        --->  Configuring pkgconfig
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (pkgconfig)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking build system type... i386-apple-darwin10.3.0
        checking host system type... i386-apple-darwin10.3.0
        checking for style of include used by make... none
        checking for gcc... /usr/bin/gcc-4.2
        checking for C compiler default output file name...
        configure: error: C compiler cannot create executables
        See `config.log' for more details.
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
            while executing
        "$procedure $targetname"
        Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking the issue here is this line?

        configure: error: C compiler cannot create executables
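    As a sanity check (a sketch under assumptions, not a known fix): compiling a trivial program with the exact compiler configure chose, /usr/bin/gcc-4.2, and roughly the same flags shows whether the toolchain itself is broken. If this fails too, config.log and the Xcode install are the places to look, not MacPorts. The file name below is illustrative:

        /* conftest.c - minimal toolchain check.
         * Compile with the compiler and flags from the log above, e.g.:
         *   /usr/bin/gcc-4.2 -O2 -arch x86_64 conftest.c -o conftest && ./conftest
         * If this also fails, the problem is the compiler setup, not the port. */
        #include <stdio.h>

        int main(void)
        {
            printf("C compiler can create executables\n");
            return 0;
        }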

  • Sharing the same `ssh-agent` among multiple login sessions

    - by intuited
    Is there a convenient way to ensure that all logins from a given user (i.e. me) use the same ssh-agent? I hacked out a script to make this work most of the time, but I suspected all along that there was some way to do it that I had just missed. Additionally, since that time there have been amazing advances in computing technology, like for example this website.

    So the goal here is that whenever I log in to the box, regardless of whether it's via SSH, in a graphical session started from gdm/kdm/etc., or at a console:

    - if my username does not currently have an ssh-agent running, one is started, the environment variables exported, and ssh-add called;
    - otherwise, the existing agent's coordinates are exported in the login session's environment variables.

    This facility is especially valuable when the box in question is used as a relay point when sshing into a third box. In this case it avoids having to type in the private key's passphrase every time you ssh in and then want to, for example, do git push or something.

    The script given below does this mostly reliably, although it botched recently when X crashed and I then started another graphical session. There might have been other screwiness going on in that instance. Here's my bad-is-good script; I source this from my .bashrc.

        # ssh-agent-procure.bash
        # v0.6.4
        # ensures that all shells sourcing this file in profile/rc scripts use the same ssh-agent.
        # copyright me, now; licensed under the DWTFYWT license.

        mkdir -p "$HOME/etc/ssh";

        function ssh-procure-launch-agent {
            eval `ssh-agent -s -a ~/etc/ssh/ssh-agent-socket`;
            ssh-add;
        }

        if [ ! $SSH_AGENT_PID ]; then
            if [ -e ~/etc/ssh/ssh-agent-socket ] ; then
                SSH_AGENT_PID=`ps -fC ssh-agent | grep 'etc/ssh/ssh-agent-socket' | sed -r 's/^\S+\s+(\S+).*$/\1/'`;
                if [[ $SSH_AGENT_PID =~ [0-9]+ ]]; then
                    # in this case the agent has already been launched and we are just attaching to it.
                    ##++ It should check that this pid is actually active & belongs to an ssh instance
                    export SSH_AGENT_PID;
                    SSH_AUTH_SOCK=~/etc/ssh/ssh-agent-socket;
                    export SSH_AUTH_SOCK;
                else
                    # in this case there is no agent running, so the socket file is left over from a graceless agent termination.
                    rm ~/etc/ssh/ssh-agent-socket;
                    ssh-procure-launch-agent;
                fi;
            else
                ssh-procure-launch-agent;
            fi;
        fi;

    Please tell me there's a better way to do this. Also please don't nitpick the inconsistencies/gaffes (e.g. putting var stuff in etc); I wrote this a while ago and have since learned many things.

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11g machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that had about 20 users when I first came on as a co-op; now it's upwards of 150. I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects/inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users, all hitting the same database with this same approach.

    Our main problem is that the database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well; essentially they fail and the results are lost. I would rather avoid rewriting a lot of them, as they are poorly written and a headache to even look at. The database itself is not overloaded; it's just that the connection overhead is too high. We open a connection, make a quick query and then drop the connection: very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The other clients are Perl scripts that get opened and closed by the distribution tool and thus aren't always running.

    What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to open; they do not need to act immediately. Some sort of queuing system? It has been suggested that I set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one per data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions you can make? Any help would be greatly appreciated.

    Sadly, I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
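    One low-effort stopgap, sketched here under assumptions rather than as a tested fix: cap how many scripts on each host can hold a connection at once by wrapping every script invocation in a shared gate, such as a named POSIX semaphore. The gate name, the limit, and the wrapped command below are all illustrative:

        /* dbgate.c - cap concurrent DB connections per host.
         * Sketch only: the semaphore name and the limit are made-up values.
         * Build: cc dbgate.c -o dbgate -lpthread   (add -lrt on older systems)
         * Usage: ./dbgate "perl simulation.pl" */
        #include <fcntl.h>
        #include <semaphore.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define GATE_NAME "/dbgate"  /* shared by every script on this host */
        #define MAX_CONNS 4          /* at most 4 concurrent connections here */

        int main(int argc, char **argv)
        {
            sem_t *gate = sem_open(GATE_NAME, O_CREAT, 0644, MAX_CONNS);
            if (gate == SEM_FAILED) {
                perror("sem_open");
                return 1;
            }

            sem_wait(gate);            /* block until a connection slot frees up */
            if (argc > 1)
                system(argv[1]);       /* wrapped script connects, queries, disconnects */
            sem_post(gate);            /* hand the slot back */

            sem_close(gate);
            return 0;
        }

    This only throttles connections per machine; it does not pool them. A relay with persistent connections (such as the suggested SQL Relay, one per data center) remains the fuller answer, since it turns many short-lived connections into a few long-lived ones.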

  • I cut-to-move my DCIM folder to external SD when an automatic Android OS update popped up before I could choose a target - cannot recover 200+ photos

    - by ZeroG
    I was downloading my Exhibit II's DCIM camera folder (with months of photos inside) to its external SD card, in order to transfer the photos to my laptop. In my overconfidence, I hurriedly chose cut-to-move (rather than copy-to-move) when KABOOM! An automatic Android OS update popped up before I could choose the target!!! I figured everything was in cache & calmly tried to go through with the update. But that was not a typically seamless event. It showed the downloading icon, but hmm… since I rooted the phone, it brought up the command line & recovery sequence. But neither Android nor I had yet downloaded any alternate custom ROM files to the internal SD to update from! So were they trying to make me unroot my phone by giving me some bogus update on the fly, or just giving me a hard time by handing me an unrooted ROM that I'd have to figure out how to root again? Yes, I know there was that blurb about overwriting a file of the same name, but I was trying to shake the darn stubborn update being forced on my phone at this precarious moment. I thought I had frozen or turned off all those auto-updates previously. Anyway, phones are small & fingers are big (sigh)...

    I tried to reboot into safe mode, but the resulting photo files were partially overwritten (200 files had names but zero bytes in them). I thought maybe they were still hung in cache or deposited somewhere else, but I have searched everywhere with file managers. Since I did not have Titanium backing up the camera, photo folder or gallery, I cannot recover 200+ photos. Dumb. You can understand my dilemma, as I am involved in the arts &, although these were just camera-phone shots, most of the photos were historic & aesthetic, or at least as to subject matter. Photo-ops don't reoccur. I have tried a couple of recovery apps from the market, like Search Duplicates & Recover, to no avail. I was only able to salvage stuff I'd sent out in messages.

    I've got several decades in computers & this is such a miserable beginner's piece of bad luck I can't believe it happened to me. They were precious photos! Yes, I have turned on Titanium since, & yes, I even tried USB-to-laptop recoveries. Being on a MacBook Pro, I'm trying androidfiletransfer.dmg, but I'd have to upgrade to Peach Sunrise to get above Android 3.0 for that app to recognize the phone via USB, & the programmer says installation zeros your data, so that pretty much toasts any secret hidden places where these photos may have been deposited. I don't want to do that & am still trying to find them. They certainly didn't make it to my external SD card. If any of you techies out there know anything, please help & thanks. Despite decades in computing, unfamiliar & ever-changing hardware or software can humble even the most seasoned veteran.

  • SharePoint Upgrade Global Nav Quirks?

    - by elorg
    We're working on a parallel install/upgrade of SharePoint. The client has WSS 2003 on some old hardware, and we've installed MOSS 2007 in a medium farm environment. They want to use this as an opportunity not just to upgrade and use the new features, but also to better organize their content and categorize it across different site collections. To accommodate this, we created a few site collections per their specifications in the new environment, and when we did an upgrade test run we hit a few... quirks.

    We made a backup of the old content database, copied it over to the new environment and restored it as a new database. We created a new web app and attached the migrated data to do an in-place upgrade in this new "test" area. This seems pretty standard - no issues. We have to do a little bit of cleanup (e.g. reset pages to the site definition, reset themes, inherit the global nav / top link bar, etc.). Once that's done, we use stsadm export/import to copy the individual sites over to their ultimate destinations in the various site collections. So far so good.

    But then we ran into one particular site with a link to an .aspx page in the top link bar in WSS 2003 that's not behaving properly after the upgrade. It's just a link to a "dashboard" .aspx page in a doc library - nothing special. It doesn't seem to matter what we do, or in what order we do it (in the "test" web app, in the destination web app, or both). In the end, this ONE site will not allow us to create a link/tab in the global nav. It can inherit the global nav just fine. We can break the inheritance just fine. But if we try to manually add a link to the top link bar - going through the steps I've done 1,000x before and clicking OK - the tab never appears. It doesn't matter whether the link points to a page within the site itself or to Google. We can migrate other sites into the same site collection and add a tab without issue. If we migrate this quirky site to another site collection, we run into the same issue. Yet in the "test" web app that we're using to upgrade the data, we can add a tab. And if we add the tab before we export/import to the final destination, the tab is lost in the process.

    Has anyone run into anything like this? Any ideas? I've tried every combination of everything I can think of and nothing works. Unless we can figure out how to get this to work, we're going to just add this tab to the global nav for the entire site collection and inherit it for this site (but that adds the link to every site that inherits it, which is both a pro & a con for them).

  • Varnish VCL - Regular Expression Evaluation

    - by Hugues ALARY
    I have been struggling with this problem for the past few days. Basically, I want to send the client browser a cookie of the form foo[sha1oftheurl]=[randomvalue] if and only if the cookie has not already been set. E.g., if a client browser requests "/page.html", the HTTP response will include a header like:

        resp.http.Set-Cookie = "foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD;"

    Then, if the same client requests "/index.html", the HTTP response will contain the header:

        resp.http.Set-Cookie = "foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY;"

    In the end, the client browser will have 2 cookies:

        foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD
        foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY

    Now, that is not complicated in itself. The following code does it:

        import digest;
        import random; ## This vmod does not exist, it's just for the example.

        sub vcl_recv() {
            ## We compute the sha1 of the requested URL and store it in req.http.Url-Sha1
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
            set req.http.random-value = random.get_rand();
        }

        sub vcl_deliver() {
            ## We create a cookie on the client browser by creating a "Set-Cookie" header.
            ## In our case the cookie we create is of the form foo[sha1]=[randomvalue],
            ## e.g. for the URL "/page.html" the cookie will be
            ## foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=[randomvalue]
            set resp.http.Set-Cookie = {""} + resp.http.Set-Cookie + "foo" + req.http.Url-Sha1 + "=" + req.http.random-value;
        }

    However, this code does not handle the case where the cookie already exists. I need to check that the cookie does not exist before generating a random value. So I thought about this code:

        import digest;
        import random;

        sub vcl_recv() {
            ## We compute the sha1 of the requested URL and store it in req.http.Url-Sha1
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
            set req.http.random-value = random.get_rand();
            set req.http.regex = "abtest" + req.http.Url-Sha1;
            if (!req.http.Cookie ~ req.http.regex) {
                set req.http.random-value = random.get_rand();
            }
        }

    The problem is that Varnish does not evaluate regular expressions at run time, which leads to this error when I try to compile:

        Message from VCC-compiler:
        Expected CSTR got 'req.http.regex'
        (program line 940), at
        ('input' Line 42 Pos 31)
        if(req.http.Cookie !~ req.http.regex) {
        ------------------------------##############---
        Running VCC-compiler failed, exit 1
        VCL compilation failed

    One could propose to solve my problem by matching on the "abtest" part of the cookie, or even on "abtest[a-fA-F0-9]{40}":

        if (!req.http.Cookie ~ "abtest[a-fA-F0-9]{40}") {
            set req.http.random-value = random.get_rand();
        }

    But this code matches any cookie starting with 'abtest' and followed by a hexadecimal string of 40 characters, which means that if a client requests "/page.html" first and then "/index.html", the condition will evaluate to true even if the cookie for "/index.html" has not been set.

    I found a bug report in which phk (or someone else) stated that computing regular expressions at run time is extremely expensive, which is why they are evaluated during compilation. Considering this, I believe there is no way of achieving what I want the way I've been trying to. Is there any way of solving this problem, other than writing a vmod? Thanks for your help! -Hugues
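    Since the cookie name is a constant prefix plus a digest computed per request, the check needed here is a literal substring match rather than a regular-expression match, and a substring match needs no compile-time pattern. A minimal sketch of that logic in plain C, outside Varnish (the header values below are illustrative):

        /* cookiecheck.c - the substring test the VCL above is trying to express.
         * Sketch only: the two strings stand in for req.http.Cookie and for
         * "abtest" + sha1(req.url). */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *cookie_header =
                "abtest4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD";
            const char *wanted =
                "abtest14fe4559026d4c5b5eb530ee70300c52d99e70d7";

            /* strstr is a plain substring search: no regex engine involved,
               so the needle can be built at run time from the URL's digest. */
            if (strstr(cookie_header, wanted) == NULL)
                printf("cookie not set: generate a random value\n");
            else
                printf("cookie already set: leave it alone\n");
            return 0;
        }

    Inside Varnish, this logic would have to live in a small vmod (or inline C on older versions), which is exactly the "other than writing a vmod" constraint the question ends on; pure VCL offers no run-time substring test here.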
