Search Results

Search found 4382 results on 176 pages for 'priority queue'.


  • Squid handling of concurrent cache misses

    - by Oliver H-H
    We're using a Squid cache to off-load traffic from our web servers, i.e. it's set up as a reverse proxy responding to inbound requests before they hit our web servers. When we get blitzed with concurrent requests for the same resource that's not in the cache, Squid proxies all the requests through to our web ("origin") servers. For us, this behavior isn't ideal: our origin servers get bogged down trying to fulfill N identical requests concurrently.

    Instead, we'd like the first request to proxy through to the origin server, the remaining requests to queue at the Squid layer, and all of them to be fulfilled by Squid when the origin server has responded to that first request. Does anyone know how to configure Squid to do this? We've read through the documentation multiple times and thoroughly web-searched the topic, but can't figure out how to do it.

    We use Akamai too and, interestingly, this is its default behavior. (However, Akamai has so many nodes that we still see lots of concurrent requests in certain traffic-spike scenarios, even with Akamai's super-node feature enabled.) This behavior is clearly configurable for some other caches; e.g., the Ehcache documentation offers the option "Concurrent Cache Misses: A cache miss will cause the filter chain, upstream of the caching filter to be processed. To avoid threads requesting the same key to do useless duplicate work, these threads block behind the first thread." Some folks call this behavior a "blocking cache," since the subsequent concurrent requests block behind the first request until it's fulfilled or timed out.

    Thanks for looking over my noob question! Oliver
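
    The behavior described is what Squid calls "collapsed forwarding". A minimal squid.conf sketch follows; this assumes a Squid version that actually ships the directive (2.6/2.7, and again in 3.5+), so treat it as a pointer rather than a confirmed fix for this setup:

        # Collapse concurrent misses: the first request fetches the object,
        # identical requests wait and are then answered from the cache.
        collapsed_forwarding on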


  • show UIAlertView when In app purchase is in progress

    - by edie
    Hi... I've added a UIAlertView that has a UIActivityIndicatorView as a subview in my application. This alert view only shows while a purchase is in progress. I've set up my alert view this way in my StoreObserver:

        - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions {
            for (SKPaymentTransaction *transaction in transactions) {
                switch (transaction.transactionState) {
                    case SKPaymentTransactionStatePurchasing:
                        [self stillPurchasing]; // this creates an alertView and shows it
                        break;
                    case SKPaymentTransactionStatePurchased:
                        [self completeTransaction:transaction];
                        break;
                    case SKPaymentTransactionStateFailed:
                        [self failedTransaction:transaction];
                        break;
                    case SKPaymentTransactionStateRestored:
                        [self restoreTransaction:transaction];
                        break;
                    default:
                        break;
                }
            }
        }

        - (void)stillPurchasing {
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"In App Purchase"
                                                            message:@"Processing your purchase..."
                                                           delegate:nil
                                                  cancelButtonTitle:nil
                                                  otherButtonTitles:nil];
            self.alertView = alert;
            [alert release];

            UIActivityIndicatorView *ind = [[UIActivityIndicatorView alloc]
                initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
            self.indicator = ind;
            [ind release];

            [self.indicator startAnimating];
            [self.alertView addSubview:self.indicator];
            [self.alertView show];
        }

    When I tap the buy button, this UIAlertView shows together with my UIActivityIndicator. But when the transaction completes, the alert view stays on top of the view; the indicator is the only thing removed. My question is how, or where, I should release the alertView. I've added these commands to release my alertView and indicator in these cases:

        case SKPaymentTransactionStatePurchased:
        case SKPaymentTransactionStateFailed:
        case SKPaymentTransactionStateRestored:
            [self.indicator stopAnimating];
            [self.indicator removeFromSuperview];
            [self.alertView release];
            [self.indicator release];

    I've only added the alertView to show that the purchase is still in progress. Any suggestion for giving users this kind of feedback would be appreciated. Thanks
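
    A hedged sketch of the likely fix (manual reference counting, matching the question's code): releasing a view never removes it from the screen, so the alert has to be dismissed, and retained properties are usually released by assigning nil rather than calling release on them:

        case SKPaymentTransactionStatePurchased:
        case SKPaymentTransactionStateFailed:
        case SKPaymentTransactionStateRestored:
            [self.indicator stopAnimating];
            [self.indicator removeFromSuperview];
            // Take the alert off screen first, then let the setters drop ownership.
            [self.alertView dismissWithClickedButtonIndex:0 animated:YES];
            self.indicator = nil;
            self.alertView = nil;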


  • Is it possible to have anonymous purchases with ubercart without the creation of a new user account?

    - by DKinzer
    I would like anonymous users to be able to purchase a product without a new account being created when they purchase it. Unfortunately the creation of a new user seems to be very tightly integrated into Ubercart's ordering system, and because the order module is part of the Ubercart core, its behavior cannot be overridden easily.

    One possibility is to supply Ubercart with a bogus anonymous account: hook into hook_form_alter at $form_id == 'uc_cart_checkout_review_form', because this is where Ubercart first associates the $order with a uid, and add our submit function to the queue:

        // Find out if the user is anonymous:
        global $user;
        if ($user->uid == 0) {
            // Load a previously created anonymous user account
            $anonymous_user = mymodule_get_anonymous_user();

            // Load the order and assign our anonymous user's uid to it
            $order = uc_order_load($_SESSION['cart_order']);
            $order->uid = $anonymous_user->uid;
            uc_order_save($order);

            // Assign the global user our anonymous user's uid
            $user->uid = $anonymous_user->uid;
        }

    But what I really need is an anonymous purchase without being forced to create a new account, so this solution does not work for me. Apart from which, this technique will automatically log the anonymous user into our bogus anonymous account, which is definitely something I don't want.

    Is there a better, non-duct-tape way around the creation of a new user account for anonymous purchases in Ubercart? And FYI, at this point I'm kind of stuck with Ubercart, so I cannot use something else. Thanks! D
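
    For reference, a sketch of the wiring the poster describes (Drupal 6 / Ubercart 2.x assumed; mymodule_get_anonymous_user() is the question's own helper, not an Ubercart API):

        function mymodule_form_alter(&$form, &$form_state, $form_id) {
            if ($form_id == 'uc_cart_checkout_review_form') {
                // Run our handler after Ubercart's own submit handlers.
                $form['#submit'][] = 'mymodule_assign_anonymous_order';
            }
        }

        function mymodule_assign_anonymous_order($form, &$form_state) {
            global $user;
            if ($user->uid == 0) {
                $anonymous_user = mymodule_get_anonymous_user();
                $order = uc_order_load($_SESSION['cart_order']);
                $order->uid = $anonymous_user->uid;
                uc_order_save($order);
            }
        }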


  • Windows Messages Bizarreness

    - by jameszhao00
    Probably just a gross oversight of some sort, but I'm not receiving any WM_SIZE messages in the message loop. However, I do receive them in the WndProc. I thought the windows loop gave messages out to WndProc?

        LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
        {
            switch (message)
            {
                // this message is read when the window is closed
                case WM_DESTROY:
                {
                    // close the application entirely
                    PostQuitMessage(0);
                    return 0;
                }
                break;

                case WM_SIZE:
                    return 0;
                    break;
            }

            printf("wndproc - %i\n", message);

            // Handle any messages the switch statement didn't
            return DefWindowProc(hWnd, message, wParam, lParam);
        }

    ... and now the message loop...

        while (TRUE)
        {
            // Check to see if any messages are waiting in the queue
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                // translate keystroke messages into the right format
                TranslateMessage(&msg);

                // send the message to the WindowProc function
                DispatchMessage(&msg);

                // check to see if it's time to quit
                if (msg.message == WM_QUIT)
                {
                    break;
                }
                if (msg.message == WM_SIZING)
                {
                    printf("loop - resizing...\n");
                }
            }
            else
            {
                // do other stuff
            }
        }
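
    This is actually expected behavior: WM_SIZE is a sent (non-queued) message, delivered straight to the window procedure, so PeekMessage never returns it to the loop, which only ever sees posted messages. If the loop genuinely needs to observe resizes, one sketch of a workaround is to re-post a custom application message from WndProc:

        #define WM_APP_RESIZED (WM_APP + 1)

        // inside WndProc:
        case WM_SIZE:
            // PostMessage puts this in the queue, where PeekMessage can see it.
            PostMessage(hWnd, WM_APP_RESIZED, wParam, lParam);
            return 0;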


  • DAL Layer : EF 4.0 or Normal Data access layer with Stored Procedure

    - by Harryboy
    Hello Experts,

    Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, which means patterns like Data Mapper and Repository will definitely come into the picture.

    I need to design the DAL layer considering the factors below, in priority order:

    1. Speed of development with above-average performance
    2. Maintenance
    3. Future support and stability of the technology
    4. Performance

    Limitations:

    1. As we strictly need to go ahead with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
    2. We can use any code-generation tool (it should be open source or very cheap), but it should only generate .NET code, so there is no licensing issue on a per-copy basis.

    Questions: I read many articles about EF 4.0; at the outset it looks like it still lacks features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code-generation tool like CodeSmith or whichever you feel is best?

    I also need to estimate how long it would take to port the application from EF 4.0 to ADO.NET if, in the future, we get stuck on some EF 4.0 feature or hit a serious performance issue; and, in the reverse case, how long it would take to switch from ADO.NET to EF 4.0 if we choose ADO.NET now.

    Lastly, as I was going through the articles I found that the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts on this, and please advise on the questions above.
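
    To illustrate the code-only idea the poster mentions, a hedged sketch: keep the domain POCO and the repository contract free of EF types, so the class behind the interface can be backed by EF 4.0 or by raw ADO.NET without touching callers (all names invented):

        // Plain POCO: no EF base class, no attributes.
        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        // The service layer depends only on this contract.
        public interface IOrderRepository
        {
            Order GetById(int id);
            void Save(Order order);
        }

    Porting then means writing a second IOrderRepository implementation, not rewriting the domain.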


  • Proving that the distance values extracted in Dijkstra's algorithm are non-decreasing?

    - by Gail
    I'm reviewing my old algorithms notes and have come across this proof. It was from an assignment I had, and I got it correct, but I feel that the proof certainly lacks rigor. The question is to prove that the distance values taken from the priority queue in Dijkstra's algorithm form a non-decreasing sequence. My proof goes as follows:

    Proof by contradiction. First, assume that we pull a vertex from Q with d-value 'i', and that the next vertex we pull has d-value 'j'. When we pulled i, we finalised its d-value and computed the shortest path from the start vertex, s, to i. Since we have positive edge weights, it is impossible for our d-values to shrink as we add vertices to our path. If, after pulling i from Q, we pull j with a smaller d-value, then we may not have a shortest path to i, since we may be able to reach i through j. However, we have already computed the shortest path to i without checking that possible path, so we no longer have a guaranteed shortest path. Contradiction.
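
    For comparison, the standard direct argument (a sketch added here, not part of the original assignment) makes the invariant explicit:

        When $u$ is extracted, $d(u) = \min_{v \in Q} d(v)$, so every key still in
        $Q$ satisfies $d(v) \ge d(u)$. Any later relaxation sets
        \[
            d(v) = d(x) + w(x,v) \ge d(x) \ge d(u), \qquad \text{since } w(x,v) \ge 0,
        \]
        so every vertex extracted after $u$ leaves the queue with a key of at
        least $d(u)$, and the sequence of extracted keys is non-decreasing.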


  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result is not None:
                    return result

                result = ShortReview.objects.all().order_by("-created_at")

                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects of at most 1 MB. This one was bigger than 1 MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that."

    What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the Django sitemaps framework to restrict a single sitemap's size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to allow memcached to store bigger files?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All of those seem very low level, and I'm wondering if an obvious solution exists...
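
    One thing worth checking before hacking the framework (a hedged pointer; verify it exists in your Django version): the per-page cap is exposed as the Sitemap.limit attribute, defaulting to 50,000, so shrinking pages below memcached's 1 MB ceiling may be a one-line change:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7
            limit = 10000  # paginate earlier than the 50,000 default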


  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using rakefiles deeper in the tree; i.e. the top-level rakefile says how to build the project (the big picture) and the lower-level ones build a specific module (the local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and company should be built using the instructions in /Source/Module/Rakefile, while /Rakefile understands the dependencies between modules.

    I don't care if it uses multiple rake processes (à la recursive make) or just creates separate build environments. Either way it should be self-contained enough to be processed by a queue, so that non-dependent modules can be built simultaneously.

    The problem is: how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all rakefiles have a :default.)

    Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Does anyone have a clue on how to properly do something like a recursive rake?
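
    A minimal sketch of the subprocess route, with the caveat the poster raises handled by hand (the module list, its ordering, and the explicit task forwarding are all assumptions):

        # Top-level Rakefile
        MODULES = %w[Source/ModuleA Source/ModuleB]  # dependency order

        task :default do
          MODULES.each do |dir|
            # A child rake inherits nothing the way $(MAKE) passes $(MAKEFLAGS),
            # so forward the requested task names explicitly.
            Dir.chdir(dir) { sh 'rake', *Rake.application.top_level_tasks }
          end
        end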


  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles, each sending ~250 bytes every minute. The data contains the GPS location and everything from the CAN bus (all the data we can read from the vehicle's computer and dashboard). Data is sent over GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k.

    I see 3 main blocks:

    1. Multithreaded Socket Server (MSS) - I have it. The MSS stores received data in a queue (using NServiceBus).
    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/SMS texts). A rule example: among the received bytes there is the current speed; when the speed is above 120, show an alert in the web application for specified users, send an e-mail, and send an SMS text. (There can be more than one instance of the RPS.)
    3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.

    I'm looking for advice on how to design the communication between the RPS and the web application. Some questions: Should the web application and the RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) on the RPS? And then, how do I send changed rules to the RPS?

    I'm trying to decouple these blocks as much as possible. I'm planning to create a separate instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.
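
    On the "how to send changed rules" question, one decoupled option (an illustration, not from the post) is to publish rule changes over the same NServiceBus queue instead of sharing the web application's domain model; the RPS then only depends on a small message contract (all names invented):

        // Message contract shared by the web app and the RPS.
        public class RuleChanged
        {
            public Guid RuleId { get; set; }
            public string Expression { get; set; }  // e.g. "Speed > 120"
            public string[] Actions { get; set; }   // "web-alert", "email", "sms"
        }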


  • Radix Sort in Python [on hold]

    - by Steven Ramsey
    I could use some help. How would you write a program in Python that implements a radix sort? Here is some info:

    A radix sort for base-10 integers is based on sorting punch cards, but it turns out the sort is very efficient. The sort utilizes a main bin and 10 digit bins. Each bin acts like a queue and maintains its values in the order they arrive. The algorithm begins by placing each number in the main bin. Then it considers the ones digit of each value. The first value is removed and placed in the digit bin corresponding to its ones digit. For example, 534 is placed in digit bin 4 and 662 is placed in digit bin 2. Once all the values in the main bin are placed in the corresponding digit bin for ones, the values are collected from bin 0 to bin 9 (in that order) and placed back in the main bin. The process continues with the tens digit, the hundreds, and so on. After the last digit is processed, the main bin contains the values in order.

    Use randint, found in random, to create random integers from 1 to 100000. Use a list comprehension to create lists of varying sizes (10, 100, 1000, 10000, etc.). To use indexing to access the digits, first convert the integer to a string. For this sort to work, all numbers must have the same number of digits; to zero-pad integers with leading zeros, use the string method str.zfill(). Once the main bin is sorted, convert the strings back to integers.

    I'm not sure how to start this. Any help is appreciated. Thank you.
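
    A sketch of the bin-based sort the assignment describes, following its own hints (zfill for padding, ones digit first, bins collected in order 0-9):

        from random import randint

        def radix_sort(values):
            width = len(str(max(values)))                 # digits in the largest value
            main = [str(v).zfill(width) for v in values]  # zero-padded strings
            for pos in range(width - 1, -1, -1):          # ones, tens, hundreds, ...
                bins = [[] for _ in range(10)]
                for s in main:                            # each bin keeps arrival order
                    bins[int(s[pos])].append(s)
                main = [s for b in bins for s in b]       # collect bin 0 .. bin 9
            return [int(s) for s in main]

        data = [randint(1, 100000) for _ in range(1000)]
        assert radix_sort(data) == sorted(data)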


  • Setting up Mercurial/TortoiseHg to work with UltraCompare

    - by Tim Pietzcker
    Hi, I'm trying to get my favorite Windows diff/merge tool, UltraCompare (V7.00), to work with Mercurial/TortoiseHg. I have set up UltraCompare in my Mercurial.ini like this (only the relevant bits shown):

        [merge-tools]
        UltraCompare.executable = C:\Programme\IDM Computer Solutions\UltraCompare\uc.com
        UltraCompare.args = $base $local $other
        UltraCompare.priority = 1
        UltraCompare.gui = True
        UltraCompare.binary = True
        UltraCompare.checkconflicts = True
        UltraCompare.checkchanged = True

    However, the three-way merge fails. The path names get messed up if the path to the repository being merged into contains a space. I have done some more testing, and I've found out (using Process Explorer) that uc.com is called with a broken command line if there is a space in the repository's path. Compare

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.exe" " "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.akr6au" "E:\Eigene Dateien\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.b92442"

    and

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.com" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.e7vryp" "E:\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.u_qxme"

    There is an extraneous " after the path of the executable in the first example, but not in the second (which works fine). To me, it seems as if UltraCompare is doing everything right and Mercurial/TortoiseHg is passing a defective command line to it. Would you say so, too? Is there a workaround? I've just updated to Mercurial 1.5/TortoiseHg 1.0, and the problem persists. Support for other merge tools (Beyond Compare and others) has been added, sadly not UltraCompare...


  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello Lads,

    I am trying to develop a complex textual search engine. I have thousands of textual pages from many books. I need to search for pages that satisfy complex logical criteria. These criteria can contain virtually any combination of the following:

    A: Full words.
    B: Word roots (similar to stems; i.e. all words with certain key letters).
    C: Word templates (in some languages, roots are filled into certain templates to form various parts of speech, such as adjectives and past/present verbs).
    D: Logical connectives: AND/OR/XOR/NOT/IF/IFF, and parentheses to state priorities.

    Now, would it be faster to have the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples? With indexes we can speed up searching for individual words/roots/templates; however, it gets tricky as we introduce logical connectives into the query. I thought of doing the following steps in such cases:

    1: Separately search for each individual word/root/template in the specified query.
    2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective.

    For example, if we are searching for "he AND (is OR was)":

    1: We search for "he", "is" and "was" separately and get a result list for each word.
    2: Merge the result lists of "is" and "was" using the merging function OR-MERGE.
    3: Merge the merged result list from the OR-MERGE function with the one for "he" using the merging function AND-MERGE.

    The result of step 3 is then returned as the result of the specified query.

    What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
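
    As a concrete illustration of step 2 (a sketch; posting lists are assumed to be sorted lists of page/location keys, which the proposed index would provide), AND-MERGE is the classic sorted-list intersection, and OR-MERGE is the analogous union:

        def and_merge(a, b):
            """Intersect two sorted posting lists."""
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] == b[j]:
                    out.append(a[i])
                    i += 1
                    j += 1
                elif a[i] < b[j]:
                    i += 1
                else:
                    j += 1
            return out

    Each merge is linear in the lists' lengths, which is one reason the index-plus-merge design usually beats scanning full text with regular expressions.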


  • fitnesse test framework, arbitrary properties for test and queries/test runs based on them?

    - by Marcel
    Hi, our testers have a requirement to store multiple properties for a test that are not present in the "properties", e.g. they want to store priority, a description (not in the wiki page itself), and so on. They don't want to use the tagging mechanism. Is there a way to store any kind of new XML node in the properties.xml for a test? These properties should then be used to:

    - query the fields via the search screen
    - run tests based on the "SuiteResponder": ?suite=xxx&TAGx=abc&TAGy=cde
    - be returned by the "?properties" responder
    - appear in the test history of the test run

    In essence they want to store any kind of "meta" information in the properties.xml and work with it in all kinds of ways: search, run, etc. Does anybody here know if there is already something available in that direction? If not, I think we have to "pimp" these features into FitNesse to make our testers happy.

    Thanks a lot, any help appreciated, marcel

    PS: I've also posted the question in the Yahoo FitNesse group.


  • Passing Variable Length Arrays to a function

    - by David Bella
    I have a variable-length array that I am trying to pass into a function. The function will shift the first value off and return it, and move the remaining values over to fill in the missing spot, putting, let's say, a -1 in the newly opened spot. I have no problem passing an array declared like so:

        int framelist[128];
        shift(framelist);

    However, I would like to be able to use an array declared in this manner:

        int *framelist;
        framelist = malloc(size * sizeof(int));
        shift(framelist);

    I can populate the arrays the same way outside the function call without issue, but as soon as I pass them into the shift function, the one declared in the first case works fine, while the one in the second case immediately gives a segmentation fault. Here is the code for the shift function, which doesn't do anything except try to grab the value from the first slot of the array:

        int shift(int array[])
        {
            int value = array[0];
            return value;
        }

    Any ideas why it won't accept the second array? I'm still new to C, so if I am doing something fundamentally wrong, let me know.
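
    A side note and a sketch: the second array is heap-allocated rather than a C99 variable-length array, and shift() receives a plain int* in both cases, so the two calls are equivalent whenever the pointer is valid. That makes an unchecked malloc failure (or size being 0) the usual suspect for the crash. A self-contained version with the missing checks:

        #include <stdio.h>
        #include <stdlib.h>

        int shift(int array[])        /* same as: int shift(int *array) */
        {
            return array[0];
        }

        int main(void)
        {
            size_t size = 128;
            int *framelist = malloc(size * sizeof *framelist);
            if (framelist == NULL) {  /* a NULL here would explain the segfault */
                perror("malloc");
                return EXIT_FAILURE;
            }
            framelist[0] = 42;
            printf("%d\n", shift(framelist));
            free(framelist);
            return 0;
        }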


  • Singleton Issues in iOS http client

    - by Andrew Lauer Barinov
    I implemented an HTTP client in iOS that is meant to be a singleton. It is a subclass of AFNetworking's excellent AFHTTPClient. My access method is pretty standard:

        + (LBHTTPClient *)sharedHTTPClient
        {
            static dispatch_once_t once = 0;
            static LBHTTPClient *sharedHTTPClient = nil;
            dispatch_once(&once, ^{
                sharedHTTPClient = [[LBHTTPClient alloc] initHTTPClient];
            });
            return sharedHTTPClient;
        }

        // my init method
        - (id)initHTTPClient
        {
            self = [super initWithBaseURL:[NSURL URLWithString:kBaseURL]];
            if (self) {
                // Subclass-specific initializations
            }
            return self;
        }

    Super's init method is very long, and can be found here. However, the problem I experience is that when the dispatch_once queue runs, the app locks up and becomes unresponsive. When I step through the code, I notice that it locks on dispatch_once and then remains frozen. Is this something to do with the main thread locking down? Why does it stay locked like that? Also, none of the HTTP client methods fire. Stepping through the code some more, I find this line in dispatch/once.h is where the app locks down:

        DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
        void
        _dispatch_once(dispatch_once_t *predicate, dispatch_block_t block)
        {
            if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
                dispatch_once(predicate, block); // App locks here
            }
        }
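
    A hedged guess at the classic cause of exactly this freeze: dispatch_once is not reentrant, so if anything reached from initHTTPClient calls back into +sharedHTTPClient before the block has returned, the nested call waits on the once-token forever. A sketch of the pattern to look for (setDefaultHeader:value: stands in for any AFNetworking 1.x setup call):

        - (id)initHTTPClient
        {
            self = [super initWithBaseURL:[NSURL URLWithString:kBaseURL]];
            if (self) {
                // Deadlocks: +sharedHTTPClient has not returned yet, so the
                // nested dispatch_once blocks on its own token.
                [[LBHTTPClient sharedHTTPClient] setDefaultHeader:@"Accept"
                                                            value:@"application/json"];
            }
            return self;
        }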


  • Overriding Node Paths

    - by fighella
    Can a PAGE override the priority of a NODE? I want to use the power of "Pages" to override my links from the nodes.

    For example: a node's link is /content/202/hello-world. I want to use my Panels to use the URL to create a "PAGE". The panels can use the arguments from the URL to create a pretty cool page around the content of a node. So the PAGE path I've made is /content/%argument1/%title.

    I need the links created to the node from the node itself to go to this panel page and not to the content created by the node on its own, so I make the path alias do the same as the PAGE settings: /content/nid/title. A hand-typed link to this PAGE works fine when I don't have this as the path alias; it does exactly what I need. But as soon as I make this the path alias, it's like it goes to the single NODE before it goes to the PAGE.

    Anyone have any clues? Is there an order in which Drupal looks for the correct URL? I'm sure I have done this before. JD


  • Model login constraints based on time

    - by DaDaDom
    Good morning,

    For an existing web application I need to implement "time-based login constraints". This means that for each user, later maybe each group, I can define timeslots when they are (not) allowed to log in to the system. As all data for the application is stored in database tables, I need to somehow model this idea that way. My first approach, which I will try to explain here:

    - Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    - For each top-level node, create subnodes with a finer timespan, like "monday", "tuesday", ...
    - Below that, create an "hour" level: 0, 1, 2, ..., 23. No further detail is necessary.
    - Set every member to "allowed" by default.
    - For every member of the system, create a 1:n relationship member:timeslots which defines constraints, e.g. member A may have the rules A:monday-forbidden and A:tuesday-forbidden.
    - Do a depth-first search at every login and check whether the member has a constraint. Why a depth-first search? A member may have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed.

    For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is true if the member has information set for "finer" timeslots, but that's a second step.

    Is this model in principle a good idea? Are there existing models? Thanks.
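
    A hedged sketch of one relational shape for the member:timeslots rules described above (all names invented; the finest matching rule wins):

        CREATE TABLE login_timeslot (
            member_id INT         NOT NULL,
            slot_path VARCHAR(64) NOT NULL,  -- 'monday', 'monday/10', 'public-holiday'
            allowed   BOOLEAN     NOT NULL,
            depth     INT         NOT NULL   -- 0 = category, 1 = day, 2 = hour
        );

        -- At login, fetch the member's rules matching the current moment and
        -- apply the deepest one; the default is "allowed" when no row matches.
        SELECT allowed FROM login_timeslot
         WHERE member_id = 42
           AND slot_path IN ('monday', 'monday/10')
         ORDER BY depth DESC
         LIMIT 1;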


  • Closing Connections on asynchronous messaging in JMS

    - by The Elite Gentleman
    Hi everyone! I have created a JMS wrapper (similar to Spring's JmsTemplate, since I'm not using Spring) and I was wondering: if I set up asynchronous messaging, when is a good time to close connections and JMS-related resources (so that the connection factory in the resource pool becomes available again)? Thanks. Here's the source code for receiving JMS messages:

        public Message receive() throws JMSException {
            QueueConnection connection = null;
            QueueSession session = null;
            QueueReceiver consumer = null;
            try {
                connection = factory.createQueueConnection();
                if (connection != null && getExceptionListener() != null) {
                    connection.setExceptionListener(getExceptionListener());
                }
                session = connection.createQueueSession(isSessionTransacted(), getAcknowledgeMode());
                consumer = session.createReceiver(queue);
                if (getMessageListener() != null) {
                    consumer.setMessageListener(getMessageListener());
                }
                // Begin
                connection.start();
                if (getMessageListener() == null) {
                    return null;
                }
                return receive(session, consumer);
            } catch (JMSException e) {
                logger.error(e.getLocalizedMessage(), e);
                throw e;
            } finally {
                JMSUtil.closeMessageConsumer(consumer);
                JMSUtil.closeSession(session, false);      // false = don't commit
                JMSUtil.closeConnection(connection, true); // true = stop before close
            }
        }

    As you can see, if getMessageListener() != null then it is applied to the MessageConsumer. Am I doing this correctly? The same approach has also been taken for the JMS topic.
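
    For what it's worth, a sketch of the usual asynchronous lifecycle (an assumption about intent, reusing the question's own JMSUtil helpers): the connection, session, and consumer have to outlive receive(), because the finally block above tears the listener down before any message can arrive. Asynchronous consumers are typically kept as fields and closed once, at shutdown:

        // Called when the application (or this wrapper) is stopped.
        public void shutdown() throws JMSException {
            JMSUtil.closeMessageConsumer(consumer);
            JMSUtil.closeSession(session, false);      // false = don't commit
            JMSUtil.closeConnection(connection, true); // true = stop before close
        }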


  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using auto layout (the Lion feature)? I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. The fittingSize of the document view reports the correct size.

    An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to be updated for auto layout.

    Edited to clarify: the problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method.

    What I hoped to obtain was that the document view frame would automatically be adjusted when its fittingSize changes, and the scrollbars updated accordingly. NSPopover does something similar, provided that the subviews of the content controller's view have their constraints set right and various content-hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).


  • XSD any element

    - by ofer shwartz
    Hi! I'm trying to create a list where some of the elements are defined and some are not, with no priority given to order. I tried it this way, with an any element:

        <?xml version="1.0"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:complexType name="object" mixed="true">
            <xs:choice>
              <xs:element name="value" minOccurs="1" maxOccurs="1">
                <xs:simpleType>
                  <xs:restriction base="xs:integer">
                    <xs:enumeration value="1"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:element>
              <xs:any namespace="##any" processContents="skip"/>
            </xs:choice>
          </xs:complexType>
          <xs:element name="object" type="object"/>
        </xs:schema>

    And it gives me this error:

        :0:0: error: complex type 'object' violates the unique particle attribution rule in its components 'value' and '##any'

    Can someone help me solve the problem? Ofer
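
    A hedged note on the error: with namespace="##any" the wildcard can match a <value> element just as well as the declared <value> particle can, and that ambiguity is exactly what the unique particle attribution rule forbids. The common workaround is to restrict the wildcard so the two can no longer overlap:

        <!-- ##other matches only elements from a different namespace, so the
             wildcard can no longer compete with the declared <value> element -->
        <xs:any namespace="##other" processContents="skip"/>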


  • Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to

    - by Simon B.
    I have a folder with ~10,000 subfolders. Can any Linux API or tool watch for any change in any folder below e.g. /SharedRoot, or do I have to set up inotify for each folder? (i.e. I lose if I want to do this for 10k+ folders.) I guess yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345

    My problem: I need to keep folders time-sorted with the most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are OK. On opening a file (typically MS Excel) and closing it again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later go and look for the newest file in its folder, since the file date of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also, in case several files from the same folder are used/created, it makes sense to buffer the timestamping of that folder's parents, to at least get a bunch of updates collapsed into one delayed update.

    I'm looking for a Linux solution. I have some code that can be run on a Windows server; most of the queuing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb

    Available APIs:

    - The relative of inotify on Windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx
    - Samba? Patching the Samba source is a possibility, but perhaps there are already hooks available?
    - Other possibilities, like client-side (various Windows versions) spying on file activities in order to update folders recursively?
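
    For the watching half, a sketch using inotify-tools rather than the raw API (inotifywait walks the tree and registers one watch per directory for you; the kernel limit usually needs raising for ~10k directories, and close_write matches the "act only after the file is closed" requirement):

        sudo sysctl fs.inotify.max_user_watches=524288
        inotifywait -m -r -e close_write --format '%w%f' /SharedRoot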


  • Maximizing the number of true concurrent / parrallel http requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to make a lot of small HTTP requests to the server. I believe that when the number of allowed concurrent requests is exceeded, the subsequent requests are put in a queue. I am also aware that SL 4 has both an HTTP browser stack and an HTTP client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT.

    Also, I think I read somewhere that the number of concurrent requests is limited per domain, not overall, but I'm not sure whether this applies to the HTTP client stack. That would mean you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. I even believe that subdomains are considered different, so you can also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time.

    I have ownership of the services and domain names, so I could easily set up subdomains for my services. Given those considerations, I'm trying to maximize the number of concurrent HTTP requests to my server. A few questions:

    - Is it possible to use both stacks at the same time?
    - Is the subdomain/domain story true for both stacks? Neither?

    If so, that would mean I could potentially have a number of concurrent requests equal to:

        (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS

    which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with these things. Thank you.


  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver generates all dynamic content as XML, and we're using semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure, along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files.

    Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently:

    - Using ETags with the XSL and image files, using the modified time and size to generate the ETag
    - Setting Cache-Control: no-cache on the XML and JSON responses

    As I said, everything works dandy until a firmware update, when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari, but have had some problems with IE.

    I know one solution would be simply to rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it.

    Note: there is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags.

    What's the best practice I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.
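
    One middle ground worth noting (an addition, not from the post): keep the filenames but version every reference with the firmware build, since caches treat a changed query string as a brand-new URL and no on-unit clock is involved:

        <?xml-stylesheet type="text/xsl" href="/view.xsl?fw=1.2"?>
        <!-- and in generated markup: -->
        <img src="/logo.png?fw=1.2"/>

    The semi-static files can then be served with a far-future Expires header, and a firmware update invalidates everything at once by bumping the fw parameter stamped into the generated XML.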


  • Double buffering C#

    - by LmSNe
    Hey, I'm trying to implement the following method:

        void Ball::DrawOn(Graphics g);

    The method should draw all previous locations (stored in a queue) of the ball and, finally, the current location. I don't know if it matters, but I draw the previous locations using g.DrawEllipse(...) and the current location using g.FillEllipse(...).

    The problem is that, as you can imagine, there is a lot of drawing to be done, and thus the display starts to flicker a lot. I searched for a way to double buffer, but all I could find were these two ways:

        1) System.Windows.Forms.Control.DoubleBuffered = true;
        2) SetStyle(ControlStyles.DoubleBuffer | ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint, true);

    When trying the first, I get an error explaining that the DoubleBuffered property is inaccessible from this method due to its protection level, and I can't figure out how to use the SetStyle method. Is it possible to double buffer at all when all I have access to is the Graphics object I get as input to the method?

    Thanks in advance,
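
    A hedged sketch of the usual resolution (it assumes you can subclass the control being painted, which the question doesn't confirm): DoubleBuffered and SetStyle are protected, so they are accessible only from inside a Control-derived class, typically in its constructor; DrawOn then keeps receiving the Graphics handed to it by that control's Paint event:

        using System.Windows.Forms;

        class BallPanel : Panel
        {
            public BallPanel()
            {
                // The exact style combination from the question, now legal
                // because we're inside the derived class.
                SetStyle(ControlStyles.DoubleBuffer
                       | ControlStyles.UserPaint
                       | ControlStyles.AllPaintingInWmPaint, true);
            }
        }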


  • BinaryWriterWith7BitEncoding & BinaryReaderWith7BitEncoding

    - by Tim
    Mr. Ayende wrote in his latest blog post about an implementation of a queue. In the post he's using two magical files: BinaryWriterWith7BitEncoding and BinaryReaderWith7BitEncoding.

    BinaryWriterWith7BitEncoding can write both int and long? using the following method signatures:

        void WriteBitEncodedNullableInt64(long? value)
        void Write7BitEncodedInt(int value)

    and BinaryReaderWith7BitEncoding can read the values written using the following method signatures:

        long? ReadBitEncodedNullableInt64()
        int Read7BitEncodedInt()

    So far I've only managed to find a way to read the 7-bit-encoded int:

        protected int Read7BitEncodedInt()
        {
            int value = 0;
            int byteval;
            int shift = 0;
            while (((byteval = ReadByte()) & 0x80) != 0)
            {
                value |= ((byteval & 0x7F) << shift);
                shift += 7;
            }
            return (value | (byteval << shift));
        }

    I'm not too good with byte shifting - does anybody know how to read and write the 7-bit-encoded long?, and write the int?
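
    For the int half of the question, a sketch of the inverse of the reader above (it mirrors the framework's BinaryWriter.Write7BitEncodedInt and assumes a BinaryWriter subclass, so Write(byte) is available):

        protected void Write7BitEncodedInt(int value)
        {
            uint v = (uint)value;        // unsigned, so >> shifts in zeros
            while (v >= 0x80)
            {
                Write((byte)(v | 0x80)); // low 7 bits, continuation bit set
                v >>= 7;
            }
            Write((byte)v);              // final byte: high bit clear
        }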

