Search Results

Search found 8224 results on 329 pages for 'sometimes'.

Page 276/329

  • Snow Leopard mounted directory changes permissions sporadically

    - by Galen
    I have an SMB-mounted directory: /Volumes/myshare This was mounted via Finder "Connect to Server..." with smb://myservername/myshare Everything good so far. However, when I try to access the directory via PHP (running under Apache), it fails with permission denied about 10% of the time. By this I mean that repeated accesses to my page sometimes result in a failure. My PHP page looks like: <?php $cmd = "ls -la /Volumes/ 2>&1"; exec($cmd, $execOut, $exitCode); echo "<PRE>EXIT CODE = $exitCode<BR/>"; foreach($execOut as $line) { echo "$line <BR/>"; } echo "</PRE>"; ?> When it succeeds it looks like: EXIT CODE = 0 total 40 drwxrwxrwt@ 4 root admin 136 Jun 14 12:34 . drwxrwxr-t 30 root admin 1088 Jun 4 13:09 .. drwx------ 1 galen staff 16384 Jun 14 09:28 myshare lrwxr-xr-x 1 root admin 1 Jun 11 16:05 galenhd -> / When it fails it looks like: EXIT CODE = 1 ls: myshare: Permission denied total 8 drwxrwxrwt@ 4 root admin 136 Jun 14 12:34 . drwxrwxr-t 30 root admin 1088 Jun 4 13:09 .. lrwxr-xr-x 1 root admin 1 Jun 11 16:05 galenhd -> / OTHER INFO: I'm working with the PHP (5.3.1) and Apache server that come out of the box with Snow Leopard. Also, if I write a PHP script that loops and retries the "ls -la.." from the command line, it doesn't seem to fail. Nothing is changing about the code and/or filesystem between successes and failures, so this appears to be a truly intermittent failure. This is driving me crazy. Anyone have any idea what might be going on? Thanks, Galen

    Read the article

  • Same session and session ID for different subdomains in Grails project - How can I do that?

    - by fluxon
    I am currently working on a project that supports multiple languages. In order to be SEO-friendly, I am trying to redirect users to subdomains corresponding to their locale (or their preferred language). I.e., my project's URL is mydomain.com and I work with the subdomains en.mydomain.com, es.mydomain.com, de.mydomain.com, fr.mydomain.com ... you get the idea. All subdomains are served by the same Grails app for now. What happens is that my Grails project maintains different sessions (as seen by the session IDs) for every single subdomain, hence information is lost when a user switches between languages. I had not foreseen that. :( How can I explicitly set the session identifier? I would like it to be based on just mydomain.com. I got the hint that Apache Tomcat offers something like <Context sessionCookiePath="/" sessionCookieDomain=".mydomain.com">, but that does not help for the development environment etc. Any hints? Have you tried storing session information in the DB? This is sometimes used for load-balancing purposes and might help here as well?! Help is highly appreciated (as always)! Cheers!

    Read the article

  • Can I force the auto-generated Linq-to-SQL classes to use an OUTER JOIN?

    - by Gary McGill
    Let's say I have an Order table which has a FirstSalesPersonId field and a SecondSalesPersonId field. Both of these are foreign keys that reference the SalesPerson table. For any given order, either one or two salespersons may be credited with the order. In other words, FirstSalesPersonId can never be NULL, but SecondSalesPersonId can be NULL. When I drop my Order and SalesPerson tables onto the "Linq to SQL Classes" design surface, the class builder spots the two FK relationships from the Order table to the SalesPerson table, and so the generated Order class has a SalesPerson field and a SalesPerson1 field (which I can rename to SalesPerson1 and SalesPerson2 to avoid confusion). Because I always want to have the salesperson data available whenever I process an order, I am using DataLoadOptions.LoadWith to specify that the two salesperson fields are populated when the order instance is populated, as follows: dataLoadOptions.LoadWith<Order>(o => o.SalesPerson1); dataLoadOptions.LoadWith<Order>(o => o.SalesPerson2); The problem I'm having is that Linq to SQL is using something like the following SQL to load an order: SELECT ... FROM Order O INNER JOIN SalesPerson SP1 ON SP1.salesPersonId = O.firstSalesPersonId INNER JOIN SalesPerson SP2 ON SP2.salesPersonId = O.secondSalesPersonId This would make sense if there were always two salesperson records, but because there is sometimes no second salesperson (secondSalesPersonId is NULL), the INNER JOIN causes the query to return no records in that case. What I effectively want here is to change the second INNER JOIN into a LEFT OUTER JOIN. Is there a way to do that through the UI for the class generator? If not, how else can I achieve this? (Note that because I'm using the generated classes almost exclusively, I'd rather not have something tacked on the side for this one case if I can avoid it).
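    This doesn't touch the designer/LoadWith side of the question, but for reference, the LINQ query shape that does produce a LEFT OUTER JOIN is a group join followed by DefaultIfEmpty(). The sketch below is runnable LINQ to Objects with stand-in classes (not the generated entities); the same shape written against LINQ to SQL tables is translated to a LEFT OUTER JOIN, so orders with a NULL SecondSalesPersonId still come back.

    ```csharp
    // Runnable sketch with stand-in types; SalesPerson/Order here are illustrative,
    // not the designer-generated classes from the question.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class SalesPerson { public int Id; public string Name; }
    class Order { public int Id; public int FirstSalesPersonId; public int? SecondSalesPersonId; }

    static class LeftJoinDemo
    {
        static void Main()
        {
            var people = new List<SalesPerson>
            {
                new SalesPerson { Id = 1, Name = "Alice" },
                new SalesPerson { Id = 2, Name = "Bob" }
            };
            var orders = new List<Order>
            {
                new Order { Id = 10, FirstSalesPersonId = 1, SecondSalesPersonId = 2 },
                new Order { Id = 11, FirstSalesPersonId = 2, SecondSalesPersonId = null }
            };

            var rows =
                from o in orders
                join sp in people on o.SecondSalesPersonId equals (int?)sp.Id into seconds
                from second in seconds.DefaultIfEmpty()   // left outer join: null when there is no second salesperson
                select new { o.Id, Second = second };

            foreach (var row in rows)
                Console.WriteLine("{0}: {1}", row.Id, row.Second == null ? "(none)" : row.Second.Name);
            // Output: "10: Bob" and "11: (none)"
        }
    }
    ```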

    Read the article

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) of some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me to thinking about the following snippet: Code Sample lock (ResultTable) { newRow = ResultTable.NewRow(); } newRow["Key"] = currentKey; foreach (KeyValuePair<string, object> output in outputs) { object resultValue = output.Value; newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value; } lock (ResultTable) { ResultTable.Rows.Add(newRow); } (No guarantees that that compiles, hand-edited to mask proprietary information.) Explanation We have this cascading type of locking code in other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO.NET. As we all know, members of framework objects are usually not thread safe (which is the case in this situation), but the cascading locking should ensure that we are not reading and writing to ResultTable.Rows concurrently. We are safe, right? Hypothesis Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO.NET uses some kind of buffer for assigning column values that is not thread safe--even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at StackOverflow before beating my head against this for hours on end :) Conclusion Well, the consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!
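    For reference, a sketch of the full-lock variant the conclusion settles on: the row is created, populated and added while a single lock on the table is held, so no DataTable or DataRow member is touched concurrently. The method signature and the "Key" column are assumptions standing in for the framework code; note that KeyValuePair exposes .Key rather than .Name.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data;

    public static class ResultWriter
    {
        // Full-lock variant: create, populate, and add the row under one lock
        // so no DataTable/DataRow member is touched concurrently.
        public static void AddResultRow(DataTable resultTable, string currentKey,
                                        IEnumerable<KeyValuePair<string, object>> outputs)
        {
            lock (resultTable)
            {
                DataRow newRow = resultTable.NewRow();
                newRow["Key"] = currentKey;

                foreach (KeyValuePair<string, object> output in outputs)
                {
                    newRow[output.Key] = output.Value ?? (object)DBNull.Value;
                }

                resultTable.Rows.Add(newRow);
            }
        }
    }
    ```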

    Read the article

  • Proper way to scan a range of IP addresses

    - by Josh G
    Given a range of IP addresses entered by a user (through various means), I want to identify which of these machines have software running that I can talk to. Here's the basic process: (1) ping these addresses to find available machines, (2) connect to a known socket on the available machines, (3) send a message to the successfully established sockets, (4) compare the response to the expected response. Steps 2-4 are straightforward for me. What is the best way to implement the first step in .NET? I'm looking at the System.Net.NetworkInformation.Ping class. Should I ping multiple addresses simultaneously to speed up the process? If I ping one address at a time with a long timeout it could take forever. But with a small timeout, I may miss some machines that are available. Sometimes pings appear to be failing even when I know that the address points to an active machine. Do I need to ping twice in the event of the request getting discarded? To top it all off, when I scan large collections of addresses with the network cable unplugged, Ping throws a NullReferenceException in FreeUnmanagedResources(). !? Any pointers on the best approach to scanning a range of IPs like this?
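    Step 1 might look something like the sketch below: ping the addresses in parallel with a short timeout and one retry, rather than one long-timeout ping at a time. Parallel.ForEach requires .NET 4, and the 500 ms timeout, the single retry and the method names are illustrative choices rather than recommendations.

    ```csharp
    using System.Collections.Generic;
    using System.Net.NetworkInformation;
    using System.Threading.Tasks;

    public static class HostScanner
    {
        // Ping many addresses in parallel with a short timeout and one retry.
        public static List<string> FindReachable(IEnumerable<string> addresses)
        {
            var reachable = new List<string>();
            object sync = new object();

            Parallel.ForEach(addresses, address =>
            {
                using (var ping = new Ping())
                {
                    for (int attempt = 0; attempt < 2; attempt++)   // retry once for dropped requests
                    {
                        try
                        {
                            PingReply reply = ping.Send(address, 500);   // 500 ms timeout
                            if (reply != null && reply.Status == IPStatus.Success)
                            {
                                lock (sync) { reachable.Add(address); }
                                break;
                            }
                        }
                        catch (PingException)
                        {
                            // Unresolvable or unreachable address; treat as not available.
                        }
                    }
                }
            });

            return reachable;
        }
    }
    ```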

    Read the article

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to using a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, then I compiled a release build with NDEBUG set, and the code crashed. Not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside boost insert and rotate functions related to the index updates and they are happening because a node has NULL left and right pointers. My code looks a bit like this: struct Implementation { typedef std::pair<uint32_t, uint32_t> update_pair_type; struct watch {}; struct update {}; typedef boost::multi_index_container< update_pair_type, boost::multi_index::indexed_by< boost::multi_index::ordered_unique< boost::multi_index::tag<watch>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::first> >, boost::multi_index::ordered_non_unique< boost::multi_index::tag<update>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::second> > > > update_map_type; typedef std::vector< update_pair_type > update_list_type; update_map_type update_map; update_map_type::iterator update_hint; void register_update(uint32_t watch, uint32_t update); void do_updates(uint32_t start, uint32_t end); }; void Implementation::register_update(uint32_t watch, uint32_t update) { update_pair_type new_pair( watch_offset, update_offset ); update_hint = update_map.insert(update_hint, new_pair); if( update_hint->second != update_offset ) { bool replaced _unused_ = update_map.replace(update_hint, new_pair); assert(replaced); } }

    Read the article

  • Element Content Versus Attribute for Simple XML Value

    - by MB
    I know the elements versus attributes debate has come up many times here and elsewhere (e.g. here, here, here, here, and here) but I haven't seen much discussion of elements versus attributes for simple property values. So which of the following approaches do you think is better for storing a simple value? A: Value in Element Content: <TotalCount>553</TotalCount> <CelsiusTemperature>23.5</CelsiusTemperature> <SingleDayPeriod>2010-05-29</SingleDayPeriod> <ZipCodeLocation>12203</ZipCodeLocation> or B: Value in Attribute: <TotalCount value="553"/> <CelsiusTemperature value="23.5"/> <SingleDayPeriod day="2010-05-29"/> <ZipCodeLocation code="12203"/> I suspect that putting the value in the element content (A) might look a little more familiar to most folks (though I'm not sure about that). Putting the value in an attribute (B) might use fewer characters, but that depends on the length of the element and attribute names. Putting the value in an attribute (B) might be more extensible, because you could potentially include all sorts of extra information as nested elements. Whereas, by putting the value inside the element content (A), you're restricting extensibility to adding more attributes. But then extensibility often isn't a concern for really simple properties - sometimes you know that you'll never need to add additional data. Bottom line might be that it simply doesn't matter, but it would still be great to hear some thoughts and see some votes for the two options.

    Read the article

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where there are two types of entities loaded: Product and Category, with Product.CategoryId referencing Category.Id. We have CRUD operations available on products (not Categories). If on another screen Categories are updated (or by another user on the network), we would like to be able to reload the Categories while preserving the context we currently use, since we could be in the middle of editing data, and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways: 1) Keeping products attached to the context, and categories detached from the context. This would mean that we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example, when printing data). 2) Detaching/re-attaching all Categories from the current context. Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached); And then reloading the categories, which will get fresh data. Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, and not navigation properties, since when detaching all Categories, Product.Category navigation properties would be reset to null also. Also, there could be a potential performance problem, which we did not test, since there could be a couple of thousand products loaded, and all would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
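    A sketch of approach 2 with hypothetical Category and DbContext shapes: snapshot the tracked Category entries, detach them, then query again so EF materializes fresh rows instead of handing back the cached entities. It is illustrative only and does not address the navigation-property reset or the performance concern raised above.

    ```csharp
    using System.Data.Entity;   // EF6
    using System.Linq;

    public class Category
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public static class ContextRefresh
    {
        // Detach every tracked Category, then re-query so EF reads current data.
        public static void ReloadCategories(DbContext context)
        {
            var tracked = context.ChangeTracker.Entries()
                                 .Where(e => e.Entity is Category)
                                 .ToList();                     // snapshot before changing states

            foreach (var entry in tracked)
                entry.State = EntityState.Detached;

            // Re-query; the fresh rows are tracked again and reflect current data.
            var categories = context.Set<Category>().ToList();
        }
    }
    ```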

    Read the article

  • Which templating languages output HTML *as a tree of nodes*?

    - by alamar
    HTML is, above all, a tree of nodes. It's not just text. However, most templating engines handle their input and output as if it were just text; they don't care what happens around their tags, their {$foo}'s and <% bar() %>'s, and they don't care what they are outputting. Sometimes they happen to produce correct HTML, but that's just a coincidence; they didn't aim for it, all they wanted was to replace some funny marks in the text stream with their evaluation. There are a few templating engines which do treat their output as a set of nodes; XSLT and Haml come to mind. For some tasks, this has advantages: for example, you can automatically reformat (say, delete all empty text nodes; auto-indent; word-wrap). The result is guaranteed to be correct XML/SGML unless you use the small subset of operations that can break that. Also, such a templating engine will automatically quote strings, differently in text nodes and in attributes, because it knows exactly whether you're writing an attribute or a text node. Moreover, it can conditionally remove a node from the output because it knows where the node begins and ends, which is useful, and it can do other non-trivial node operations. You might not like XSLT for its verbosity or its functional style, but it certainly helps that your template is xmllint-able XML and your output is good SGML/XML. So the question is: which template engines do you know of that treat their output as a set of correct nodes, not just unstructured text? I know XSLT, Haml and some obscure Python-based one. Moar!

    Read the article

  • Time with and without OpenMP

    - by was
    I have a question. I tried to improve a well-known algorithm in C, the Fox algorithm for matrix multiplication. Relevant link, without OpenMP: (http://web.mst.edu/~ercal/387/MPI/ppmpi_c/chap07/fox.c). The initial program used only MPI, and I tried to insert OpenMP into the matrix multiplication method in order to improve the computation time. (This program runs on a cluster whose computers have 2 cores, so I created 2 threads.) The problem is that there is no difference in time with and without OpenMP. I observed that with OpenMP the time is sometimes equal to or greater than the time without it. I tried to multiply two 600x600 matrices. void Local_matrix_multiply( LOCAL_MATRIX_T* local_A /* in */, LOCAL_MATRIX_T* local_B /* in */, LOCAL_MATRIX_T* local_C /* out */) { int i, j, k; chunk = CHUNKSIZE; // 100 #pragma omp parallel shared(local_A, local_B, local_C, chunk, nthreads) private(i,j,k,tid) num_threads(2) { /* tid = omp_get_thread_num(); if(tid == 0){ nthreads = omp_get_num_threads(); printf("Matrix multiplication starts with %d threads\n", nthreads); } printf("Thread %d uses the matrix: \n", tid); */ #pragma omp for schedule(static, chunk) for (i = 0; i < Order(local_A); i++) for (j = 0; j < Order(local_A); j++) for (k = 0; k < Order(local_B); k++) Entry(local_C,i,j) = Entry(local_C,i,j) + Entry(local_A,i,k)*Entry(local_B,k,j); } //end pragma omp parallel } /* Local_matrix_multiply */

    Read the article

  • Javascript onclick() event bubbling - working or not?

    - by user1071914
    I have a table in which the table row tag is decorated with an onclick() handler. If the row is clicked anywhere, it will load another page. In one of the elements in the row is an anchor tag which also leads to another page. The desired behavior is that if they click on the link, "delete.html" is loaded. If they click anywhere else in the row, "edit.html" is loaded. The problem is that sometimes (according to users) both the link and the onclick() are fired at once, leading to a problem in the back-end code. They swear they are not double-clicking. I don't know enough about JavaScript event bubbling, handling and whatever to even know where to start with this bizarre problem, so I'm asking for help. Here's a fragment of the rendered page, showing the row with the embedded link and associated script tag. Any suggestions are welcome: <tr id="tableRow_3339_0" class="odd"> <td class="l"></td> <td>PENDING</td> <td>Yabba Dabba Doo</td> <td>Fred Flintstone</td> <td> <a href="/delete.html?requestId=3339"> <div class="deleteButtonIcon"></div> </a> </td> <td class="r"></td> </tr> <script type="text/javascript">document.getElementById("tableRow_3339_0").onclick = function(event) { window.location = '//edit.html?requestId=3339'; };</script>

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and others are scheduled to run (DB backups, AV scans, etc.). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already some internal scheduler that does very little, and makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
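    One shape the "external watchdog" idea could take, sketched as a separate small process rather than a change to the VC2005 service: a timer that invokes some cheap existing operation on the service every few minutes so its heap keeps being referenced overnight. The operation being called is a placeholder; nothing here is specific to the actual service.

    ```csharp
    using System;
    using System.Threading;

    // Keep-warm watchdog sketch: every few minutes, call some lightweight
    // operation the service already exposes. KeepWarmCall is a placeholder,
    // not an actual endpoint of the service in question.
    class KeepWarmWatchdog
    {
        static void Main()
        {
            TimeSpan interval = TimeSpan.FromMinutes(10);   // illustrative choice

            using (var timer = new Timer(_ => KeepWarmCall(), null, TimeSpan.Zero, interval))
            {
                Console.WriteLine("Watchdog running; press Enter to stop.");
                Console.ReadLine();
            }
        }

        static void KeepWarmCall()
        {
            try
            {
                // Placeholder: invoke the service's cheap entry point here
                // (an RPC, an HTTP status endpoint, a named-pipe no-op, ...).
                Console.WriteLine("{0:T} keep-warm call", DateTime.Now);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Keep-warm call failed: " + ex.Message);
            }
        }
    }
    ```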

    Read the article

  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general or specifically the lack of non-consecutive dotdot support would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from: //depot/foo/../bar/single.c //depot/foo/docs/../../other/double.c //depot/foo/usr/bin/../../../else/more/triple.c to: //depot/bar/single.c //depot/other/double.c //depot/else/more/triple.c And my script: begin paths = File.open(ARGV[0]).readlines puts(paths) new_paths = Array.new paths.each { |path| folders = path.split('/') if ( folders.include?('..') ) num_dotdots = 0 first_dotdot = folders.index('..') last_dotdot = folders.rindex('..') folders.each { |item| if ( item == '..' ) num_dotdots += 1 end } if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant? folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only end end folders.map! { |elem| if ( elem !~ /\n/ ) elem = elem + '/' else elem = elem end } new_paths << folders.to_s } puts(new_paths) end

    Read the article

  • How to maintain long-lived Python projects w.r.t. dependencies and Python versions?

    - by Gyom
    Short version: how can I get rid of the multiple-versions-of-Python nightmare? Long version: over the years, I've used several versions of Python, and what is worse, several extensions to Python (e.g. pygame, pylab, wxPython...). Each time it was on a different setup, with different OSes, sometimes different architectures (like my old PowerPC Mac). Nowadays I'm using a Mac (OS X 10.6 on x86-64) and it's a dependency nightmare each time I want to revive a script older than a few months. Python itself already comes in three different flavours in /usr/bin (2.5, 2.6, 3.1), but I had to install 2.4 from MacPorts for pygame, and something else (I cannot remember what) forced me to install all three others from MacPorts as well, so at the end of the day I'm the happy owner of seven (!) instances of Python on my system. But that's not the problem; the problem is, none of them has the right (i.e. same set of) libraries installed, some of them are 32-bit, some 64-bit, and now I'm pretty much lost. For example, right now I'm trying to run a three-year-old script (not written by me) which used to use matplotlib/numpy to draw a real-time plot within a rectangle of a wxWidgets window. But I'm failing miserably: py26-wxpython from MacPorts won't install, stock Python has wxWidgets included but also has some conflict between 32 bits and 64 bits, and it doesn't have numpy... what a mess! Obviously, I'm doing things the wrong way. How do you usually cope with all that chaos?

    Read the article

  • How to properly set JavaMail timeout setting

    - by user286149
    I am using JavaMail to connect to a POP3 server. Further, I set the following properties, so that JavaMail won't wait too long if an email server doesn't respond: props.setProperty("mail.pop3.connectionpooltimeout", "3000"); props.setProperty("mail.pop3.connectiontimeout", "3000"); props.setProperty("mail.pop3.timeout", "3000"); However, in some cases the timeout works properly but sometimes JavaMail freezes for minutes(!) with the following debug message: DEBUG POP3: connecting to host "pop3.yahoo.com", port 110, isSSL false Changing ports or protocols (SSL, TLS..) has no effect. I assume that the host simply doesn't exist. For example, if I poll pop3.yahoo.com instead of pop.mail.yahoo.com (which would be the right host name), I have to wait a very long time until a timeout exception occurs. After several minutes, I get the following exception and the application continues to run: java.net.ConnectException: Operation timed out pop3.yahoo.com seems to exist but won't respond: localhost:~ me$ ping pop3.yahoo.com PING pop3.yahoo.com (206.190.46.10): 56 data bytes Request timeout for icmp_seq 0 Request timeout for icmp_seq 1 Request timeout for icmp_seq 2 Request timeout for icmp_seq 3 ^C You might be asking why I use pop3.yahoo.com instead of pop.mail.yahoo.com. Well, I simply wanted to test what happens if the user of my application enters a wrong host name. I believe that this issue is related to this report http://www.opensubscriber.com/message/[email protected]/180946.html where the poster claims that the problem occurs if the email server closes the connection. JavaMail then seems to wait a very long time (I don't know why). Since the issue wasn't resolved in the link I posted: Does somebody know how to fix or at least debug this? Any help would be really appreciated!

    Read the article

  • How can I help fellow students struggling in programming classes?

    - by David Barry
    I'm a computer science student finishing up my second semester of programming classes. I've enjoyed them quite a bit, and learned a lot, but it seems other students are struggling with the concepts and assignments more than I am. When an assignment is due, the inevitable group email comes out a day or two before with people needing some help, either with a specific part of the problem, or sometimes people just seem to have a hard time knowing where to start. I'd really like to be able to help out, but I have a hard time thinking of the right way to give them help without giving them the answer. When I'm having trouble understanding a concept, a code snippet can go a long way toward helping me, but at the same time, if it makes a lot of sense, it can be difficult to think of another way to go about it. Plus, the Academic Integrity section of each assignment is always looming overhead, warning against sharing code with others. I've tried using pseudocode to help give others an idea of program flow, leaving them to figure out how to implement certain aspects of it, but I didn't get much feedback and don't know how much it actually helped them out, or if it just confused them further. So I'm basically looking to see if anyone has experience with this, or good ways that I can help out other students by nudging them in the right direction or helping them think about the problem in the right way.

    Read the article

  • How to find out where a thread lock happened?

    - by SchlaWiener
    One of our company's Windows Forms applications had a strange problem for several months. The app worked very reliably for most of our customers, but on some PCs (mostly with a wireless LAN connection) the app sometimes just didn't respond anymore. (You click on the UI and Windows asks you to wait or kill the app.) I wasn't able to track down the problem for a long time, but now I have figured out what happened. The app had this line of code // don't blame me for this. Wasn't my code :D Control.CheckForIllegalCrossThreadCalls = false and used some background threads to modify the controls. Now I have found a way to reproduce the application-stops-responding bug on my dev machine and tracked it down to a line where I actually used Invoke() to run a task in the main thread. Me.Invoke(MyDelegate, arg1, arg2) Obviously there was a thread lock somewhere. After removing the Control.CheckForIllegalCrossThreadCalls = false statement and refactoring the whole program to use Invoke() when modifying a control from a background thread, the problem is (hopefully) gone. However, I am wondering if there is a way to find such bugs without debugging every line of code. (Even if I break into the debugger after the app stops responding, I can't tell what happened last, because the IDE didn't jump to the Invoke() statement.) In other words: if my app hangs, how can I figure out which line of code was executed last? Maybe even on the customer's PC. I know VS2010 offers a backwards debugging feature; maybe that would be a solution, but currently I am using VS2008.
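    Not an answer to the "which line ran last" question, but a sketch of the refactoring described above: a small helper that marshals control updates onto the UI thread instead of turning off CheckForIllegalCrossThreadCalls. It is written in C# rather than VB, and the helper name is an invention.

    ```csharp
    using System;
    using System.Windows.Forms;

    public static class ControlExtensions
    {
        // Run the update on the UI thread when called from a background thread;
        // run it directly when already on the UI thread.
        public static void SafeInvoke(this Control control, Action action)
        {
            if (control.InvokeRequired)
                control.Invoke(action);
            else
                action();
        }
    }

    // Usage from a background thread (label1 is a hypothetical control):
    //   label1.SafeInvoke(() => label1.Text = "done");
    ```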

    Read the article

  • jquery callback functions failing to finish execution

    - by calumbrodie
    I'm testing a jQuery app I've written and have come across some unexpected behaviour. $('button.clickme').live('click',function(){ //do x (takes 2 seconds) //do y (takes 4 seconds) //do z (takes 0.5 seconds) }) The event can be triggered by a number of buttons. What I'm finding is that when I click each button slowly (allowing 10 seconds between clicks), my callback function executes correctly (actions x, y & z complete). However, if I rapidly click buttons on my page, it appears that the function sometimes only completes up to step x or y before terminating. My question: if this function is fired by clicking a second DOM element while the first callback function is completing, will jQuery terminate the first callback and start over again? Do I have to write my callback function explicitly outside the event handler and then call it? function doStuff() { //do x //do y //do z ( } $('button.clickme').live('click',doStuff()) If this is the case, can someone explain why this is happening or give me a link to some advice on best practices for closures etc. - I'd like to know the BEST way to write jQuery to improve performance etc. Thanks

    Read the article

  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact, in something in our professional careers, something we learned from others, lessons learned the hard way by ourselves, or by others who came before us. Unfortuntely, as these truisms spread throughout the community, the details—why they came about and the caveats that affect when they apply—tend to not spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we sometimes misapply them, we pay a penalty that could be avoided by understooding the details behind them. For example, when user-defined functions were first introduced in SQL Server it became "common knowledge" within a year or so that they had extremely bad performance (because it required a re-compilation for each use) and should be avoided. This "trusim" still increases many database developers' aversion to using UDFs, even though Microsoft's introduction of InLine UDFs, which do not suffer from this issue at all, mitigates this issue substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs, because of this. What other common not-so-"trusims" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start off with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but—in the edge cases, or because of not understanding the principle thoroughly, because technology has changed since it first spread, or applying the rule today without understanding the details behind the rule—can easily backfire or cause the opposite of the intended effect.

    Read the article

  • How do I wait for a C# event to be raised?

    - by Evan Barkley
    I have a Sender class that sends a Message on an IChannel: public class MessageEventArgs : EventArgs { public Message Message { get; private set; } public MessageEventArgs(Message m) { Message = m; } } public interface IChannel { event EventHandler<MessageEventArgs> MessageReceived; void Send(Message m); } public class Sender { public const int MaxWaitInMs = 5000; private IChannel _c = ...; public Message Send(Message m) { _c.Send(m); // wait for MaxWaitInMs to get an event from _c.MessageReceived // return the message or null if no message was received in response } } When we send messages, the IChannel sometimes gives a response, depending on what kind of Message was sent, by raising the MessageReceived event. The event arguments contain the message of interest. I want the Sender.Send() method to wait for a short time to see if this event is raised. If so, I'll return its MessageEventArgs.Message property. If not, I return a null Message. How can I wait in this way? I'd prefer not to have to do the threading legwork with ManualResetEvents and such, so sticking to regular events would be optimal for me.
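    One way to fill in the wait, using the IChannel, Message and MessageEventArgs types from the question: subscribe before sending, have the handler capture the message and signal a wait handle, and pass the timeout to WaitOne. This does use a reset event internally, which the question hoped to avoid, but callers of Send() never see it; treat it as a sketch rather than the definitive answer.

    ```csharp
    using System;
    using System.Threading;

    public class Sender
    {
        public const int MaxWaitInMs = 5000;
        private readonly IChannel _c;

        public Sender(IChannel channel) { _c = channel; }

        // Subscribe, send, wait up to MaxWaitInMs, then unsubscribe.
        public Message Send(Message m)
        {
            Message response = null;
            using (var received = new AutoResetEvent(false))
            {
                EventHandler<MessageEventArgs> handler = (sender, args) =>
                {
                    response = args.Message;
                    received.Set();
                };

                _c.MessageReceived += handler;
                try
                {
                    _c.Send(m);
                    received.WaitOne(MaxWaitInMs);   // returns false on timeout; response stays null
                }
                finally
                {
                    _c.MessageReceived -= handler;
                }
            }
            return response;
        }
    }
    ```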

    Read the article

  • Dealing with rapid tapping on Buttons

    - by Eric Burke
    I have a Button with an OnClickListener. For illustrative purposes, consider a button that shows a modal dialog: public class SomeActivity ... { protected void onCreate(Bundle state) { super.onCreate(state); findViewById(R.id.ok_button).setOnClickListener( new View.OnClickListener() { public void onClick(View v) { // This should block input new AlertDialog.Builder(SomeActivity.this) .setCancelable(true) .show(); } }); } Under normal usage, the alert dialog appears and blocks further input. Users must dismiss the dialog before they can tap the button again. But sometimes the button's OnClickListener is called twice before the dialog appears. You can duplicate this fairly easily by tapping really fast on the button. I generally have to try several times before it happens, but sooner or later I'll trigger multiple onClick(...) calls before the dialog blocks input. I see this behavior in Android 2.1 on the Motorola Droid phone. We've received 4 crash reports in the Market, indicating this occasionally happens to people. Depending on what our OnClickListeners do, this causes all sorts of havoc. How can we guarantee that blocking dialogs actually block input after the first tap?

    Read the article

  • How do you keep application logic separate from UI when UI components have built-in functionality?

    - by Al C
    I know it's important to keep user interface code separated from domain code--the application is easier to understand, maintain, change, and (sometimes) isolate bugs. But here's my mental block ... Delphi comes with components with methods that do what I want, e.g., a RichText Memo component lets me work with rich text. Other components, like TMS's string grid not only do what I want, but I paid extra for the functionality. These features put the R in RAD. It seems illogical to write my own classes to do things somebody else has already done for me. It's reinventing the wheel [ever tried working directly with rich text? :-) ] But if I use the functionality built into components like these, then I will end up with lots of intermingled UI and domain code--I'll have a form with most of my code built into its event handlers. How do you deal with this issue? ... Or, if I want to continue using the code others have already written for me, how would you suggest I deal with the issue?

    Read the article

  • How do I implement configurations and settings?

    - by Malvolio
    I'm writing a system that is deployed in several places and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL, S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A setting is a named value but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default. So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live. Better, of course, for every file to be compiled — so if there's a missing configuration, or one of the wrong type, it won't get past the compiler — and to inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values, especially lists, but also maps and tuples. The downside is, the files are sometimes maintained by people who aren't developers, so they have to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.) What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?

    Read the article

  • Move camera to fit 3D scene

    - by Burre
    Hi there. I'm looking for an algorithm to fit a bounding box inside a viewport (in my case a DirectX scene). I know about algorithms for centering a bounding sphere in an orthographic camera but would need the same for a bounding box and a perspective camera. I have most of the data: I have the up-vector for the camera; I have the center point of the bounding box; I have the look-at vector (direction and distance) from the camera point to the box center; I have projected the points on a plane perpendicular to the camera and retrieved the coefficients describing how much the max/min X and Y coords are within or outside the viewing plane. Problems I have: the center of the bounding box isn't necessarily in the center of the viewport (that is, its bounding rectangle after projection). Since the field of view skews the projection (see http://en.wikipedia.org/wiki/File:Perspective-foreshortening.svg), I cannot simply use the coefficients as a scale factor to move the camera because it will overshoot/undershoot the desired camera position. How do I find the camera position so that it fills the viewport as pixel-perfectly as possible (the exception being that if the aspect ratio is far from 1.0, it only needs to fill one of the screen axes)? I've tried some other things: using a bounding sphere and the tangent to find a scale factor to move the camera. This doesn't work well because it doesn't take into account the perspective projection, and secondly, spheres are bad bounding volumes for my use because I have a lot of flat and long geometries. Iterating calls to the function to get a smaller and smaller error in the camera position. This has worked somewhat, but I can sometimes run into weird edge cases where the camera position overshoots too much and the error factor increases. Also, when doing this I didn't recenter the model based on the position of the bounding rectangle. I couldn't find a solid, robust way to do that reliably. Help please!
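    As an aside on the sphere-based attempt dismissed above: for a bounding sphere there is a closed-form answer, which at least gives a non-iterative starting point before correcting for the box. With sphere radius r and vertical/horizontal fields of view θ_v and θ_h, the camera should sit at the following distance from the sphere centre along the look-at vector (a standard frustum-tangency result, not something specific to this question):

    ```latex
    % Closed-form fit of a bounding sphere of radius r in a perspective frustum;
    % take the larger of the vertical and horizontal requirements.
    d_v = \frac{r}{\sin(\theta_v / 2)}, \qquad
    d_h = \frac{r}{\sin(\theta_h / 2)}, \qquad
    d = \max(d_v, d_h)
    ```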

    Read the article

  • How is this Perl code selecting two different elements from an array?

    - by Mike
    I have inherited some code from a guy whose favorite pastime was to shorten every line to its absolute minimum (and sometimes only to make it look cool). His code is hard to understand but I managed to understand (and rewrite) most of it. Now I have stumbled on a piece of code which, no matter how hard I try, I cannot understand. my @heads = grep {s/\.txt$//} OSA::Fast::IO::Ls->ls($SysKey,'fo','osr/tiparlo',qr{^\d+\.txt$}) || (); my @selected_heads = (); for my $i (0..1) { $selected_heads[$i] = int rand scalar @heads; for my $j (0..@heads-1) { last if (!grep $j eq $_, @selected_heads[0..$i-1]); $selected_heads[$i] = ($selected_heads[$i] + 1) % @heads; #WTF? } my $head_nr = sprintf "%04d", $i; OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].txt","$recdir/heads/$head_nr.txt"); OSA::Fast::IO::Cp->cp($SysKey,'',"osr/tiparlo/$heads[$selected_heads[$i]].cache","$recdir/heads/$head_nr.cache"); } From what I can understand, this is supposed to be some kind of randomizer, but I have never seen a more complex way to achieve randomness. Or are my assumptions wrong? At least, that's what this code is supposed to do: select 2 random files and copy them. === NOTES === The OSA Framework is a framework of our own. Its modules are named after their UNIX counterparts and do some basic testing so that the application does not need to bother with that.

    Read the article
