Search Results

Search found 6342 results on 254 pages for 'behavior'.

Page 215 of 254

  • Does Interlocked guarantee visibility to other threads in C# or do I still have to use volatile?

    - by Lirik
    I've been reading the answer to a similar question, but I'm still a little confused... Abel had a great answer, but this is the part that I'm unsure about:

        ...declaring a variable volatile makes it volatile for every single access. It is impossible to force this behavior any other way, hence volatile cannot be replaced with Interlocked. This is needed in scenarios where other libraries, interfaces or hardware can access your variable and update it anytime, or need the most recent version.

    Does Interlocked guarantee visibility of the atomic operation to all threads, or do I still have to use the volatile keyword on the value in order to guarantee visibility of the change? Here is my example:

        public class CountDownLatch
        {
            private volatile int m_remain; // <--- do I need the volatile keyword there since I'm using Interlocked?
            private EventWaitHandle m_event;

            public CountDownLatch(int count)
            {
                Reset(count);
            }

            public void Reset(int count)
            {
                if (count < 0)
                    throw new ArgumentOutOfRangeException();
                m_remain = count;
                m_event = new ManualResetEvent(false);
                if (m_remain == 0)
                {
                    m_event.Set();
                }
            }

            public void Signal()
            {
                // The last thread to signal also sets the event.
                if (Interlocked.Decrement(ref m_remain) == 0)
                    m_event.Set();
            }

            public void Wait()
            {
                m_event.WaitOne();
            }
        }
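
    A hedged sketch of one way this is often handled: Interlocked operations carry full-fence semantics, so the decrement itself is published to all threads; what they do not cover are plain reads of m_remain elsewhere, which is why such a field is either left volatile or read through Interlocked as well. The class below is illustrative only, not a drop-in replacement for the code above; it routes every access through Interlocked so the volatile keyword can be dropped.

        using System;
        using System.Threading;

        public class CountDownLatchSketch
        {
            private int m_remain;   // not volatile: never read or written directly below
            private readonly ManualResetEvent m_event = new ManualResetEvent(false);

            public CountDownLatchSketch(int count)
            {
                if (count < 0) throw new ArgumentOutOfRangeException("count");
                Interlocked.Exchange(ref m_remain, count);   // full-fence write
                if (count == 0) m_event.Set();
            }

            // Full-fence read: comparing against 0 and "replacing" with 0 changes nothing,
            // but returns the current value with Interlocked's visibility guarantees.
            public int Remaining
            {
                get { return Interlocked.CompareExchange(ref m_remain, 0, 0); }
            }

            public void Signal()
            {
                // The last thread to signal also sets the event.
                if (Interlocked.Decrement(ref m_remain) == 0)
                    m_event.Set();
            }

            public void Wait()
            {
                m_event.WaitOne();
            }
        }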

    Read the article

  • jQuery slideDown + CSS Floats

    - by danilo
    I'm using an HTML table with several rows. Every second row - containing details about the preceding row - is hidden using CSS. When clicking the first row, the second row is shown using jQuery show(). This is quite nice, but I would prefer the slideDown effect. The problem is that inside the details row there are two floating DIVs, one floating on the left and one on the right. Now if I slideDown the previously hidden row, the contained DIVs behave strangely and "jump around". See this animated gif to understand what I mean: http://ich-wars-nicht.ch/tmp/lunapic_127365879362365_.gif

    The markup:

        <tr class="row-vm">
            <td>...</td>
            ...
        </tr>
        <tr class="row-details">
            <td colspan="8">
                <div class="vmdetail-left"> ... </div>
                <div class="vmdetail-right"> ... </div>
            </td>
        </tr>

    The CSS:

        .table-vmlist tr.row-details { display: none; }
        div.vmdetail-left  { float: left;  width: 50%; }
        div.vmdetail-right { float: right; width: 50%; }

    And the jQuery code:

        if ($(this).next().css('display') == 'none') {
            // Show details
            //$(this).next().show();
            $(this).next().slideDown();
        } else {
            // Hide details
            //$(this).next().hide();
            $(this).next().slideUp();
        }

    Is there a way to fix this behavior, and to implement a nice slideDown effect?

    Read the article

  • What is the purpose of the Html "no-js" class?

    - by Swader
    I notice that in a lot of template engines, in the HTML5 Boilerplate, in various frameworks and in plain PHP sites there is a no-js class added onto the html element.

    Why is this done? Is there some sort of default browser behavior that reacts to this class? Why include it always? Does that not render the class itself obsolete, since there is no case without "no-js" and the html element can be addressed directly?

    Here is an example from the HTML5 Boilerplate index.html:

        <!--[if lt IE 7 ]> <html lang="en" class="no-js ie6"> <![endif]-->
        <!--[if IE 7 ]>    <html lang="en" class="no-js ie7"> <![endif]-->
        <!--[if IE 8 ]>    <html lang="en" class="no-js ie8"> <![endif]-->
        <!--[if IE 9 ]>    <html lang="en" class="no-js ie9"> <![endif]-->
        <!--[if (gt IE 9)|!(IE)]><!--> <html lang="en" class="no-js"> <!--<![endif]-->

    As you can see, the html element will always have this class. Can someone explain why this is done so often?

    Read the article

  • What are the default return types for operator< and operator[] in C++ (Visual Studio 6)?

    - by DustOff
    I've inherited a large Visual Studio 6 C++ project that needs to be translated for VS2005. Some of the classes define operator< and operator[], but don't specify return types in the declarations. VS6 allows this, but VS2005 does not. I am aware that the C standard specifies that the default return type for normal functions is int, and I assumed VS6 might have been following that, but would this apply to C++ operators as well? Or could VS6 figure out the return type on its own?

    For example, the code defines a custom string class like this:

        class String
        {
            char arr[16];
        public:
            operator<(const String& other) { return something1 < something2; }
            operator[](int index) { return arr[index]; }
        };

    Would VS6 have simply put the return types for both as int, or would it have been smart enough to figure out that operator[] should return a char and operator< should return a bool (and not convert both results to int all the time)?

    Of course I have to add return types to make this code VS2005-compliant, but I want to make sure to specify the same type as before, so as not to immediately change program behavior (we're going for compatibility at the moment; we'll standardize things later).

    Read the article

  • Cannot pass null to server using jQuery AJAX. Value received at the server is the string "null".

    - by Tom
    I am converting a javascript/php/ajax application to use jQuery to ensure compatibility with browsers other than Firefox. I am having trouble passing true, false, and null values using jQuery's ajax function.

    JavaScript code:

        $.ajax({
            url: <server_url>,
            dataType: 'json',
            type: 'POST',
            success: receiveAjaxMessage,
            data: {
                valueTrue: true,
                valueFalse: false,
                valueNull: null
            }
        });

    PHP code:

        var_dump($_POST);

    Server output:

        array(3) {
            ["valueTrue"]=> string(4) "true"
            ["valueFalse"]=> string(5) "false"
            ["valueNull"]=> string(4) "null"
        }

    The problem is that the null, true, and false values are being converted to strings. The JavaScript AJAX code currently in use passes null, true, and false correctly but only works in Firefox. Does anyone know how to solve this problem using jQuery?

    Here is some working code (not using jQuery) to compare with the non-working code given above.

    JavaScript code:

        ajaxPort.send(<server_url>, {
            valueTrue: true,
            valueFalse: false,
            valueNull: null
        });

    PHP code:

        var_dump(json_decode(file_get_contents('php://input'), true));

    Server output:

        array(3) {
            ["valueTrue"]=> bool(true)
            ["valueFalse"]=> bool(false)
            ["valueNull"]=> NULL
        }

    Note that the null, true, and false values are correctly received. Note also that in the second method the $_POST array is not used in the PHP code. I think this is the key to the problem, but I cannot find a way to replicate this behavior using jQuery.

    Read the article

  • How do I stop a bouncy JQuery animation?

    - by Miguel
    In a webapp I'm working on, I want to create some slider divs that will move up and down on mouseover and mouseout, respectively. I currently have it implemented with jQuery's hover() function, using animate() and reducing/increasing its top CSS value as needed. This works fairly well, actually.

    The problem is that it tends to get stuck. If you move the mouse over it (especially near the bottom) and quickly remove it, it will slide up and down continuously and won't stop until it has completed 3-5 cycles. To me, it seems that the issue might have to do with one animation starting before another is done (e.g. the two are trying to run, so they slide back and forth).

    Okay, now for the code. Here's the basic jQuery that I'm using:

        $('.slider').hover(
            /* mouseover */
            function(){
                $(this).animate({ top : '-=120' }, 300);
            },
            /* mouseout */
            function(){
                $(this).animate({ top : '+=120' }, 300);
            }
        );

    I've also recreated the behavior in a JSFiddle. Any ideas on what's going on? :)

    ==EDIT==
    UPDATED JSFiddle

    Read the article

  • no such file to load -- for several gems unpacked in a Rails 2.3.8 app

    - by vincentp
    Hi, I unpacked several gems into the /vendor/gems folder, and I get the same error message for 5 of these gems when I try to start my Rails application. The date-performance one as an example:

        no such file to load -- date_performance.so
        /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `gem_original_require'
        /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in `require'
        /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /opt/ruby-enterprise-1.8.7-20090928/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:156:in `require'
        /path_to_my_app/vendor/gems/date-performance-0.4.8/lib/date/performance.rb:34
        ...

    Here is line 34:

        require 'date_performance.so'

    I'm including the gem using the following code:

        config.gem "date-performance", :lib => "date/performance"

    The '.so' file is under /path_to_my_app/vendor/gems/date-performance-0.4.8/lib/. Any idea why things were working while the gems were not unpacked? Do you have any idea about this behavior?

    I'm using:
    - Rails 2.3.8
    - REE 1.8.7
    - gem 1.3.6
    - Mac OS X

    Thanks! Vincent

    Read the article

  • Catching exception in Main() method

    - by Corvin
    Consider the following simple application: a Windows Forms project created by the "new C# Windows application" sequence in VS, with Main modified in the following way:

        public static void Main()
        {
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            try
            {
                Application.Run(new Form1());
            }
            catch (Exception ex)
            {
                MessageBox.Show("An unexpected exception was caught.");
            }
        }

    Form1.cs contains the following modification:

        private void Form1_Load(object sender, EventArgs e)
        {
            throw new Exception("Error");
        }

    If I press F5 in the IDE then, as I expect, I see a message box saying that the exception was caught and the application quits. If I go to Debug (or Release)/bin and launch the executable, I see the standard "Unhandled exception" window, meaning that my exception handler doesn't work. Obviously, that has something to do with the exception being thrown from a different thread than the one Application.Run is called from. But the question remains - why does the behavior differ depending on whether the application has been run from the IDE or from the command line? What is the best practice to ensure that no exceptions remain unhandled in the application?
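
    For the second question, a hedged sketch of the usual wiring (these are standard WinForms/.NET events; the handler bodies and names are illustrative): Application.ThreadException covers exceptions thrown on the UI message loop, which is where the built-in "Unhandled exception" dialog otherwise steps in when no debugger is attached, and AppDomain.UnhandledException is the last-chance hook for everything else.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class ProgramSketch
        {
            [STAThread]
            public static void Main()
            {
                // Route UI-thread exceptions to our handler instead of the built-in dialog.
                Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
                Application.ThreadException += OnUiThreadException;

                // Non-UI threads: last-chance logging (the process may still terminate).
                AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;

                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new Form());   // stands in for new Form1()
            }

            private static void OnUiThreadException(object sender, ThreadExceptionEventArgs e)
            {
                MessageBox.Show("An unexpected exception was caught: " + e.Exception.Message);
            }

            private static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
            {
                MessageBox.Show("Fatal exception: " + e.ExceptionObject);
            }
        }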

    Read the article

  • CALayer Border is appearing above subview (Z-order related, I think)

    - by kurisukun
    I have searched but could not find the reason for this behavior. I have a UIButton whose image I am setting. Here is how the button should appear. Note that this is just a photoshop of the intended button design.

    Essentially, it is a square custom UIButton with a white border and a little surrounding shadow. In the upper right corner, there is an "X" mark that will be added programmatically as a subview. Here is the screenshot of the button within the actual app. At this point, I have only added a shadow and the X mark as a subview.

    Now, when I try to add the white border, it seems that the white border appears above the X mark sublayer. I don't know why. Here is the code that I am using:

        // selectedPhotoButton is the UIButton with UIImage set earlier
        // At this point, I am adding in the shadow
        [[selectedPhotoButton layer] setShadowColor:[[UIColor lightGrayColor] CGColor]];
        [[selectedPhotoButton layer] setShadowOffset:CGSizeMake(1.0f, 1.0f)];
        [[selectedPhotoButton layer] setShadowRadius:0.5f];
        [[selectedPhotoButton layer] setShadowOpacity:1.0f];

        // Now add the white border
        [[selectedPhotoButton layer] setBorderColor:[[UIColor whiteColor] CGColor]];
        [[selectedPhotoButton layer] setBorderWidth:2.0];

        // Now add the X mark subview
        UIImage *deleteImage = [UIImage imageNamed:@"nocheck_photo.png"];
        UIImageView *deleteMark = [[UIImageView alloc] initWithFrame:CGRectMake(53, -5, 27, 27)];
        deleteMark.contentMode = UIViewContentModeScaleAspectFit;
        [deleteMark setImage:deleteImage];
        [selectedPhotoButton addSubview:deleteMark];
        [deleteMark release];

    I don't understand why the border is appearing above the deleteMark subview. Is there any way to get the intended effect? Thank you!

    Read the article

  • WPF progress bar slows serial port communications 10x... how could that be possible?

    - by D_Guidi
    I know that this could look like a dumb question, but here's my problem. I have a worker dialog that "hides" a BackgroundWorker, so in a worker thread I do my job, I report the progress in the standard way and then I show the results in my WPF program. The dialog contains a simple animated gif and a standard WPF progress bar, and when progress is notified I set the Value property. All looks as usual and works well for any kind of job, like web service calls, db queries, background elaboration and so on.

    For my job we also use many "couplers", card readers that read data from smart cards, which are managed with native C code that accesses the serial port (so, I don't use the .NET SerialPort object). I have some NUnit tests and I read a sample card in 10 seconds, but using my actual program, under the BackgroundWorker and showing my worker dialog, I need 1.30 minutes to do the SAME job.

    I struggled with this problem for days until I decided to remove the worker dialog, and without the dialog I get the same performance as the tests! So I investigated, and it's not the dialog, not the animated gif, but the WPF progress bar! Simply the fact that a progress bar is shown (so, no animation, no Value set called, nothing of nothing) slows serial port communications. Looks incredible? I've tested this behavior and it's exactly what happens.
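
    One hypothesis that fits this description: the Aero ProgressBar template animates a "glow" highlight even in determinate mode, and on a machine that falls back to software rendering that animation can keep the CPU busy enough to starve a tight native serial-port loop. A hedged mitigation sketch (standard WPF APIs; the tier check and the 20 fps cap are illustrative choices, not a guaranteed fix):

        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Animation;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                // Tier 0 means WPF is rendering entirely in software, where animations are CPU-bound.
                if ((RenderCapability.Tier >> 16) == 0)
                {
                    // Cap WPF's animation frame rate (default 60 fps) so the ProgressBar's
                    // built-in glow animation leaves more CPU for the worker and the serial I/O.
                    Timeline.DesiredFrameRateProperty.OverrideMetadata(
                        typeof(Timeline),
                        new FrameworkPropertyMetadata { DefaultValue = 20 });
                }
            }
        }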

    Read the article

  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a Javascript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I can not for the life of me determine why. The following problem code:

    - Works for other larger objects being maintained.
    - Is presently handling a smaller total amount of data than previously worked.
    - Is not colliding with any function or other object definitions.
    - Can (optionally) successfully save the problem storage key as "{}" during code startup.

        this.save = function(table) {
            var tables = this.tables;
            if(table)
                tables = [table];
            for(i in tables) {
                logger.log(this[tables[i]]);
                logger.log(JSON.stringify(this[tables[i]]));
                GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
                logger.log(tables[i] + "_" + this.user + " updated");
                logger.log(GM_getValue(tables[i] + "_" + this.user));
            }
        }

    The problem is consistently reproducible and the logging statements produce the following output in Firebug:

        Object { 54,10 = Object }    // Expansion shows complete contents as expected, but there is one oddity--Firebug
                                     // highlights the array keys in purple instead of the usual black for anonymous objects.
        {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}}    // The correctly parsed string.
        bookmarks_HonoredMule updated
        undefined

    I have tried altering the format of the object keys, to no effect. Further narrowing down the issue is that this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving of the nonempty object truly failed.

    Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem--as previously mentioned, I've already reduced storage usage. Other larger objects still save, and the total number of objects, which was not high already, has further been reduced by an amount greater than the quantity of data I'm attempting to store here.

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community,

    We have a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as the load balancing component with sticky sessions, and JBoss 5 as the application container for the Java application.

    We are inconsistently seeing an issue in our production environment where our AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all involved system components (overall traffic, db load, db process list, load of all clustered application server nodes) nothing points towards a capacity issue which would explain why the calls are being stalled in the AJP queue. Instead all systems appear sufficiently idle. So far, our only remedy to this issue is to restart the appservers and the load balancer, which only occasionally clears the AJP queues.

    We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user although the system is not under a high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk for mod_cluster to resolve the issue?

    Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting I would be more than willing to do so. /Ben

    Read the article

  • Question about FK reference in the collection

    - by Ahmed
    Hi, I have 2 entities, (Person) and (Address), with the following mapping:

        <class name="Adress" table="Adress" lazy="false">
            <id name="Id" column="Id">
                <generator class="native" />
            </id>
            <many-to-one name="Person" class="Person">
                <column name="PersonId" />
            </many-to-one>
        </class>

        <class name="Person" table="Person" lazy="false">
            <id name="PersonId" column="PersonId">
                <generator class="native" />
            </id>
            <property name="Name" column="Name" type="String" not-null="true" />
            <set name="Adresses" lazy="true" inverse="true" cascade="save-update">
                <key>
                    <column name="PersonId" />
                </key>
                <one-to-many class="Adress" />
            </set>
        </class>

    My problem is that when I set Adress.Person to a new Person object, the collection Person.Adresses doesn't update itself. Should I update each end of the association myself so that both sides are updated?

    Another thing: if I update the FK manually, like Adress.PersonId, it doesn't break or change the association. Is this NHibernate's behavior?

    Thanks in advance, I am waiting for your experiences.
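
    For what it's worth, with inverse="true" NHibernate persists the relationship from the Adress.Person side only and never fixes up the in-memory Person.Adresses collection for you, so both ends are normally kept in sync by the domain model itself. A hedged C# sketch (entity shapes are illustrative; the collection type should match whatever your <set> maps to in your NHibernate version):

        using System.Collections.Generic;

        public class Person
        {
            public virtual int PersonId { get; set; }
            public virtual string Name { get; set; }
            public virtual ICollection<Adress> Adresses { get; protected set; }

            public Person()
            {
                Adresses = new HashSet<Adress>();
            }

            // Keep both ends of the bidirectional association consistent in memory;
            // the FK column is still written from the Adress.Person (owning) side.
            public virtual void AddAdress(Adress adress)
            {
                adress.Person = this;
                Adresses.Add(adress);
            }
        }

        public class Adress
        {
            public virtual int Id { get; set; }
            public virtual Person Person { get; set; }
        }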

    Read the article

  • Refactor link to show/hide a table row

    - by abatishchev
    I have a table with a row which can be hidden by the user. It's implemented this way.

    Markup:

        <table>
            <tr>
                <td>
                    <table style="margin-left: auto; text-align: right;">
                        <tr>
                            <td class="stats-hide">
                                <a href="#" onclick="hideStats();">Hide</a>
                            </td>
                            <td class="stats-show" style="display: none;">
                                <a href="#" onclick="showStats();">Show</a>
                            </td>
                        </tr>
                    </table>
                </td>
            </tr>
            <tr class="stats-hide">
                <td>
                    <!-- data -->
                </td>
            </tr>
        </table>

    And jQuery code:

        <script type="text/javascript" language="javascript">
            function hideStats() {
                hideControls(true, $('.stats-hide'));
                hideControls(false, $('.stats-show'));
            }

            function showStats() {
                hideControls(false, $('.stats-hide'));
                hideControls(true, $('.stats-show'));
            }

            function hideControls(value, arr) {
                $(arr).each(function () {
                    if (value) {
                        $(this).hide();
                    } else {
                        $(this).show();
                    }
                });
            }
        </script>

    How can I implement the same behavior with one single link and, probably, one CSS class? My idea is to store a boolean variable somewhere and toggle the controls' visibility relative to this variable. Are there other options?

    Read the article

  • Custom UITableViewCell trouble with UIAccessibility elements

    - by ojreadmore
    No matter what I try, I can't keep my custom UITableViewCell from acting like it should under the default rules for UIAccessibility. I don't want this cell to act like an accessibility container (per se), so following this guide I should be able to make all of my subviews accessible, right?! It says to make each element accessible separately and make sure the cell itself is not accessible.

        - (BOOL)isAccessibilityElement {
            return NO;
        }

        - (NSString *)accessibilityLabel {
            return nil;
        }

        - (NSInteger)accessibilityElementCount {
            return 0;
        }

        // cells use this reuse mechanism
        - (id)initWithStyle:(UITableViewCellStyle)style reuseIdentifier:(NSString *)reuseIdentifier
        {
            if (self = [super initWithStyle:style reuseIdentifier:reuseIdentifier]) {
                [self setIsAccessibilityElement:NO];

                sub1 = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
                [sub1 setAccessibilityLanguage:@"es"];
                [sub1 setIsAccessibilityElement:YES];
                [sub1 setAccessibilityLabel:sub1.text];

                sub2 = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
                [sub2 setAccessibilityLanguage:@"es"];
                [sub2 setIsAccessibilityElement:YES];
                [sub2 setAccessibilityLabel:sub2.text];

    The VoiceOver system reads the contents of the whole cell all at once, even though I'm trying to stop that behavior. I could say

        [sub2 setIsAccessibilityElement:NO];

    but that would make this element entirely unreadable. I want to keep it readable, but not have the whole cell be treated like a container (and assumed to be in the English language). There does not appear to be a lot of information out there on this, so at the very least I'd like to document it.

    Read the article

  • In Java, is there a gain in using interfaces for complex models?

    - by Gnoupi
    The title is hardly understandable, but I'm not sure how to summarize it another way. Any edit to clarify is welcome.

    I have been told, and it was recommended to me, to use interfaces to improve performance, even in a case which doesn't especially call for the regular "interface" role. In this case, the objects are big models (in the MVC meaning), with many methods and fields. The "good use" that has been recommended to me is to create an interface with a unique implementation. There won't be any other class implementing this interface, for sure.

    I have been told that this is better to do, because it "exposes less" (or something close) to the other classes which will use methods from this class, as these objects refer to the object through its interface (all public methods of the implementation being reproduced in the interface).

    This seems quite strange to me, as it looks like a C++ practice (with header files). There I see the point, but in Java? Is there really a point in making an interface for such a unique implementation?

    I would really appreciate some clarification on the topic, so I could justify not following this kind of practice, and the hassle it creates from duplicating all declarations.

    Read the article

  • @dynamic property needs setter with multiple behaviors

    - by ambertch
    I have a class that contains multiple user objects and as such has an array of them as an instance variable:

        NSMutableArray *users;

    The tricky part is setting it. I am deserializing these objects from a server via Objective Resource, and for backend reasons users can only be returned as a long string of UIDs - what I have locally is a separate dictionary of users keyed by UID. Given the string uidString of comma-separated UIDs, I override the default setter and populate the actual user objects:

        @dynamic users;

        - (void)setUsers:(id)uidString
        {
            users = [NSMutableArray arrayWithArray:
                        [[User allUsersDictionary] objectsForKeys:
                            [(NSString*)uidString componentsSeparatedByString:@","]]];
        }

    The problem is this: I now serialize these to the database using SQLitePO, which stores them as the array of user objects, not the original string. So when I retrieve it from the database the setter mistakenly treats this array of user objects as a string! What I actually want is to adjust the setter's behavior depending on whether it gets the object from the DB or over the network.

    I can't just make the getter serialize back into a string without tearing up a lot of code that references this array of user objects, and I tried to detect in the setter whether I have a string or an array coming in:

        if ([uidString respondsToSelector:@selector(addObject)]) {
            // Already an array, so don't do anything - just assign
            users = uidString;
        }

    but no success... so I'm kind of stuck - any suggestions? Thanks in advance!

    Read the article

  • In a combobox, how do I determine the highlighted item (not selected item)?

    - by Harold Bamford
    First, fair warning: I am a complete newbie with C# and WPF.

    I have a ComboBox (editable, searchable) and I would like to be able to intercept the Delete key and remove the currently highlighted item from the list. The behavior I'm looking for is like that of MS Outlook when entering email addresses. When you type a few characters, a dropdown list of potential matches is displayed. If you move to one of these (with the arrow keys) and hit Delete, that entry is permanently removed. I want to do that with an entry in the ComboBox.

    Here is the XAML (simplified):

        <ComboBox x:Name="Directory"
                  KeyUp="Directory_KeyUp"
                  IsTextSearchEnabled="True"
                  IsEditable="True"
                  Text="{Binding Path=CurrentDirectory, Mode=TwoWay}"
                  ItemsSource="{Binding Source={x:Static self:Properties.Settings.Default}, Path=DirectoryList, Mode=TwoWay}" />

    The handler is:

        private void Directory_KeyUp(object sender, KeyEventArgs e)
        {
            ComboBox box = sender as ComboBox;
            if (box.IsDropDownOpen && (e.Key == Key.Delete))
            {
                TrimCombobox("DirectoryList", box.HighlightedItem); // won't compile!
            }
        }

    When using the debugger, I can see that box.HighlightedItem has the value I want, but when I try to put that in code, it fails to compile with:

        System.Windows.Controls.ComboBox' does not contain a definition for 'HighlightedItem'...

    So: how do I access that value? Keep in mind that the item has not been selected. It is merely highlighted as the mouse hovers over it. Thanks for your help.
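
    HighlightedItem is an internal member, which is why the debugger shows it but the compiler rejects it. A hedged workaround sketch: walk the generated ComboBoxItem containers and take the one whose public IsHighlighted property is true (TrimCombobox and the wiring below are only illustrative):

        using System.Windows.Controls;

        public static class ComboBoxHelper
        {
            // Returns the data item whose generated ComboBoxItem is currently highlighted,
            // or null when nothing in the open dropdown is highlighted.
            public static object GetHighlightedItem(ComboBox box)
            {
                for (int i = 0; i < box.Items.Count; i++)
                {
                    var container = box.ItemContainerGenerator.ContainerFromIndex(i) as ComboBoxItem;
                    if (container != null && container.IsHighlighted)
                    {
                        return box.Items[i];
                    }
                }
                return null;
            }
        }

        // Sketch of usage inside the existing handler:
        //   if (box.IsDropDownOpen && e.Key == Key.Delete)
        //   {
        //       object highlighted = ComboBoxHelper.GetHighlightedItem(box);
        //       if (highlighted != null) { TrimCombobox("DirectoryList", highlighted); }
        //   }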

    Read the article

  • The same C# code produces different output in Visio Professional and Premium

    - by user615993
    I have built a simple conversion Add-In, but its behavior is unfortunately different under the different Visio editions (Visio 2010 Professional and Visio 2010 Premium). The Add-In takes a process diagram created with shapes from Stencil_1.vss and creates a new, slightly different process diagram with shapes from Stencil_2.vsd. It loops through a Visio page and, for each shape found, creates a new shape from a new master shape (from Stencil_2.vsd) and drops it onto the new page. Geometry, captions etc. are the same; only the shape appearance is changed. Below is the source diagram.

    When I run the code in Visio 2010 Professional, the swimlane shapes are drawn correctly. When I run the same code from Visio Premium, the swimlane appearance and layout are mismatched.

    Both times I drop the SAME shape ("Swimlane" from the same stencil) onto the page with the SAME code fragment:

        Visio.Master vm = swimlane_stencil.Masters.get_ItemU(@"Swimlane");
        Visio.Shape TargetShape = targetPage.Drop(vm, shape_x, shape_y);

    How could I ensure that the code produces the same (correct) output every time? Must I disable any (Premium) features in the swimlane ShapeSheet?

    Read the article

  • SQL Server - .NET Framework 4 - IIS 7: weird sort from DB to page

    - by ila
    I am experiencing strange behavior when reading a resultset from the database in a calling method. The sort order of the rows is different from what the database should return.

    My farm:
    - database server: SQL Server 2008 on WinServer 2008 64-bit
    - web server: a couple of load-balanced WinServer 2008 64-bit machines running IIS 7

    The application runs in a v4.0 app pool, set to enable 32-bit applications.

    Here's a description of the problem:
    - a stored procedure is called that returns a resultset sorted on a particular column
    - I can see the call to the SP through Profiler; if I run the statement I see the correct sorting
    - the calling page gets the results, and before any further elaboration logs the rows immediately after the SP execution
    - the results are in a completely different order (I cannot even understand if they are sorted in any way)

    Some details on the stored procedure:
    - it is called by code using a SqlDataAdapter
    - it also has an output value (a count of the rows) that is read correctly
    - which sort field is to be used is passed as a parameter
    - it makes use of temp tables to collect data and perform the desired sort

    Any idea on what I could check? The same code and the same database work correctly in a test environment, 32-bit and not load balanced.
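
    One thing worth checking is whether the final SELECT in the procedure carries an explicit ORDER BY; inserting into a temp table in sorted order does not guarantee the order of a later SELECT from it. As a defensive measure on the consuming side, the filled DataTable can also be re-sorted explicitly. A hedged sketch with illustrative procedure, parameter, and column names:

        using System.Data;
        using System.Data.SqlClient;

        // Sketch: fills a table from the stored procedure and imposes the sort locally,
        // so the page no longer depends on the order the rows happen to arrive in.
        // "GetItemsSorted", "@SortField" and the ASC suffix are illustrative choices.
        public static DataView LoadSorted(string connectionString, string sortColumn)
        {
            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.GetItemsSorted", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.AddWithValue("@SortField", sortColumn);

                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(table);
                }
            }

            // Explicit client-side ordering; the syntax follows DataView.Sort rules.
            return new DataView(table) { Sort = sortColumn + " ASC" };
        }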

    Read the article

  • jQuery code works in the console but not in-page

    - by justSteve
    I have a form element defined as:

        <div class="field">
            <div class="name">
                <label for="User_LastName">
                    Last name: <span class="asterisk">*</span></label>
            </div>
            <div class="value">
                <%= Html.TextBox("User.LastName", Model.LastName)%>
                <%= Html.ValidationMessage("User.LastName")%>
            </div>
        </div>

    and a jQuery selector that is supposed to detect when the input gets focus and highlight the parent:

        $("input").focus(function() {
            // watching for an event where an input form comes into focus
            $(this)
                .parent()
                .addClass("curFocus")
                .children("div")
                .toggle();
        });

    If I paste this code into Firebug's console, things work as planned. However, I'm running this from a 'RenderPartial' .NET MVC page. Other jQuery code sitting within the same $(document).ready(function() { block works correctly. The form uses HTML helpers to generate the inputs, which might complicate the process somewhat - but even so... I'm seeing correct behavior when that code is in the console but not in a 'real-time' page. How do I troubleshoot this?

    Read the article

  • ASP.NET 2.0 in a Virtual Directory Trying to Use SQL State Server

    - by user251660
    We have IIS 6 running on a W2003 server. The root web site is running a v1.1 site. Under this site we have a virtual directory running a v2.0 site (with a separate application pool). The web.config for the root site is using SQL as its state server and has a 1.1 SQL state server database installed. The 2.0 virtual application does not need state, and its web.config has no reference to a state server. When we attempt to call the virtual application we receive this error message:

        "Unable to use SQL Server because ASP.NET version 2.0 Session State is not installed on the SQL server. Please install ASP.NET Session State SQL Server version 2.0 or above."

    This issue is currently only occurring on one web server. The rest are able to run the 2.0 virtual application. I also notice that if we call the 2.0 virtual application with the IP address it does not generate the error; however, if we call it with the host header name it generates the error (this behavior occurs only on the one web server with the error; all the others can be called with either the IP or the host header without error). As an additional note, the root and virtual applications are running with SSL.

    My theory is that the virtual 2.0 application is inheriting the 1.1 web.config state server entry from the root, and when it looks at the state server it sees it as a 1.1 version and reports the error that it needs a 2.0 state server. I however cannot understand why the other servers are not behaving in this manner. All of the servers are on the same OS service pack as well as the same version of the .NET Framework. Any ideas? Thanks

    Read the article

  • How is timezone handled in the lifecycle of an ADO.NET + SQL Server DateTime column?

    - by stimpy77
    Using SQL Server 2008. This is a really junior question and I could really use some elaborate information, but the information on Google seems to dance around the topic quite a bit, and it would be nice if there was some detailed elaboration on how this works...

    Let's say I have a datetime column and in ADO.NET I set it to DateTime.UtcNow.

    1) Does SQL Server store DateTime.UtcNow accordingly, or does it offset it again based on the timezone of where the server is installed, and then return it offset-reversed when queried? I think I know that the answer is "of course it stores it without offsetting it again" but want to be certain.

    So then I query for it and cast it from, say, an IDataReader column to a DateTime. As far as I know, System.DateTime has metadata that internally tracks whether it is a UTC DateTime or an offset DateTime, which may or may not cause .ToLocalTime() and .ToUniversalTime() to behave differently depending on this state. So,

    2) Does this casted System.DateTime object already know that it is a UTC DateTime instance, or does it assume that it has been offset?

    Now let's say I don't use UtcNow, I use DateTime.Now, when performing an ADO.NET INSERT or UPDATE.

    3) Does ADO.NET pass the offset to SQL Server, and does SQL Server store DateTime.Now with the offset metadata?

    So then I query for it and cast it from, say, an IDataReader column to a DateTime.

    4) Does this casted System.DateTime object already know that it is an offset time, or does it assume that it is UTC?
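
    For what it's worth, the classic datetime type stores no time-zone information at all, and values read back through a data reader come out with DateTimeKind.Unspecified, so ToLocalTime()/ToUniversalTime() can silently do the wrong thing. A common convention, sketched below with illustrative column and parameter names, is to write UTC and re-stamp the kind on the way out:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public static class UtcDateSketch
        {
            // Write: store UTC so the value is unambiguous regardless of the server's time zone.
            public static void AddCreatedAtParameter(SqlCommand command)
            {
                command.Parameters.Add("@CreatedAt", SqlDbType.DateTime).Value = DateTime.UtcNow;
            }

            // Read: the reader returns Kind == DateTimeKind.Unspecified, because the
            // datetime column carries no offset metadata; re-assert the convention here.
            public static DateTime ReadCreatedAt(IDataReader reader, int ordinal)
            {
                DateTime raw = reader.GetDateTime(ordinal);            // Kind is Unspecified
                return DateTime.SpecifyKind(raw, DateTimeKind.Utc);    // now ToLocalTime() behaves
            }
        }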

    Read the article

  • Python re module becomes 20 times slower when called with more than 100 different regexes

    - by Wiil
    My problem is about parsing log files and removing variable parts on each line to be able to group them. For instance:

        s = re.sub(r'(?i)User [_0-9A-z]+ is ', r"User .. is ", s)
        s = re.sub(r'(?i)Message rejected because : (.*?) \(.+\)', r'Message rejected because : \1 (...)', s)

    I have about 120+ matching rules like those above. I have found no performance issues while searching successively with 100 different regexes. But a huge slowdown comes when applying 101 regexes.

    The exact same behavior happens when replacing my rule set with

        for a in range(100):
            s = re.sub(r'(?i)caught here'+str(a)+':.+', r'( ... )', s)

    It gets 20 times slower when putting range(101) instead:

        # range(100)
        % ./dashlog.py file.bz2
        == Took 2.1 seconds. ==

        # range(101)
        % ./dashlog.py file.bz2
        == Took 47.6 seconds. ==

    Why is this happening? And is there any known workaround? (This happens on Python 2.6.6/2.7.2 on Linux and Windows.)

    Read the article

  • Strange behaviour of Spring transaction support for JPA + Hibernate + @Transactional annotation

    - by abovesun
    I found really strange behavior in a relatively simple use case. Probably I can't understand it because my knowledge of the nature of Spring's @Transactional is not deep enough, but this is quite interesting.

    I have a simple User DAO that extends Spring's JpaDaoSupport class and contains a standard save method:

        @Transactional
        public User save(User user) {
            getJpaTemplate().persist(user);
            return user;
        }

    It was working fine until I added a new method to the same class: User getSuperUser(). This method should return the user with isAdmin == true, and if there is no super user in the db, the method should create one. This is how it looked:

        public User createSuperUser() {
            User admin = null;
            try {
                admin = (User) getJpaTemplate().execute(new JpaCallback() {
                    public Object doInJpa(EntityManager em) throws PersistenceException {
                        return em.createQuery("select u from UserImpl u where u.admin = true").getSingleResult();
                    }
                });
            } catch (EmptyResultDataAccessException ex) {
                User admin = new User("login", "password");
                admin.setAdmin(true);
                save(admin); // THIS IS THE POINT WHERE STRANGE THINGS HAPPEN
            }
            return admin;
        }

    As you can see, the code is straightforward, and I was very confused when I found out that no transaction was created and committed on invocation of the save(admin) method, and no new user was actually created, despite the @Transactional annotation.

    As a result we have this situation: when save() is invoked from outside the UserDAO class, the @Transactional annotation is honored and the user is successfully created; but if save() is invoked from inside another method of the same DAO class, the @Transactional annotation is ignored.

    Here is how I changed the save() method to force it to always create a transaction:

        public User save(User user) {
            getJpaTemplate().execute(new JpaCallback() {
                public Object doInJpa(EntityManager em) throws PersistenceException {
                    em.getTransaction().begin();
                    em.persist(user);
                    em.getTransaction().commit();
                    return null;
                }
            });
            return user;
        }

    As you can see, I manually invoke begin and commit. Any ideas?

    Read the article
