Search Results

Search found 1527 results on 62 pages for 'breaks'.


  • TFS and shared projects in multiple solutions

    - by David Stratton
    Our .NET team works on projects for our company that fall into distinct categories. Some are internal web apps, some are external (publicly facing) web apps, we also have internal Windows applications for our corporate office users, and Windows Forms apps for our retail locations (stores). Of course, because we hate code reuse, we have a ton of code that is shared among the different applications. Currently we're using SVN as our source control, and we've got our repository laid out like this: - = folder, | = Visual Studio Solution -SVN - Internet | Ourcompany.com | Oursecondcompany.com - Intranet | UniformOrdering website | MessageCenter website - Shared | ErrorLoggingModule | RegularExpressionGenerator | Anti-Xss | OrgChartModule etc... So.. The OurCompany.com solution in the Internet folder would have a website project, and it would also include the ErrorLoggingModule, RegularExpressionGenerator, and Anti-Xss projects from the shared directory. Similarly, our UniformOrdering website solution would have each of these projects included in the solution as well. We prefer to have a project reference to a .dll reference because, first of all, if we need to add or fix a function in the ErrorLoggingModule while working on the OurCompany.com website, it's right there. Also, this allows us to build each solution and see if changes to shared code break any other applications. This should work well on a build server as well if I'm correct. In SVN, there is no problem with this. SVN and Visual Studio aren't tied together in the way TFS's source control is. We never figured out how to work this type of structure in TFS when we were using it, because in TFS, the TFS project was always tied to a Visual Studio Solution. The Source Code repository was a child of the TFS Project, so if we wanted to do this, we had to duplicate the Shared code in each TFS project's source code repository. As my co-worker put it, this "breaks every known best practice about code reuse and simplicity". It was enough of a deal breaker for us that we switched to SVN. Now, however, we're faced with truly fixing our development processes, and the Application Lifecycle Management of TFS is pretty close to exactly what we want, and how we want to work. Our one sticking point is the shared code issue. We're evaluating other commercial and open source solutions, but since we're already paying for TFS with our MSDN Subscriptions, and TFS is pretty much exactly what we want, we'd REALLY like to find a way around this issue. Has anybody else faced this and come up with a solution? If you've seen an article or posting on this that you can share with me, that would help as well. As always, I'm open to answers like "You're looking at it all wrong, bonehead, HERE'S the way it SHOULD be done.

    Read the article

  • How to create StackedBarSeries with custom tooltip without losing standard colors

    - by Simon_Weaver
    I have a StackedBarSeries in Silverlight 4 charting (latest release). I have created a DataPointStyle called MyDataPointStyle for a custom tooltip. By itself this breaks the standard palette used for the different bars. I've applied a custom palette - as described in David Anson's blog to the chart. However when I have the DataPointStyle set for my SeriesDefinition objects it does not use this palette. I'm not sure what I'm missing - but David specifically says : ... it enables the use of DynamicResource (currently only supported by the WPF platform) to let users customize their DataPointStyle without inadvertently losing the default/custom Palette colors. (Note: A very popular request!) Unfortunately I'm inadvertently losing these colors - and I can't see why? <chartingToolkit:Chart.Palette> <dataviz:ResourceDictionaryCollection> <ResourceDictionary> <Style x:Key="DataPointStyle" TargetType="Control"> <Setter Property="Background" Value="Blue"/> </Style> </ResourceDictionary> <ResourceDictionary> <Style x:Key="DataPointStyle" TargetType="Control"> <Setter Property="Background" Value="Green"/> </Style> </ResourceDictionary> <ResourceDictionary> <Style x:Key="DataPointStyle" TargetType="Control"> <Setter Property="Background" Value="Red"/> </Style> </ResourceDictionary> </dataviz:ResourceDictionaryCollection> </chartingToolkit:Chart.Palette> <chartingToolkit:Chart.Series> <chartingToolkit:StackedBarSeries> <chartingToolkit:SeriesDefinition IndependentValueBinding="{Binding SKU}" DependentValueBinding="{Binding Qty}" DataPointStyle="{StaticResource MyDataPointStyle}" Title="Regular"/> <chartingToolkit:SeriesDefinition IndependentValueBinding="{Binding SKU}" DependentValueBinding="{Binding Qty}" DataPointStyle="{StaticResource MyDataPointStyle}" Title="FSP Orders"/> <chartingToolkit:StackedBarSeries.IndependentAxis> <chartingToolkit:CategoryAxis Title="SKU" Orientation="Y" FontStyle="Italic" AxisLabelStyle="{StaticResource LeftAxisStyle}"/> </chartingToolkit:StackedBarSeries.IndependentAxis> <chartingToolkit:StackedBarSeries.DependentAxis> <chartingToolkit:LinearAxis Orientation="X" ExtendRangeToOrigin="True" Minimum="0" ShowGridLines="True" /> </chartingToolkit:StackedBarSeries.DependentAxis> </chartingToolkit:StackedBarSeries > </chartingToolkit:Chart.Series> </chartingToolkit:Chart>

    Read the article

  • Using PHP substr() and strip_tags() while retaining formatting and without breaking HTML

    - by Peter
    I have various HTML strings to cut to 100 characters (of the stripped content, not the original) without stripping tags and without breaking the HTML. Original HTML string (288 characters): $content = "<div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div over <div class='nestedDivClass'>there</div> </div> and a lot of other nested <strong><em>texts</em> and tags in the air <span>everywhere</span>, it's a HTML taggy kind of day.</strong></div>"; Standard trim: trimming to 100 characters breaks the HTML, and the stripped content comes to only ~40 characters: $content = substr($content, 0, 100)."..."; /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div ove... */ Stripped HTML: outputs the correct character count but obviously loses the formatting: $content = substr(strip_tags($content), 0, 100)."..."; /* output: With a span over here and a nested div over there and a lot of other nested texts and tags in the ai... */ Partial solution: using HTML Tidy or HTML Purifier to close off the tags outputs clean HTML, but it counts 100 characters of HTML, not 100 characters of displayed content: $content = substr($content, 0, 100)."..."; $tidy = new tidy; $tidy->parseString($content); $tidy->cleanRepair(); /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div ove</div></div>... */ Challenge: to output clean HTML and n characters (excluding the character count of the HTML elements): $content = cutHTML($content, 100); /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div over <div class='nestedDivClass'>there</div> </div> and a lot of other nested <strong><em>texts</em> and tags in the ai</strong></div>... */ Similar questions: How to clip HTML fragments without breaking up tags; Cutting HTML strings without breaking HTML tags.
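
    One possible shape for that cutHTML() helper, sketched with PHP's DOMDocument rather than with regexes. The function name and the 100-character limit come from the question; everything else here is an assumption, and entity/encoding handling is deliberately skipped:

      <?php
      // Walk the parsed fragment, keep tags balanced, and count only text characters.
      function cutHTML($html, $limit) {
          $doc = new DOMDocument();
          @$doc->loadHTML($html);                      // wraps the fragment in <html><body>
          $body = $doc->getElementsByTagName('body')->item(0);
          $remaining = $limit;

          $truncate = function (DOMNode $node) use (&$truncate, &$remaining) {
              // Snapshot the list: children may be removed while looping.
              foreach (iterator_to_array($node->childNodes) as $child) {
                  if ($remaining <= 0) {
                      $node->removeChild($child);      // past the limit: drop the rest
                  } elseif ($child instanceof DOMText) {
                      $text = $child->nodeValue;
                      if (mb_strlen($text) > $remaining) {
                          $child->nodeValue = mb_substr($text, 0, $remaining) . '...';
                      }
                      $remaining -= mb_strlen($text);  // only visible text counts
                  } else {
                      $truncate($child);               // elements: recurse, tags stay closed
                  }
              }
          };
          $truncate($body);

          $out = '';
          foreach ($body->childNodes as $child) {
              $out .= $doc->saveHTML($child);          // PHP 5.3.6+ accepts a node argument
          }
          return $out;
      }

    Because the truncation happens on the DOM rather than on the string, every tag that is kept is also closed, which is the property the Tidy-based partial solution was being used to restore.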

    Read the article

  • Fluid CSS: floating column with max-width and overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk: +----------+ +------+ | text | | text | | | | | | | | | | | | | | | | | | | | | +----------+ +------+ max shrunk What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity: +----------+ +------+ | text | | text | | | | | +----------+--+ +------+------+ | code | | code | +----------+--+ +------+------+ | | | | +----------+ +------+ max shrunk But, max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've reproduced the issue with this bare-minimum scenario: <div style="float: left; max-width: 460px; border: 1px solid red"> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> <pre style="max-width: 700px; border: 1px solid blue"> function foo() { // Lorem ipsum dolor sit amet, consectetur adipisicing elit } </pre> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p> </div> I noticed that doing either of the following brings back the fluidity: Remove the <pre> (doh...) Remove the float: left The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively: +----------+ +------+ | text | | text | +----------+ +------+ +-------------+ +-------------+ | code | | code | +-------------+ +-------------+ +----------+ +------+ +----------+ +------+ max shrunk But this forces me to insert additional closing and opening <div> elements into the post markup which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripple the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode. Update I've done further testing to observe that max-width seems to get ignored when the element has a float: left. I glanced at the W3C box model chapter but couldn't immediately see an explicit mention of this behaviour. Any pointers?

    Read the article

  • XNA - Keyboard text input

    - by Sekhat
    Okay, so basically I want to be able to retrieve keyboard text. Like entering text into a text field or something. I'm only writing my game for windows. I've disregarded using Guide.BeginShowKeyboardInput because it breaks the feel of a self contained game, and the fact that the Guide always shows XBOX buttons doesn't seem right to me either. Yes it's the easiest way, but I don't like it. Next I tried using System.Windows.Forms.NativeWindow. I created a class that inherited from it, and passed it the Games window handle, implemented the WndProc function to catch WM_CHAR (or WM_KEYDOWN) though the WndProc got called for other messages, WM_CHAR and WM_KEYDOWN never did. So I had to abandon that idea, and besides, I was also referencing the whole of Windows forms, which meant unnecessary memory footprint bloat. So my last idea was to create a Thread level, low level keyboard hook. This has been the most successful so far. I get WM_KEYDOWN message, (not tried WM_CHAR yet) translate the virtual keycode with Win32 funcation MapVirtualKey to a char. And I get my text! (I'm just printing with Debug.Write at the moment) A couple problems though. It's as if I have caps lock on, and an unresponsive shift key. (Of course it's not however, it's just that there is only one Virtual Key Code per key, so translating it only has one output) and it adds overhead as it attaches itself to the Windows Hook List and isn't as fast as I'd like it to be, but the slowness could be more due to Debug.Write. Has anyone else approached this and solved it, without having to resort to an on screen keyboard? or does anyone have further ideas for me to try? thanks in advance. note: This is cross posted from the XNA Creators Forums, so if I get an answer there I'll post it here and Vice-Versa Question asked by Jimmy Maybe I'm not understanding the question, but why can't you use the XNA Keyboard and KeyboardState classes? My comment: It's because though you can read keystates, you can't get access to typed text as and how it is typed by the user. So let me further clarify. I want to implement being able to read text input from the user as if they are typing into textbox is windows. The keyboard and KeyboardState class get states of all keys, but I'd have to map each key and combination to it's character representation. This falls over when the user doesn't use the same keyboard language as I do especially with symbols (my double quotes is shift + 2, while american keyboards have theirs somewhere near the return key).

    Read the article

  • C# Design Questions

    - by guazz
    How to approach unit testing of private methods? I have a class that loads Employee data into a database. Here is a sample: public class EmployeeFacade { public Employees EmployeeRepository = new Employees(); public TaxDatas TaxRepository = new TaxDatas(); public Accounts AccountRepository = new Accounts(); //and so on for about 20 more repositories etc. public bool LoadAllEmployeeData(Employee employee) { if (employee == null) throw new Exception("..."); EmployeeRepository emps = new EmployeeRepository(); bool exists = emps.FetchExisting(employee.Id); if (!exists) { emps.AddNew(); } try { emps.Id = employee.Id; emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName; emps.SomeOtherAttribute = ...; } catch {} try { emps.Save(); } catch {} try { LoadorUpdateTaxData(employee.Id, employee.TaxData); } catch {} try { LoadorUpdateAccountData(employee.Id, employee.AccountData); } catch {} ... etc. for about 20 more other employee objects } private bool LoadorUpdateTaxData(int employeeId, TaxData taxData) { if (taxData == null) throw new Exception("..."); ...same format as above but using TaxRepository } private bool LoadorUpdateAccountData(int employeeId, AccountData accountData) { ...same format as above but using AccountRepository } } I am writing an application to take serialised objects (e.g. Employee above) and load the data into the database. I have a few design questions that I would like opinions on: A - I am calling this class "EmployeeFacade" because I am (attempting?) to use the facade pattern. Is it good practice to name the pattern in the class name? B - Is it good to call the concrete entities of my DAL layer classes "Repositories", e.g. "EmployeeRepository"? C - Is using the repositories in this way sensible, or should I create a method on the repository itself to take, say, the Employee and then load the data from there, e.g. EmployeeRepository.LoadAllEmployeeData(Employee employee)? I am aiming for cohesive classes, but this would require the repository to have knowledge of the Employee object, which may not be good. D - Is there any nice way around not having to check if an object is null at the beginning of each method? E - I have the EmployeeRepository, TaxRepository and AccountRepository declared as public for unit testing purposes. These are really private entities, but I need to be able to substitute them with stubs so that they won't write to my database (I overload the Save() method to do nothing). Is there any way around this or do I have to expose them? F - How can I test the private methods - or is this done (something tells me it's not)? G - "emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;" breaks the Law of Demeter, but how do I adjust my objects to abide by the law?
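
    On questions E and F specifically, one commonly suggested direction - sketched here as an illustration, not as the poster's actual design - is to hide the repositories behind interfaces and hand them to the facade through its constructor. Tests can then pass stubs without the fields being public, and the private loaders get exercised indirectly through LoadAllEmployeeData. The interface names below are invented; Employee and TaxData are the question's own types:

      using System;

      // Hypothetical seams for the real repository classes.
      public interface IEmployeeRepository
      {
          bool FetchExisting(int id);
          void AddNew();
          void Save();
      }

      public interface ITaxRepository
      {
          void Save(TaxData taxData);
      }

      public class EmployeeFacade
      {
          private readonly IEmployeeRepository employees;
          private readonly ITaxRepository taxes;

          // Production code passes the real repositories; unit tests pass in-memory stubs.
          public EmployeeFacade(IEmployeeRepository employees, ITaxRepository taxes)
          {
              if (employees == null) throw new ArgumentNullException("employees");
              if (taxes == null) throw new ArgumentNullException("taxes");
              this.employees = employees;
              this.taxes = taxes;
          }

          public bool LoadAllEmployeeData(Employee employee)
          {
              if (employee == null) throw new ArgumentNullException("employee");
              // ... same loading logic as in the question, written against the interfaces,
              // so the behaviour of the private helpers is observable through the stubs.
              return true;
          }
      }

    With this arrangement the private LoadorUpdate* methods usually don't need direct tests: asserting on what the stub repositories received through the public method covers them.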

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things: First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely. Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyR, but the documentation for plyR is not very clear (and I have read both the ggplot2 book and the online plyR documentation. My best graph looks like this, the code to create it follows: the R code I use to get it is the following: library(epicalc) ### recode the variables to factors ### recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ), c(1,2,3,4,5,6,7,8,9, NA), c('Very Interested','Somewhat Interested','Not Very Interested','Not At All interested',NA,NA,NA,NA,NA,NA)) ### Combine recoded variables to a common vector Interest1<-c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit, int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel, int_newhth, int_bapo, int_wopo, int_eupo, int_educ) ### Create a second vector to label the first vector by original variable ### a1<-rep("News about Bangladesh", length(int_newcoun)) a2<-rep("Neighboring Countries", length(int_newneigh)) [...] a17<-rep("Education", length(int_educ)) Interest2<-c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17) ### Create a Weighting vector of the proper length ### Interest.weight<-rep(weight, 17) ### Make and save a new data frame from the three vectors ### Interest.df<-cbind(Interest1, Interest2, Interest.weight) Interest.df<-as.data.frame(Interest.df) write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv') ### Sort the factor levels to display properly ### Interest.df$Interest1<-relevel(Interest$Interest1, ref='Not Very Interested') Interest.df$Interest1<-relevel(Interest$Interest1, ref='Somewhat Interested') Interest.df$Interest1<-relevel(Interest$Interest1, ref='Very Interested') Interest.df$Interest2<-relevel(Interest$Interest2, ref='News about Bangladesh') Interest.df$Interest2<-relevel(Interest$Interest2, ref='Education') [...] Interest.df$Interest2<-relevel(Interest$Interest2, ref='European Politics') detach(Interest) attach(Interest) ### Finally create the graph in ggplot2 ### library(ggplot2) p<-ggplot(Interest, aes(Interest2, ..count..)) p<-p+geom_bar((aes(weight=Interest.weight, fill=Interest1))) p<-p+coord_flip() p<-p+scale_y_continuous("", breaks=NA) p<-p+scale_fill_manual(value = rev(brewer.pal(5, "Purples"))) p update_labels(p, list(fill='', x='', y='')) I'd very much appreciate any tips, tricks or hints. Thanks.
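
    On the first point (percentage tick marks rather than counts), the usual ggplot2 route is position = "fill" plus a percent-formatted axis. A sketch reusing the question's column names, and assuming Interest.df was built with data.frame() so that Interest.weight stays numeric (cbind() as written coerces everything to character):

      library(ggplot2)
      library(scales)   # percent_format()

      # Interest.df: long format, one row per respondent x question, with columns
      # Interest2 (question label), Interest1 (response) and Interest.weight.
      p <- ggplot(Interest.df,
                  aes(x = Interest2, weight = Interest.weight, fill = Interest1)) +
        geom_bar(position = "fill") +                 # every bar rescaled to sum to 1
        coord_flip() +
        scale_y_continuous(labels = percent_format()) +
        scale_fill_brewer(palette = "Purples") +
        labs(x = NULL, y = NULL, fill = NULL)
      print(p)

    The reshaping from the wide int_new* columns into that long format is the kind of step reshape2::melt() handles in one call, which also removes the hand-built a1...a17 label vectors.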

    Read the article

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company - four developers, working on a variety of projects. We've been looking at what we can do as cost-effective methods of process improvement, and an idea came up. Given what we do, we often have single developers working on parts of a system, independently of the other developers. This can have a number of negative effects: A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that will meet the current customer's needs, but will break functionality that other customers depend on. A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development. Other developers might not be aware of how the system has changed, in areas that they have not worked on. We've talked about doing code reviews, as a way of dealing with these issues. But we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, and it takes everybody out of production while the review is being performed. And the benefits of any review we've tried have been minimal. We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the Subversion CommitMonitor tool, and wondering whether it might work as a sort of poor-man's code review. It lists every commit made on the repository, allowing someone to see the changes that have been made, the log messages made for that change, the files that were included in the change, and the specific lines in each file that were changed. Rather than scheduling a meeting, trying to get everybody together to review every change, we could just have every developer review every other developer's commits, at whatever time was convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code that was being checked in, he could discuss it with the developer who did the commit, or more likely, schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture. Anyone else doing anything like this, using commit monitors for such a purpose?

    Read the article

  • Filtering a collection based on filtering rules

    - by Mike
    I have an observable collection of Entities, with each entity having a status added, deleted, modified and cancelled. I have four buttons (toggle) when clicked should filter my collection as below: If I select the button Added, then my collection should contain entities with status added. If I select the button Deleted and Added, then my collection should contain entities with status Deleted AND entities with status Added, none of the rest. If I select the button Deleted,Added and Modified, then my collection should contain entities with status Deleted, Added AND Modified. . . so on. If I unselect one of the buttons, it should remove those entities from the collection with that status. For example if I unselect Deleted, but select Added and Modified, then my collection should contain items with Added and Modified status and NOT Deleted ones. For implementing this I have created a master collection and a filtered collection. The Filter collection gets filtered based on the selections and unselections. The following is my code: private bool _clickedAdded; public bool ClickedAdded { get { return _clickedAdded; } set { _clickedAdded = value; if(!_clickedAdded) FilterAny(typeof(Added)); } } private bool _clickedDeleted; public bool ClickedDeleted { get { return _clickedDeleted; } set { _clickedDeleted = value; if (!_clickedDeleted) FilterAny(typeof(Deleted)); } } private bool _clickedModified; public bool ClickedModified { get { return _clickedModified; } set { _clickedModified = value; if (!_clickedModified) FilterAny(typeof(Modified)); } } private void FilterAny(Type status) { Func<Entity, bool> predicate = entity => entity.Status.GetType() != status; var filteredItems = MasterEntites.Where(predicate); FilteredEntities = new ObservableCollection<Entity>(filteredItems); } This however breaks the above rules - for example if I have all selected, and then I remove Added followed by deleted then it still shows the list of Added, Modified and Cancelled. It should be just Modified and Cancelled in the filtered collection. Can you please help me in solving this issue? Also do I need 2 different list to solve this. Please note that I'm using .NET 3.5.
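
    One way to make the toggles compose correctly is to stop filtering the already-filtered list and instead re-derive it from the master collection every time a button changes. A sketch (member names other than those quoted in the question are invented):

      // Requires System.Linq, System.Collections.Generic and System.Collections.ObjectModel.
      // The idea: every toggle just updates a set of selected status types, and the
      // filtered collection is always rebuilt from MasterEntites, never from itself.
      private readonly HashSet<Type> selectedStatuses = new HashSet<Type>();

      private void SetStatusSelected(Type status, bool isSelected)
      {
          if (isSelected)
              selectedStatuses.Add(status);
          else
              selectedStatuses.Remove(status);
          ApplyFilter();
      }

      private void ApplyFilter()
      {
          var filtered = MasterEntites.Where(e =>
              selectedStatuses.Count == 0                 // nothing toggled: show all (a choice, not a rule from the question)
              || selectedStatuses.Contains(e.Status.GetType()));
          FilteredEntities = new ObservableCollection<Entity>(filtered);
      }

      // Each button setter then becomes a one-liner, e.g.
      //   set { _clickedAdded = value; SetStatusSelected(typeof(Added), value); }

    Because the result is recomputed from MasterEntites on every change, unselecting Added after unselecting Deleted behaves the same no matter what order the buttons were clicked in, and only the one master list plus the derived list are needed.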

    Read the article

  • user generated / user specific functions

    - by pedalpete
    I'm looking for the most elegant and secure method to do the following. I have a calendar, and groups of users. Users can add events to specific days on the calendar, and specify how long each event lasts for. I've had a few requests from users to add the ability for them to define that events of a specific length include a break, of a certain amount of time, or require that a specific amount of time be left between events. For example, if event is 2 hours, include a 20min break. for each event, require 30 minutes before start of next event. The same group that has asked for an event of 2 hours to include a 20 min break, could also require that an event 3 hours include a 30 minute break. In the end, what the users are trying to get is an elapsed time excluding breaks calculated for them. Currently I provide them a total elapsed time, but they are looking for a running time. However, each of these requests is different for each group. Where one group may want a 30 minute break during a 2 hour event, and another may want only 10 minutes for each 3 hour event. I was kinda thinking I could write the functions into a php file per group, and then include that file and do the calculations via php and then return a calculated total to the user, but something about that doesn't sit right with me. Another option is to output the groups functions to javascript, and have it run client-side, as I'm already returning the duration of the event, but where the user is part of more than one group with different rules, this seems like it could get rather messy. I currently store the start and end time in the database, but no 'durations', and I don't think I should be storing the calculated totals in the db, because if a group decides to change their calculations, I'd need to change it throughout the db. Is there a better way of doing this? I would just store the variables in mysql, but I don't see how I can then say to mysql to calculate based on those variables. I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction. If it helps, my table contains eventid, user, group, startDate, startTime, endDate, endTime, type The json for the event which I return to the user is {"eventid":"'.$eventId.'", "user":"'.$userId.'","group":"'.$groupId.'","type":"'.$type.'","startDate":".$startDate.'","startTime":"'.$startTime.'","endDate":"'.$endDate.'","endTime":"'.$endTime.'","durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"} where for example, duration length is 2.5 and duration hours is 2:30.
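
    For what it's worth, one direction that avoids both per-group PHP files and stored totals is to keep the rules themselves as rows in the database and apply them at read time. A rough sketch with an invented break_rules(group_id, min_event_minutes, break_minutes) table:

      <?php
      // "Events of at least min_event_minutes include break_minutes of break."
      // The most specific matching rule for the group wins; no matching rule means no break.
      function effectiveDurationMinutes(PDO $db, $groupId, $eventMinutes) {
          $stmt = $db->prepare(
              'SELECT break_minutes FROM break_rules
                WHERE group_id = :gid AND min_event_minutes <= :len
                ORDER BY min_event_minutes DESC
                LIMIT 1');
          $stmt->execute(array(':gid' => $groupId, ':len' => $eventMinutes));
          $break = (int) $stmt->fetchColumn();   // false becomes 0 when no row matches
          return max(0, $eventMinutes - $break);
      }

    Because the rules live in data rather than code, a group changing its policy is just an UPDATE and nothing stored per event has to be recalculated; the "required gap between events" case can be handled the same way with a second rule column, and the computed running time added to the JSON alongside durationLength.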

    Read the article

  • Custom NSView in NSMenuItem not receiving mouse events

    - by Dennis
    I have an NSMenu popping out of an NSStatusItem using popUpStatusItemMenu. These NSMenuItems show a bunch of different links, and each one is connected with setAction: to the openLink: method of a target. This arrangement has been working fine for a long time. The user chooses a link from the menu and the openLink: method then deals with it. Unfortunately, I recently decided to experiment with using NSMenuItem's setView: method to provide a nicer/slicker interface. Basically, I just stopped setting the title, created the NSMenuItem, and then used setView: to display a custom view. This works perfectly, the menu items look great and my custom view is displayed. However, when the user chooses a menu item and releases the mouse, the action no longer works (i.e., openLink: isn't called). If I just simply comment out the setView: call, then the actions work again (of course, the menu items are blank, but the action is executed properly). My first question, then, is why setting a view breaks the NSMenuItem's action. No problem, I thought, I'll fix it by detecting the mouseUp event in my custom view and calling my action method from there. I added this method to my custom view: - (void)mouseUp:(NSEvent *)theEvent { NSLog(@"in mouseUp"); } No dice! This method is never called. I can set tracking rects and receive mouseEntered: events, though. I put a few tests in my mouseEntered routine, as follows: if ([[self window] ignoresMouseEvents]) { NSLog(@"ignoring mouse events"); } else { NSLog(@"not ignoring mouse events"); } if ([[self window] canBecomeKeyWindow]) { dNSLog((@"canBecomeKeyWindow")); } else { NSLog(@"not canBecomeKeyWindow"); } if ([[self window] isKeyWindow]) { dNSLog((@"isKeyWindow")); } else { NSLog(@"not isKeyWindow"); } And got the following responses: not ignoring mouse events canBecomeKeyWindow not isKeyWindow Is this the problem? "not isKeyWindow"? Presumably this isn't good because Apple's docs say "If the user clicks a view that isn’t in the key window, by default the window is brought forward and made key, but the mouse event is not dispatched." But there must be a way do detect these events. HOW? Adding: [[self window] makeKeyWindow]; has no effect, despite the fact that canBecomeKeyWindow is YES.

    Read the article

  • .NET asmx web services: serialize object property as string property to support versioning

    - by mcliedtk
    I am in the process of upgrading our web services to support versioning. We will be publishing our versioned web services like so: http://localhost/project/services/1.0/service.asmx http://localhost/project/services/1.1/service.asmx One requirement of this versioning is that I am not allowed to break the original wsdl (the 1.0 wsdl). The challenge lies in how to shepherd the newly versioned classes through the logic that lies behind the web services (this logic includes a number of command and adapter classes). Note that upgrading to WCF is not an option at the moment. To illustrate this, let's consider an example with Blogs and Posts. Prior to the introduction of versions, we were passing concrete objects around instead of interfaces. So an AddPostToBlog command would take in a Post object instead of an IPost. // Old AddPostToBlog constructor. public AddPostToBlog(Blog blog, Post post) { // constructor body } With the introduction of versioning, I would like to maintain the original Post while adding a PostOnePointOne. Both Post and PostOnePointOne will implement the IPost interface (they are not extending an abstract class because that inheritance breaks the wsdl, though I suppose there may be a way around that via some fancy xml serialization tricks). // New AddPostToBlog constructor. public AddPostToBlog(Blog blog, IPost post) { // constructor body } This brings us to my question regarding serialization. The original Post class has an enum property named Type. For various cross-platform compatibility issues, we are changing our enums in our web services to strings. So I would like to do the following: // New IPost interface. public interface IPost { object Type { get; set; } } // Original Post object. public Post { // The purpose of this attribute would be to maintain how // the enum currently is serialized even though now the // type is an object instead of an enum (internally the // object actually is an enum here, but it is exposed as // an object to implement the interface). [XmlMagic(SerializeAsEnum)] object Type { get; set; } } // New version of Post object public PostOnePointOne { // The purpose of this attribute would be to force // serialization as a string even though it is an object. [XmlMagic(SerializeAsString)] object Type { get; set; } } The XmlMagic refers to an XmlAttribute or some other part of the System.Xml namespace that would allow me to control the type of the object property being serialized (depending on which version of the object I am serializaing). Does anyone know how to accomplish this?
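
    The XmlMagic attribute in the sketch above doesn't exist, but the effect it describes is usually achieved with a proxy property: mark the real property [XmlIgnore] and expose a companion property whose declared type decides what goes on the wire. Roughly (PostType is an assumed enum name):

      using System.Xml.Serialization;

      public class PostOnePointOne : IPost
      {
          // The property the rest of the code uses; hidden from XmlSerializer.
          [XmlIgnore]
          public object Type { get; set; }

          // What is actually serialized for the 1.1 contract: a plain string element named "Type".
          [XmlElement("Type")]
          public string TypeAsString
          {
              get { return Type == null ? null : Type.ToString(); }
              set { Type = value; }
          }
      }

      public class Post : IPost
      {
          [XmlIgnore]
          public object Type { get; set; }

          // The 1.0 contract keeps emitting the enum, so the original WSDL stays intact.
          [XmlElement("Type")]
          public PostType TypeAsEnum
          {
              get { return (PostType)Type; }
              set { Type = value; }
          }
      }

    Since the concrete classes rather than the interface drive serialization, each version picks its own wire representation while the commands and adapters keep talking to IPost.Type.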

    Read the article

  • jquery addresses and live method

    - by Jay
    //deep linking $.fn.ajaxAnim = function() { $(this).animW(); $(this).html('<div class="load-prog">loading...</div>'); } $("document").ready(function(){ contM = $('#main-content'); contS = $('#second-content'); $(contM).hide(); $(contM).addClass('hidden'); $(contS).hide(); $(contS).addClass('hidden'); function loadURL(URL) { //console.log("loadURL: " + URL); $.ajax({ url: URL, beforeSend: function(){$(contM).ajaxAnim();}, type: "POST", dataType: 'html', data: {post_loader: 1}, success: function(data){ $(contM).html(data); $('.post-content').initializeScroll(); } }); } // Event handlers $.address.init(function(event) { //console.log("init: " + $('[rel=address:' + event.value + ']').attr('href')); }).change(function(event) { evVal = event.value; if(evVal == '/'){return false;} else{ $.ajax({ url: $('[rel=address:' + evVal + ']').attr('href'), beforeSend: function(){$(contM).ajaxAnim();}, type: "POST", dataType: 'html', data: {post_loader: 1}, success: function(data){ $(contM).html(data); $('.post-content').initializeScroll(); }}); } //console.log("change"); }) $('.update-main a, a.update-main').live('click', function(){ loadURL($(this).attr('href')); return false; }); $(".update-second a, a.update-second").live('click', function() { var link = $(this); $.ajax({ url: link.attr("href"), beforeSend: function(){$(contS).ajaxAnim();}, type: "POST", dataType: 'html', data: {post_loader: 1}, success: function(data){ $(contS).html(data); $('.post-content').initializeScroll(); }}); return false; }); }); I'm using jquery addresses to update content while maintaining a useful url. When clicking on links in a main nav, the url is updated properly, but when links are loaded dynamically with ajax, the url address function breaks. I have made 'click' events live, allowing for content to be loaded via dynamically loaded links, but I can't seem to make the address event listener live, but this seems to be the only way to make this work. Is my syntax wrong if I change this : $.address.change(function(event) { to this: $.address.live('change', function(event) { or does the live method not work with this plugin?

    Read the article

  • How do you unit test a unit test?

    - by FlySwat
    I was watching Rob Connerys webcasts on the MVCStoreFront App, and I noticed he was unit testing even the most mundane things, things like: public Decimal DiscountPrice { get { return this.Price - this.Discount; } } Would have a test like: [TestMethod] public void Test_DiscountPrice { Product p = new Product(); p.Price = 100; p.Discount = 20; Assert.IsEqual(p.DiscountPrice,80); } While, I am all for unit testing, I sometimes wonder if this form of test first development is really beneficial, for example, in a real process, you have 3-4 layers above your code (Business Request, Requirements Document, Architecture Document), where the actual defined business rule (Discount Price is Price - Discount) could be misdefined. If that's the situation, your unit test means nothing to you. Additionally, your unit test is another point of failure: [TestMethod] public void Test_DiscountPrice { Product p = new Product(); p.Price = 100; p.Discount = 20; Assert.IsEqual(p.DiscountPrice,90); } Now the test is flawed. Obviously in a simple test, it's no big deal, but say we were testing a complicated business rule. What do we gain here? Fast forward two years into the application's life, when maintenance developers are maintaining it. Now the business changes its rule, and the test breaks again, some rookie developer then fixes the test incorrectly...we now have another point of failure. All I see is more possible points of failure, with no real beneficial return, if the discount price is wrong, the test team will still find the issue, how did unit testing save any work? What am I missing here? Please teach me to love TDD, as I'm having a hard time accepting it as useful so far. I want too, because I want to stay progressive, but it just doesn't make sense to me. EDIT: A couple people keep mentioned that testing helps enforce the spec. It has been my experience that the spec has been wrong as well, more often than not, but maybe I'm doomed to work in an organization where the specs are written by people who shouldn't be writing specs.

    Read the article

  • How to represent different entities that have identical behavior?

    - by Dominik
    I have several different entities in my domain model (animal species, let's say), which have a few properties each. The entities are readonly (they do not change state during the application lifetime) and they have identical behavior (they differ only by the values of their properties). How to implement such entities in code? Unsuccessful attempts: Enums. I tried an enum like this: enum Animals { Frog, Duck, Otter, Fish } And other pieces of code would switch on the enum. However, this leads to ugly switching code, scattering the logic around, and problems with comboboxes. There's no pretty way to list all possible Animals. Serialization works great though. Subclasses. I also thought about an approach where each animal type is a subclass of a common base abstract class. The implementation of Swim() is the same for all Animals, though, so it makes little sense, and serializability is a big issue now. Since we represent an animal type (species, if you will), there should be one instance of the subclass per application, which is hard and weird to maintain when we use serialization. public abstract class AnimalBase { string Name { get; set; } // user-readable double Weight { get; set; } Habitat Habitat { get; set; } public void Swim() { /* swim implementation; the same for all animals but uses the value of Weight */ } } public class Otter: AnimalBase { public Otter() { Name = "Otter"; Weight = 10; Habitat = "North America"; } } // ... and so on Just plain awful. Static fields. This blog post gave me an idea for a solution where each option is a statically defined field inside the type, like this: public class Animal { public static readonly Animal Otter = new Animal { Name="Otter", Weight = 10, Habitat = "North America"} // the rest of the animals... public string Name { get; set; } // user-readable public double Weight { get; set; } public Habitat Habitat { get; set; } public void Swim(); } That would be great: you can use it like enums (AnimalType = Animal.Otter), you can easily add a static list of all defined animals, and you have a sensible place to implement Swim(). Immutability can be achieved by making the property setters protected. There is a major problem, though: it breaks serializability. A serialized Animal would have to save all its properties, and upon deserialization it would create a new instance of Animal, which is something I'd like to avoid. Is there an easy way to make the third attempt work? Any more suggestions for implementing such a model?
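
    One way to keep the third attempt and still survive (de)serialization - a sketch, assuming the names are unique and that persisting just the name is acceptable - is to serialize a token and resolve it back to the canonical static instance on load, so a second Otter is never constructed:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public sealed class Animal
      {
          public static readonly Animal Otter = new Animal("Otter", 10, "North America");
          public static readonly Animal Frog  = new Animal("Frog", 0.2, "Pond");
          // ... the rest of the species ...

          // Handy for combo boxes, and used by the lookup below.
          public static IEnumerable<Animal> All
          {
              get { return new[] { Otter, Frog /*, ... */ }; }
          }

          public string Name { get; private set; }       // user-readable
          public double Weight { get; private set; }
          public string Habitat { get; private set; }    // string only to keep the sketch short

          private Animal(string name, double weight, string habitat)
          {
              Name = name; Weight = weight; Habitat = habitat;
          }

          public void Swim() { /* same for every animal, uses Weight */ }

          // Persist only this token...
          public string ToToken() { return Name; }

          // ...and map it back to the existing instance instead of deserializing a new one.
          public static Animal FromToken(string name)
          {
              return All.Single(a => a.Name == name);
          }
      }

    The surrounding objects then serialize the token (via a small DTO or a proxy property, which also sidesteps XmlSerializer's need for a public parameterless constructor), and every deserialized reference points at the one shared instance.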

    Read the article

  • NSZombieEnabled breaking working code?

    - by Gordon Fontenot
    I have the following method in UIImageManipulation.m: +(UIImage *)scaleImage:(UIImage *)source toSize:(CGSize)size { UIImage *scaledImage = nil; if (source != nil) { UIGraphicsBeginImageContext(size); [source drawInRect:CGRectMake(0, 0, size.width, size.height)]; scaledImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); } return scaledImage; } I am calling it in a different view with: imageFromFile = [UIImageManipulator scaleImage:imageFromFile toSize:imageView.frame.size]; (imageView is a UIImageView allocated earlier) This is working great in my code. I resizes the image perfectly, and throws zero errors. I also don't have anything pop up under build - analyze. But the second I turn on NSZombieEnabled to debug a different EXC_BAD_ACCESS issue, the code breaks. Every single time. I can turn NSZombieEnabled off, code runs great. I turn it on, and boom. Broken. I comment out the call, and it works again. Every single time, it gives me an error in the console: -[UIImage release]: message sent to deallocated instance 0x3b1d600. This error doesn't appear if `NSZombieEnabled is turned off. Any ideas? --EDIT-- Ok, This is killing me. I have stuck breakpoints everywhere I can, and I still cannot get a hold of this thing. Here is the full code when I call the scaleImage method: -(void)setupImageButton { UIImage *imageFromFile; if (object.imageAttribute == nil) { imageFromFile = [UIImage imageNamed:@"no-image.png"]; } else { imageFromFile = object.imageAttribute; } UIImage *scaledImage = [UIImageManipulator scaleImage:imageFromFile toSize:imageButton.frame.size]; UIImage *roundedImage = [UIImageManipulator makeRoundCornerImage:scaledImage :10 :10 withBorder:YES]; [imageButton setBackgroundImage:roundedImage forState:UIControlStateNormal]; } The other UIImageManipulator method (makeRoundCornerImage) shouldn't be causing the error, but just in case I'm overlooking something, I threw the entire file up on github here. It's something about this method though. Has to be. If I comment it out, it works great. If I leave it in, Error. But it doesn't throw errors with NSZombieEnabled turned off ever.

    Read the article

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got: Semi-structured documents are scanned to file. The files are OCR'd to text. The text is parsed into Python objects. The objects are serialized (to SQL, JSON, whatever) for use. The documents are structured like this: HEADER blah blah, Page ### blah Garbage text... 1. Question Text... continued until now. A. Choice text... adsadsf. B. Another Choice... 2. Another Question... I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions like '2' -> 'Z', which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser, a lexer, or something else. This has led me down all kinds of interesting but not-quite-relevant paths. Guidance would be greatly appreciated. Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful. Regarding the structure of the documents, there is no clear visual pattern -- like line breaks or indentation -- with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not reliably work. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or CuneiForm as with the poor quality of the paper documents. Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
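
    A rough illustration of one low-tech angle: widen the anchors with OCR-confusable character classes instead of exact digits, and treat anything between anchors as continuation text. The confusion sets below are guesses, not a tested list:

      import re

      # Digits as OCR sometimes renders them: 0/O, 1/I/l, 2/Z, 5/S, 8/B ...
      FUZZY_DIGIT = r"[0-9OIlZSB]"

      # A question starts a line: a little leading junk allowed, a fuzzy integer,
      # then '.' or ',' and whitespace.
      QUESTION_RE = re.compile(r"^\W{0,3}" + FUZZY_DIGIT + r"{1,3}[.,]\s+(?P<text>\S.*)$")

      # A choice: a single letter A-E (8 covers a misread B), then '.' or ')'.
      CHOICE_RE = re.compile(r"^\W{0,3}(?P<label>[A-E8])[.)]\s+(?P<text>\S.*)$")

      def parse(lines):
          """Group OCR'd lines into question dicts with their choices."""
          questions = []
          for line in lines:
              q = QUESTION_RE.match(line)
              c = CHOICE_RE.match(line)
              if q:
                  questions.append({"question": q.group("text"), "choices": []})
              elif c and questions:
                  questions[-1]["choices"].append(c.group("text"))
              elif questions:
                  # No anchor matched: treat it as a continuation of whatever came last.
                  target = questions[-1]
                  if target["choices"]:
                      target["choices"][-1] += " " + line.strip()
                  else:
                      target["question"] += " " + line.strip()
          return questions

    It is only a heuristic (an '8.' line is ambiguous between a question number and a misread 'B.', for instance), but anchoring on fuzzy classes plus positional rules often gets far enough that edit-distance matching is only needed for the question text itself, not for finding it.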

    Read the article

  • NSXMLParser, Issue with ASCII Character Set

    - by Ansari
    Hi all <Feeds> <channel> <ctitle>YouTube</ctitle> <cdescription>YouTube - Recently added videos</cdescription> <items> <recentlyAdded> <item> <serverItemId>1</serverItemId> <title>Fan Video CARS</title> <author>mikar1</author> <guid isPermaLink='false'></guid> <link>http://www.youtube.com/watch?v=y7ssHOBFvGk&amp;feature=youtube_gdata</link> <pubDate></pubDate> <description> <descriptionTitle>Fan Video CARS</descriptionTitle> <descriptionText>THE REALSONG OF THIS VIDEOS IS REAL GONE, BUT FOR COPYRIGHTS RASONS.....YOUTUBE FORCE ME A CHANGE THE SONG :s Un pequeño video, de la pelicula Cars!</descriptionText> <added></added> <airDate></airDate> <duration></duration> <Views></Views> <ratings>4.340909</ratings> <From></From> </description> <thumbnail> <height>100</height> <width>100</width> <url>http://i.ytimg.com/vi/y7ssHOBFvGk/2.jpg</url> </thumbnail> </item> </recentlyAdded> </items> </channel> I am using NSXMLParser, and when it reaches the it blows up. It breaks the text to pieces "THE REALSONG OF THIS VIDEOS IS REAL GONE, BUT FOR COPYRIGHTS RASONS.....YOUTUBE FORCE ME A CHANGE THE SONG :s Un peque" And next should be "ño" but it just quit the parsing there and further tags are being handled. :( It always does with the ISO 8859 1 Character cames in ) Any quick idea ??? Thanks in Advance ..........
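
    In case this is the usual NSXMLParser gotcha rather than a real encoding failure: the parser is allowed to deliver one element's text across several parser:foundCharacters: calls (entity references and internal buffer boundaries are typical split points), so the delegate has to accumulate the pieces instead of taking the first chunk as the whole value. Roughly, with currentText as an assumed NSMutableString property:

      // In the NSXMLParser delegate: build up element text across multiple callbacks.
      - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
        namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
          attributes:(NSDictionary *)attributeDict
      {
          self.currentText = [NSMutableString string];
      }

      - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string
      {
          [self.currentText appendString:string];     // may fire several times per element
      }

      - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
        namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
      {
          if ([elementName isEqualToString:@"descriptionText"]) {
              // Only here is the full description, accents included, guaranteed to be complete.
              NSLog(@"%@", self.currentText);
          }
      }

    If the parsing really does stop (parserDidEndDocument never fires), parser:parseErrorOccurred: and [parser parserError] will say why; a truncated string on its own is normally just the multiple-callback behaviour above.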

    Read the article

  • PulpCore music playback - loop sound and animate volume

    - by Peter Perhác
    I have been experimenting with PulpCore, trying to create my own tower defence game (not-playable yet), and I am enjoying it very much I ran into a problem that I can't quite figure out. I extended PulpCore with the JOrbis thing to allow OGG files to be played. Works fine. However, pulpCore seems to have a problem with looping the sound WHILE animating the volume level. I tried this with wav file too, to make sure it isn't jOrbis that breaks it. The code is like this: Sound bgMusic = Sound.load("music/music.ogg"); Playback musicPlayback; ... musicVolume = new Fixed(0.75); musicPlayback = bgMusic.loop(musicVolume); //TODO figure out why it's NOT looping when volume is animated // musicVolume.animate(0, musicVolume.get(), FADE_IN_TIME); This code, for as long as the last line is commented out, plays the music.ogg again and again in an endless loop (which I can stop by calling stop on the Playback object returned from loop(). However, I would like the music to fade in smoothly, so following the advice of the PulpCore API docs, I added the last line which will create the fade-in but the music will only play once and then stop. I wonder why is that? Here is a bit of the documentation: Playback pulpcore.sound.Sound.loop(Fixed level) Loops this sound clip with the specified volume level (0.0 to 1.0). The level may have a property animation attached. Parameters: level Returns: a Playback object for this unique sound playback (one Sound can have many simultaneous Playback objects) or null if the sound could not be played. So what could be the problem? I repeat, with the last line, the sound fades in but doesn't loop, without it it loops but starts with the specified 0.75 volume level. Why can't I animate the volume of the looped music playback? What am I doing wrong? Anyone has any experience with pulpCore and has come across this problem? Anyone could please download PulpCore and try to loop music which fades-in (out)? note: I need to keep a reference to the Playback object returned so I can kill music later.

    Read the article

  • Have problems designing input and output for a calculator app

    - by zoul
    Hello! I am writing a button calculator. I have the code split into a model, a view and a controller. The model knows nothing about formatting; it is only concerned with numbers. All formatting is done in the view. The model gets its input as keypresses; each keypress is part of an enum: typedef enum { kButtonUnknown = 0, kButtonMemoryClear = 100, kButtonMemoryPlus = 112, kButtonMemoryMinus = 109, kButtonMemoryRecall = 114, kButtonClear = 99, ... }; When the user presses a button (say 1), the model receives a button code (kButtonNum1), adds the corresponding number to a string input buffer ("1") and updates the numeric output value (1.0). The controller then passes the numeric output value to the view, which formats it (1). This is all plain, simple and clean, but it does not really work. The problem is that when the user enters part of a number (say 0.00, on the way to entering 0.001), the input does not survive the trip through the model to the view, and the display says 0 instead of 0.00. I know why this happens ("0.00"::string parses to 0::double and that gets formatted as 0). What I don't know is how to design the calculator so that the code stays clean and simple and the numbers show up on screen exactly as the user types them. I've already come up with some kind of solution, but it's essentially a hack and breaks the beautiful and simple flow from the calculator model to the display. Ideas? My current solution keeps track of the calculator state. If the calculator is building a number, I take the calculator input buffer (a string) and directly set the display contents (also a string). Otherwise I take the proper path, i.e. take the numeric calculator output, pass it to the view as a double, and the view uses its internal formatter to create a string for the display. Example input:

      input | display | mode
      ------+---------+------------
      0     | 0       | from string
      0.    | 0.      | from string
      0.0   | 0.0     | from string
      0.0+  | 0       | from number

    This is ugly: (1) the calculator has to expose its input buffer and state; (2) the view has to expose its display and allow setting its contents directly using a string; (3) I have to duplicate some of the formatting code to format the string I got from the calculator input buffer - if the user enters 12345.000, I have to display 12,345.000, and therefore I've got to have commification code for strings. Yuck.

    Read the article

  • How to get jQuery to work with Prototype

    - by thinkfuture
    Ok so here is the situation. Been pulling my hair out on this one. I'm a noob at this. Only been using rails for about 6 weeks. I'm using the standard setup package, and my code leverages prototype helpers heavily. Like I said, noob ;) So I'm trying to put in some jQuery effects, like PrettyPhoto. But what happens is that when the page is first loaded, PrettyPhoto works great. However, once someone uses a Prototype helper, like a link created with link_to_remote, Prettyphoto stops working. I've tried jRails, all of the fixes proposed on the JQuery site to stop conflicts... http://docs.jquery.com/Using_jQuery_with_Other_Libraries ...even done some crazy things likes renaming all of the $ in prototype.js to $$$ to no avail. Either the prototype helpers break, or jQuery breaks. Seems nothing I do can get these to work together. Any ideas? Here is part of my application.html.erb <%= javascript_include_tag 'application' %> <%= javascript_include_tag 'tooltip' %> <%= javascript_include_tag 'jquery' %> <%= javascript_include_tag 'jquery-ui' %> <%= javascript_include_tag "jquery.prettyPhoto" %> <%= javascript_include_tag 'prototype' %> <%= javascript_include_tag 'scriptalicious' %> </head> <body> <script type="text/javascript" charset="utf-8"> jQuery(document).ready( function() { jQuery("a[rel^='prettyPhoto']").prettyPhoto(); }); </script> If I put prototype before jquery, the prototype helpers don't work If I put the noconflict clause in, neither works. Thanks in advance! Chris BTW: when I try this, from the jQuery site: <script> jQuery.noConflict(); // Use jQuery via jQuery(...) jQuery(document).ready(function(){ jQuery("div").hide(); }); // Use Prototype with $(...), etc. $('someid').hide(); </script> my page disappears!
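
    For anyone fighting the same combination: the arrangement that usually works is Prototype first, jQuery second, jQuery.noConflict() immediately after both, jQuery code written against the jQuery name (or a wrapped $), and Prototype left to own the bare $. The snippet at the bottom of the question most likely blanks the page because $('someid').hide() runs outside any DOM-ready handler against an id that doesn't exist. A sketch of the layout head and init script, reusing the question's include names:

      <%= javascript_include_tag 'prototype' %>
      <%= javascript_include_tag 'scriptalicious' %>
      <%= javascript_include_tag 'jquery' %>
      <%= javascript_include_tag 'jquery-ui' %>
      <%= javascript_include_tag 'jquery.prettyPhoto' %>
      <%= javascript_include_tag 'application' %>

      <script type="text/javascript" charset="utf-8">
        jQuery.noConflict();                 // hand $ back to Prototype

        jQuery(function($) {                 // inside this callback, $ is jQuery again
          $("a[rel^='prettyPhoto']").prettyPhoto();
        });

        // Prototype (and link_to_remote's generated code) keeps the global $
        document.observe('dom:loaded', function() {
          // $('someid').hide();             // safe here, once the element exists
        });
      </script>

    If prettyPhoto still dies after a link_to_remote update, that is a separate issue: the anchors inserted by the Ajax response are new elements, so prettyPhoto() has to be re-run on them in the request's complete callback (the original binding only covered elements present at page load).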

    Read the article

  • Installing my sdist from PyPI puts the files in the wrong places

    - by Tartley
    Hey. My problem is that when I upload my Python package to PyPI, and then install it from there using pip, my app breaks because it installs my files into completely different locations than when I simply install the exact same package from a local sdist. Installing from the local sdist puts files on my system like this: /Python27/ Lib/ site-packages/ gloopy-0.1.alpha-py2.7.egg/ (egg and install info files) data/ (images and shader source) doc/ (html) examples/ (.py scripts that use the library) gloopy/ (source) This is much as I'd expect, and works fine (e.g. my source can find my data dir, because they lie next to each other, just like they do in development.) If I upload the same sdist to PyPI and then install it from there, using pip, then things look very different: /Python27/ data/ (images and shader source) doc/ (html) Lib/ site-packages/ gloopy-0.1.alpha-py2.7.egg/ (egg and install info files) gloopy/ (source files) examples/ (.py scripts that use the library) This doesn't work at all - my app can't find its data files, plus obviously it's a mess, polluting the top-level /python27 directory with all my junk. What am I doing wrong? How do I make the pip install behave like the local sdist install? Is that even what I should be trying to achieve? Details I have setuptools installed, and also distribute, and I'm calling distribute_setup.use_setuptools() WindowsXP, Python2.7. My development directory looks like this: /gloopy /data (image files and GLSL shader souce read at runtime) /doc (html files) /examples (some scripts to show off the library) /gloopy (the library itself) My MANIFEST.in mentions all the files I want to be included in the sdist, including everything in the data, examples and doc directories: recursive-include data *.* recursive-include examples *.py recursive-include doc/html *.html *.css *.js *.png include LICENSE.txt include TODO.txt My setup.py is quite verbose, but I guess the best thing is to include it here, right? I also includes duplicate references to the same data / doc / examples directories as are mentioned in the MANIFEST.in, because I understand this is required in order for these files to be copied from the sdist to the system during install. 
NAME = 'gloopy' VERSION= __import__(NAME).VERSION RELEASE = __import__(NAME).RELEASE SCRIPT = None CONSOLE = False def main(): import sys from pprint import pprint from setup_utils import distribute_setup from setup_utils.sdist_setup import get_sdist_config distribute_setup.use_setuptools() from setuptools import setup description, long_description = read_description() config = dict( name=name, version=version, description=description, long_description=long_description, keywords='', packages=find_packages(), data_files=[ ('examples', glob('examples/*.py')), ('data/shaders', glob('data/shaders/*.*')), ('doc', glob('doc/html/*.*')), ('doc/_images', glob('doc/html/_images/*.*')), ('doc/_modules', glob('doc/html/_modules/*.*')), ('doc/_modules/gloopy', glob('doc/html/_modules/gloopy/*.*')), ('doc/_modules/gloopy/geom', glob('doc/html/_modules/gloopy/geom/*.*')), ('doc/_modules/gloopy/move', glob('doc/html/_modules/gloopy/move/*.*')), ('doc/_modules/gloopy/shapes', glob('doc/html/_modules/gloopy/shapes/*.*')), ('doc/_modules/gloopy/util', glob('doc/html/_modules/gloopy/util/*.*')), ('doc/_modules/gloopy/view', glob('doc/html/_modules/gloopy/view/*.*')), ('doc/_static', glob('doc/html/_static/*.*')), ('doc/_api', glob('doc/html/_api/*.*')), ], classifiers=[ 'Development Status :: 1 - Planning', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Operating System :: Microsoft :: Windows', 'Programming Language :: Python :: 2.7', ], # see classifiers http://pypi.python.org/pypi?:action=list_classifiers ) config.update(dict( author='Jonathan Hartley', author_email='[email protected]', url='http://bitbucket.org/tartley/gloopy', license='New BSD', ) ) if '--verbose' in sys.argv: pprint(config) setup(**config) if __name__ == '__main__': main()
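
    The symptom described here matches the documented behaviour of data_files: relative paths are installed relative to the installation prefix (sys.prefix), which is presumably why the local egg install happens to keep everything together while the pip install from PyPI scatters data/, doc/ and examples/ under C:\Python27. A sketch of the usual alternative - move the data under the gloopy package itself and declare it as package_data (the directory move is my assumption, not something the project already does):

      from setuptools import setup, find_packages

      setup(
          name='gloopy',
          version='0.1.alpha',
          packages=find_packages(),
          # Installed inside the gloopy package directory, so the files always sit
          # next to the code, wherever site-packages ends up being.
          package_data={
              'gloopy': [
                  'data/shaders/*.*',
                  'doc/html/*.*',
                  'doc/html/_static/*.*',
                  'examples/*.py',
              ],
          },
          include_package_data=True,   # honour MANIFEST.in for sdists
          zip_safe=False,              # keep the data as real files on disk
      )

    The code can then locate its data relative to __file__ (or through pkg_resources.resource_filename) instead of assuming a fixed prefix, which is what makes the app work identically for the sdist and the PyPI install.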

    Read the article

  • Best Practice: Legitimate Cross-Site Scripting

    - by Ryan
    While cross-site scripting is generally regarded as negative, I've run into several situations where it's necessary. I was recently working within the confines of a very limiting content management system. I needed to include database code within the page, but the hosting server didn't have anything usable available. I set up a couple barebones scripts on my own server, originally thinking that I could use AJAX to import the contents of my scripts directly into the template of the CMS (thus retaining dynamic images, menu items, CSS, etc.). I was wrong. Due to the limitations of XMLHttpRequest objects, it's not possible to grab content from a different domain. So I thought "iFrame" - even though I'm not a fan of frames, I thought that I could create a frame that matched the width and height of the content, so that it would appear native. Again, I was blocked by cross-site scripting "protections." While I could indeed load a remote file into the iFrame, I couldn't execute JavaScript to modify its size on either the host page or inside the loaded page. In this particular scenario, I wasn't able to point a subdomain to my server. I also couldn't create a script on the CMS server that could proxy content from my server, so my last thought was to use a remote JavaScript. A remote JavaScript works. It breaks when the user has JavaScript disabled, which is a downside; but it works. The "problem" I was having with using a remote JavaScript was that I had to use the JS function document.write() to output any content. Any output that isn't JS causes script errors. In addition to using document.write() for every line, you also have to ensure that the content is escaped - or else you end up with more script errors. My solution was as follows: My script received a GET parameter ("page") and then looked for the file ({$page}.php), and read the contents into a variable. However, I had to use awkward buffering techniques in order to actually execute the included scripts (for things like database interaction) then strip the final content of all line break characters ("\n") followed by escaping all required characters. The end result is that my original script (which outputs JavaScript) accesses seemingly "standard" scripts on my server and converts their standard output to JavaScript for displaying within the CMS template. While this solution works, it seems like there may be a better way to accomplish the same thing. What is the best way to make cross-site scripting work specifically for the purpose of including content from a completely different domain?
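
    For the record, the remote-JavaScript workaround described above is essentially hand-rolled JSONP, and it gets much less awkward if the remote PHP script buffers its normal output, json_encode()s it once, and wraps it in a callback call instead of emitting document.write() line by line. On the CMS side that looks roughly like this (names and URL invented):

      <!-- Placeholder the remote content will be injected into -->
      <div id="remote-content">Loading...</div>

      <script type="text/javascript">
        // The remote script calls this with the fully rendered HTML as one string.
        function showRemoteContent(html) {
          document.getElementById('remote-content').innerHTML = html;
        }
      </script>

      <!-- The PHP endpoint buffers its usual output and emits a single statement:
           showRemoteContent("...json_encode()d HTML...");                        -->
      <script type="text/javascript"
              src="http://example.com/content.php?page=news&callback=showRemoteContent"></script>

    json_encode() takes care of the escaping and the stripped newlines, so the remote scripts can stay ordinary HTML-producing PHP; the JavaScript-disabled case remains a limitation, as noted above.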

    Read the article

  • Sharepoint web part stops working because of Resources.en-US.resx file

    - by Eric C
    I've been developing a Sharepoint web part, which had been working fine upon deployment. The web part has been developed with WSP Builder, packaged up and then deployed via stsadm. The web part has been deployed tens, if not a hundred times to the dev box with no problems. Now, the web part throws an error which breaks the page it's on: Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [NullReferenceException: Object reference not set to an instance of an object.] NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.HandleException(Exception ex) +62 NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.OnLoad(EventArgs e) +214 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 When looking through my Sharepoint logs, I find these errors repeated over and over which correspond to the time the web part was attempted to be loaded: 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 72kg High (#2: Cannot open "Resources.en-US.resx": no such file or folder.) 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e26 Medium Failed to open the language resource for Fea367b94a9-4a15-42ba-b4a2-32420363e018 keyfile Resources. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "XomlUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'XomlUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "RulesUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'RulesUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". I've retracted the web part manually through Solution Management, retracted through stsadm, checked for the existence of the resource file, which is nowhere to be found. I'm pretty much at a loss to why this happened or how to resolve it.

    Read the article

  • Problem with type coercion and string concatenation in JavaScript in Greasemonkey script on Firefox

    - by Yi Jiang
    I'm creating a GreaseMonkey script to improve the user interface of the 10k tools Stack Overflow uses. I have encountered an unreproducible and frankly bizarre problem that has confounded me and the others in the JavaScript room on SO Chat. We have yet to find the cause after several lengthy debugging sessions. The problematic script can be found here. Source - Install The problem occurs at line 85, the line after the 'vodoo' comment: return (t + ' (' + +(+f.offensive + +f.spam) + ')'); It might look a little weird, but the + in front of the two variables and the inner bracket is for type coercion, the inner middle + is for addition, and the other ones are for concatenation. Nothing special, but observant reader might note that type coercion on the inner bracket is unnecessary, since both are already type coerced to numbers, and type coercing result is useless when they get concatenated into a string anyway. Not so! Removing the + breaks the script, causing f.offensive and f.spam to be concatenated instead of added together. Adding further console.log only makes things more confusing: console.log(f.offensive + f.spam); // 50 console.log('' + (+f.offensive + +f.spam)); // 5, but returning this yields 50 somehow console.log('' + (+f.offensive + +f.spam) + ''); // 50 Source: http://chat.stackoverflow.com/transcript/message/203261#203261 The problem is that this is unreproducible - running scripts like console.log('a' + (+'3' + +'1') + 'b'); in the Firebug console yields the correct result, as does (function(){ return 'a' + (+'3' + +'1') + 'b'; })(); Even pulling out large chunks of the code and running them in the console does not reproduce this bug: $('.post-menu a[id^=flag-post-]').each(function(){ var f = {offensive: '4', spam: '1'}; if(f){ $(this).text(function(i, t){ // Vodoo - please do not remove the '+' in front of the inner bracket return (t + ' (' + +(+f.offensive + +f.spam) + ')'); }); } }); Tim Stone in the chatroom has reproduction instruction for those who are below 10k. This bug only appears in Firefox - Chrome does not appear to exhibit this problem, leading me to believe that this may be a problem with either Firefox's JavaScript engine, or the Greasemonkey add-on. Am I right? I can be found in the JavaScript room if you want more detail and/or want to discuss this.

    Read the article
