Search Results

Search found 41235 results on 1650 pages for 'source control bindings'.


  • Windows Forms: Enable/Disable WS_CLIPCHILDREN

    - by Agnel Kurian
    How do I turn on/off the WS_CLIPCHILDREN window style in a Windows Forms parent control? I would like to display some text on top of the child control after it has painted. In my parent control, this is what I have:

        class Parent : public Control {
            void Parent::OnPaint(PaintEventArgs ^e) {
                Control::OnPaint(e);
                // parent draws here
                // some drawing should happen over the child windows
                // in other words, do not clip child window regions
            }
        };

    On checking with Spy++ I find that the parent has the WS_CLIPCHILDREN window style enabled by default. What is the Windows Forms way to turn this off? Note: Sample code is in C++/CLI but I have tagged this C# for visibility... language is immaterial here. Feel free to translate the code to C#.
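
    One way to influence that style from managed code is to override the control's CreateParams and clear the WS_CLIPCHILDREN bit before the handle is created. A minimal C# sketch (the 0x02000000 constant is the documented WS_CLIPCHILDREN value; the class name is illustrative):

        using System.Windows.Forms;

        class ParentControl : Control
        {
            private const int WS_CLIPCHILDREN = 0x02000000;

            protected override CreateParams CreateParams
            {
                get
                {
                    // Drop WS_CLIPCHILDREN so the parent's painting is not clipped
                    // to exclude the child window regions.
                    CreateParams cp = base.CreateParams;
                    cp.Style &= ~WS_CLIPCHILDREN;
                    return cp;
                }
            }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                // Drawing here may now overlap child control areas.
            }
        }

    Note that child controls still paint after the parent, so they can repaint over whatever the parent draws; clearing the style alone may not be enough for text that must stay on top.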

  • Using Regex, how can I remove certain characters from the inside of tags in a string of html?

    - by Iain Fraser
    Suppose I have a string of html that contains a bunch of control characters and I want to remove the control characters from inside tags only, leaving the characters outside the tags alone. For example (here the control character is the numeral "1"):

    Input:

        The quick 1<strong>orange</strong> lemming <sp11a1n 1class1='jumpe111r'11>jumps over</span> 1the idle 1frog

    Desired output:

        The quick 1<strong>orange</strong> lemming <span class='jumper'>jumps over</span> 1the idle 1frog

    So far I can match tags which contain the control character but I can't remove them in one regex. I guess I could perform another regex on my matches, but I'd really like to know if there's a better way. My regex (bear in mind this one only matches tags which contain the control character):

        <(([^>])*?`([^>])*?)*?>

    Thanks very much for your time and consideration. Iain Fraser
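
    One pass can be enough if each tag is matched as a whole and the control character is stripped inside the match with a MatchEvaluator. A C# sketch, assuming the control character is the literal "1" from the example (substitute the real character):

        using System.Text.RegularExpressions;

        static class TagCleaner
        {
            public static string RemoveControlCharInsideTags(string html, string controlChar)
            {
                // Match each tag, then delete the control character only within that match;
                // text outside the tags is left untouched.
                return Regex.Replace(html, "<[^>]*>", m => m.Value.Replace(controlChar, ""));
            }
        }

    For the sample input, RemoveControlCharInsideTags(input, "1") would produce the desired output while leaving the stray "1"s outside the tags alone.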

  • How to add correct cancellation when downloading a file with the example in the samples of the new P

    - by Mike
    Hello everybody, I have downloaded the last samples of the Parallel Programming team, and I don't succeed in adding correctly the possibility to cancel the download of a file. Here is the code I ended to have: var wreq = (HttpWebRequest)WebRequest.Create(uri); // Fire start event DownloadStarted(this, new DownloadStartedEventArgs(remoteFilePath)); long totalBytes = 0; wreq.DownloadDataInFileAsync(tmpLocalFile, cancellationTokenSource.Token, allowResume, totalBytesAction => { totalBytes = totalBytesAction; }, readBytes => { Log.Debug("Progression : {0} / {1} => {2}%", readBytes, totalBytes, 100 * (double)readBytes / totalBytes); DownloadProgress(this, new DownloadProgressEventArgs(remoteFilePath, readBytes, totalBytes, (int)(100 * readBytes / totalBytes))); }) .ContinueWith( (antecedent ) => { if (antecedent.IsFaulted) Log.Debug(antecedent.Exception.Message); //Fire end event SetEndDownload(antecedent.IsCanceled, antecedent.Exception, tmpLocalFile, 0); }, cancellationTokenSource.Token); I want to fire an end event after the download is finished, hence the ContinueWith. I slightly changed the code of the samples to add the CancellationToken and the 2 delegates to get the size of the file to download, and the progression of the download: return webRequest.GetResponseAsync() .ContinueWith(response => { if (totalBytesAction != null) totalBytesAction(response.Result.ContentLength); response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct); }, ct); I had to add the call to the Wait function, because if I don't, the method exits and the end event is fired too early. Here are the modified method extensions (lot of code, apologies :p) public static Task WriteAllBytesAsync(this Stream stream, string filePath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null) { if (stream == null) throw new ArgumentNullException("stream"); // Copy from the source stream to the memory stream and return the copied data return stream.CopyStreamToFileAsync(filePath, ct, resumeDownload, progressAction); } public static Task CopyStreamToFileAsync(this Stream source, string destinationPath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null) { if (source == null) throw new ArgumentNullException("source"); if (destinationPath == null) throw new ArgumentNullException("destinationPath"); // Open the output file for writing var destinationStream = FileAsync.OpenWrite(destinationPath); // Copy the source to the destination stream, then close the output file. return CopyStreamToStreamAsync(source, destinationStream, ct, progressAction).ContinueWith(t => { var e = t.Exception; destinationStream.Close(); if (e != null) throw e; }, ct, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Current); } public static Task CopyStreamToStreamAsync(this Stream source, Stream destination, CancellationToken ct, Action<long> progressAction = null) { if (source == null) throw new ArgumentNullException("source"); if (destination == null) throw new ArgumentNullException("destination"); return Task.Factory.Iterate(CopyStreamIterator(source, destination, ct, progressAction)); } private static IEnumerable<Task> CopyStreamIterator(Stream input, Stream output, CancellationToken ct, Action<long> progressAction = null) { // Create two buffers. One will be used for the current read operation and one for the current // write operation. We'll continually swap back and forth between them. 
byte[][] buffers = new byte[2][] { new byte[BUFFER_SIZE], new byte[BUFFER_SIZE] }; int filledBufferNum = 0; Task writeTask = null; int readBytes = 0; // Until there's no more data to be read or cancellation while (true) { ct.ThrowIfCancellationRequested(); // Read from the input asynchronously var readTask = input.ReadAsync(buffers[filledBufferNum], 0, buffers[filledBufferNum].Length); // If we have no pending write operations, just yield until the read operation has // completed. If we have both a pending read and a pending write, yield until both the read // and the write have completed. yield return writeTask == null ? readTask : Task.Factory.ContinueWhenAll(new[] { readTask, writeTask }, tasks => tasks.PropagateExceptions()); // If no data was read, nothing more to do. if (readTask.Result <= 0) break; readBytes += readTask.Result; if (progressAction != null) progressAction(readBytes); // Otherwise, write the written data out to the file writeTask = output.WriteAsync(buffers[filledBufferNum], 0, readTask.Result); // Swap buffers filledBufferNum ^= 1; } } So basically, at the end of the chain of called methods, I let the CancellationToken throw an OperationCanceledException if a Cancel has been requested. What I hoped was to get IsFaulted == true in the appealing code and to fire the end event with the canceled flags and the correct exception. But what I get is an unhandled exception on the line response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct); telling me that I don't catch an AggregateException. I've tried various things, but I don't succeed to make the whole thing work properly. Does anyone of you have played enough with that library and may help me? Thanks in advance Mike
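
    For what it's worth, the unhandled AggregateException is the normal behaviour of Task.Wait: any fault in the inner task is wrapped in an AggregateException, and Wait(ct) can also throw OperationCanceledException when the token is signalled. One hedged way to keep the wait without letting either escape is to guard it; "downloadTask" below is an illustrative stand-in for the task returned by the WriteAllBytesAsync chain:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        static class DownloadWaitSketch
        {
            public static void WaitObservingCancellation(Task downloadTask, CancellationToken ct)
            {
                try
                {
                    downloadTask.Wait(ct);
                }
                catch (OperationCanceledException)
                {
                    // The token was signalled before the task finished; treat as a cancel.
                }
                catch (AggregateException ae)
                {
                    // Swallow only cancellations; rethrow anything else so the
                    // continuation sees a faulted antecedent.
                    ae.Handle(ex => ex is OperationCanceledException);
                }
            }
        }

    An alternative worth considering is to drop the Wait entirely, return the inner task, and call Unwrap() on the outer task so the ContinueWith that fires the end event only runs once the copy has genuinely completed or been cancelled.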

  • Rails paperclip problem

    - by palani
    I have uploaded a video into my Rails application using thoughtbot-paperclip, and the video is then converted into "flv" format using ffmpeg. For your reference, here is some of my model code:

    model.rb:

        has_attached_file :source, :styles => { :thumb => "137x85>" }

    If I specify the :url or :path option it doesn't work correctly. In my view I play the video using the following line:

        <%= @model.source.url.gsub(/\?.*/,'')%>

    If I use <%= @model.source.url%>, the video is not played. When I puts the video URL it shows me /source/original/sample/sample.fly?22000009. I know that the last portion is a timestamp, but I want to use <%= @model.source.url%>. What's my mistake here? Can anyone correct me please?

  • How can I allow users to switch data sources for an SSRS report?

    - by fatcat1111
    I have two SQL Server databases with identical schemas, but different data. I also have SSRS generating reports, in native mode, for one of the databases. All reports use the same shared data source. I would like to allow users to get reports for the other database. I created a second shared data source for the second database. Modifying the reports to use this second data source results in reports as expected. Because the RDLs are the same, except for the data source, and because I don't want to maintain what are basically duplicate reports, I'm looking for a way to dynamically switch data sources, depending on user input. Is there an easy means of accomplishing this? An existing solution would be best. Barring that, can the RDL's data source be parametrized? Or, can the RDS's connection string be parametrized?

  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out.

    Have you ever considered variable locking when building your SSIS packages? I expect many people haven't, just because most of the time you never see an error like the one above. I'll try and explain a few key concepts about variable locking and hopefully you will never see that error.

    First of all, what is all this variable locking all about? Put simply, SSIS variables have to be locked before they can be accessed, and then of course unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead in that you need to be careful to avoid lock conflicts in some scenarios.

    The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked so that they are available within the task. During the task pre-execute phase the variables are locked, you then use them during the execute phase when your code is run, and they are unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write to the Dts.Variables collection that is exposed in the task for the purpose. As you can see in the image above, the variable PackageInt is specified, which means when I write the code inside that task I don't have to worry about locking at all, as shown below.

        public void Main()
        {
            // Set the variable value to something new
            Dts.Variables["PackageInt"].Value = 199;

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    As you can see, as well as accessing the variable, hassle free, I also raise an event. Now consider a scenario where I have an event handler as well, as shown below. Now what if my event handler tries to use the same variable as well? Well, obviously for the point of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events.

    1. Package execution starts
    2. Script Task in Control Flow starts
    3. Script Task in Control Flow locks the PackageInt variable as specified in the ReadWriteVariables property
    4. Script Task in Control Flow executes script, and the On Information event is raised
    5. The On Information event handler starts
    6. Script Task in On Information event handler starts
    7. Script Task in On Information event handler attempts to lock the PackageInt variable (for either read or write, it doesn't matter), but will fail because the variable is already locked.

    The problem is caused by the event handler task trying to use a variable that is already locked by the task in Control Flow. Events are always raised synchronously, therefore the task in Control Flow that is raising the event will not regain control until the event handler has completed, so we really do have an un-resolvable locking conflict, better known as a deadlock.

    In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties.

        public void Main()
        {
            // Set the variable value to something new, with explicit lock control
            Variables lockedVariables = null;
            Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
            lockedVariables["PackageInt"].Value = 199;
            lockedVariables.Unlock();

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    Now the package will execute successfully because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL Engine background this should all sound strangely familiar, and it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that SQL pages or SSIS variables.

    Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or get results out. Either way the task will manage the locking for you, and will fail when it cannot lock the variables it requires. The scenario outlined above is a clear-cut deadlock scenario; both parties are waiting on each other, so it is un-resolvable. The mechanism used within SSIS isn't actually that clever, and whilst the message says it is a deadlock, it really just means it tried a few times, and then gave up. The last part of the error message is actually the most accurate in terms of the failure: A lock cannot be acquired after 16 attempts. The locks timed out.

    Now this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder to be aware that in high concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration.

    Update – Make sure you don't try and use explicit locking as well as leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you'll get the deadlock error; you cannot lock a variable twice!
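
    When managing locks manually like this, it can also be worth guarding the unlock so that an exception thrown between the lock and the unlock cannot leave the variable locked. A small variation on the snippet above (same Dts API, wrapped in try/finally):

        public void Main()
        {
            // Set the variable value to something new, with explicit lock control
            Variables lockedVariables = null;
            try
            {
                Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
                lockedVariables["PackageInt"].Value = 199;
            }
            finally
            {
                // Release the lock even if the assignment above throws.
                if (lockedVariables != null)
                {
                    lockedVariables.Unlock();
                }
            }

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }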

  • ruby-gstreamer doesn't send EOS message

    - by Cheba
    I've managed to make it play sound but it never gets an EOS message, and thus the script never exits.

        require 'gst'

        main_loop = GLib::MainLoop.new

        pipeline = Gst::Pipeline.new "audio-player"
        source = Gst::ElementFactory.make "filesrc", "file-source"
        source.location = "/usr/share/sounds/gnome/default/alerts/bark.ogg"
        decoder = Gst::ElementFactory.make "decodebin", "decoder"
        conv = Gst::ElementFactory.make "audioconvert", "converter"
        sink = Gst::ElementFactory.make "alsasink", "output"

        pipeline.add source, decoder, conv, sink
        source >> decoder
        conv >> sink

        decoder.signal_connect "pad-added" do |element, pad, data|
          pad >> conv['sink']
        end

        pipeline.bus.add_watch do |bus, message|
          puts "Message: #{message.inspect}"
          case message.type
          when Gst::Message::Type::ERROR
            puts message.structure["debug"]
            main_loop.quit
          when Gst::Message::Type::EOS
            puts 'End of stream'
            main_loop.quit
          end
        end

        pipeline.play

        begin
          puts 'Running main loop'
          main_loop.run
        ensure
          puts 'Shutting down main loop'
          pipeline.stop
        end

  • Clipping the caret in a C# program

    - by Bob Sammers
    I'm creating a WinForms control in C# (using VS2008, .net 3.5) which allows text input. I've imported the necessary Win32 API functions from User32.dll for displaying the normal Windows caret and these are all working fine, but it's not displaying exactly how I'd like it. Text is displayed on the control with a blank border and I use Graphics.SetClip() to leave this margin clear. I want the caret to be clipped to the same region, but since I don't paint it and there's no obvious API function to set a clipping region, I can't see any way of doing this. Have I missed anything obvious? The caret is clipped inside the control in which it is drawn. I'm therefore aware that one solution could be to place the text in a separate sub-control with no border. However, if there's a simpler way than redesigning this part of the control, I'd like to look for that first. Thanks in advance for any help!
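
    There is no documented API to give the caret its own clip region, so short of the sub-control approach one workaround is simply to hide the caret whenever its computed position would fall inside the border, and show it again when it is back in the text area. A sketch with the usual user32 imports; textBounds is a hypothetical rectangle describing the inner text area of the control:

        using System;
        using System.Drawing;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        static class CaretHelper
        {
            [DllImport("user32.dll")] static extern bool SetCaretPos(int x, int y);
            [DllImport("user32.dll")] static extern bool ShowCaret(IntPtr hWnd);
            [DllImport("user32.dll")] static extern bool HideCaret(IntPtr hWnd);

            public static void MoveCaret(Control owner, Point caretPos, Rectangle textBounds)
            {
                if (textBounds.Contains(caretPos))
                {
                    SetCaretPos(caretPos.X, caretPos.Y);
                    ShowCaret(owner.Handle);
                }
                else
                {
                    // The caret would land in the margin: suppress it rather than let it draw there.
                    HideCaret(owner.Handle);
                }
            }
        }

    This only covers the all-or-nothing case; a caret straddling the margin would still need the text scrolled into view or the separate child control described above.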

  • running an RMI server in command line and eclipse

    - by Noona
    I need to run my RMI server using the command line, my class files reside in this folder: C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\RmiServerClasses in package hw2.rmi.server The code base reside in this folder: C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\RmiServerCodeBase in package hw2.rmi.server I use the command line: java –classpath C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\RmiServerClasses\ -Djava.rmi.server.codebase=file:/C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer\ Djava.security.policy=c:\HW2\permissions.policy hw2.rmi.server.RmiEncodingServer but I get a "class not found" exception as follows: Exception in thread "main" java.lang.NoClassDefFoundError: ûclasspath Caused by: java.lang.ClassNotFoundException: ûclasspath at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) Could not find the main class: GÇôclasspath. Program will exit. where have I gone wrong? also, if you can provide instructions on how to run the server in eclipse, I added the following as a VM argument, but I get a class not found exception to a class that is in the RmiServerCodeBase: -Djava.security.policy=C:\workspace\distributedhw2\permissions.policy -Djava.rmi.server.codebase=file:/C:\workspace\distributedhw2\AgencyServers\RmiEncodingServer thanks

  • Datetimepicker in windows 7

    - by maramasamy
    I am using the Microsoft Date and Time Picker control (SP4) in my application. My application runs fine on XP and Vista machines, as the date and time picker refers to mscomct2.ocx, which is present in system32 on XP and Vista, but on Windows 7 I don't have this DLL on my system. In Windows we need admin rights to register that control. So is there any alternative/similar control available? It would be helpful if you could let me know why that's not available in Windows 7. P.S.: My application is a .NET application, and the reason I have used the Microsoft Date and Time Picker control (SP4) instead of the .NET-provided control is to display the year 9999.

  • Project of Projects with team Foundation Server 2010

    - by Martin Hinshelwood
    It is pretty much accepted that you should use Areas instead of having many small Team Projects when you are using Team Foundation Server 2010. I have implemented this scenario many times and this is the current iteration of layout and considerations. If like me you work with many customers you will find that you get into a groove for how to set these things up to make them as easily understandable for everyone, while giving the best functionality. The trick is in making it as intuitive as possible for both you and the developers that need to work with it.

    There are five main places where you need to have the Product or Project name in prominence over any other value:

    - Area
    - Iteration
    - Source Code
    - Work Item Queries
    - Build

    Once you decide how you are doing this in each of these places you need to keep to it religiously. Even if you have one source code file to keep, make sure it is in the right place. This makes your developers and others working with the format familiar with where everything should go, as well as building up muscle memory. This prevents the neat system degenerating into a nasty mess.

    Areas

    Areas are traditionally used to separate out parts of your product / project so that you can see how much effort has gone into each.

    Figure: The top level areas are for reporting and work item separation

    There are massive advantages of using this method. You can: move work from one project to another; rename a project / product. It is far more likely that a project or product gets renamed than a department.

    Tip: If you have many projects, over 100, you should consider categorising them here, but make sure that the actual project name always sits at the same level so you know which is which.

    Figure: Always keep things that are the same at the same level

    Note: You may use these categories only at the Area/Iteration level to make it easier to select on drop down lists. You may not want to use them everywhere. On the other hand, for consistency it would be better to.

    Iterations

    Iterations are usually tied to some sort of time-based consideration. Here I am splitting into Iterations with periodic releases.

    Figure: Each product needs to be able to have its own cadence

    The ability to have each project run at its own pace and to enable them to have their own release schedule is often of paramount importance, and you don't want to fix your 100+ projects to all be released on the same date.

    Source Code

    Having a good structure for your source, even if you are not branching or having multiple products under the same structure, is always a good idea.

    Figure: Separate out your product's source

    You need to think about both your branches as well as the structure of your source. All your code should be under "Source", and everything you need to build your solution, including Build Scripts and 3rd party tools, should be under your "Main" (branch) folder. This should then be branched by "Quality", "Release" or both to get the most out of your branching structure. The important thing is to make sure you branch (or are able to branch) everything you need to build, test and deploy your application to an environment. That environment may be development, test or even production, but I can't stress enough the importance of having everything you need.

    Note: You usually will not be able to install custom software on your build server. Store any *.dll's or *.exe's that you need under the "Tools\Tool1" folder.

    Note: Consult the Branching Guidance for Team Foundation Server 2010 for more on branching.

    Figure: Adding a category may be a necessary evil

    Even if you have to have a couple of categories called "Default", it is better than not knowing the difference between a folder, Product and Branch.

    Work Item Queries

    Queries are used to load lists of Work Items out of TFS so you can see what work you have. This means that you want to also separate queries out by Product / project to make them easier to find.

    Figure: Again you have the same first level structure

    Having folders also in Work Item Tracking, we do the same thing. We put all the queries under a folder named for the Product / Project and change each query to have "AreaPath=[TeamProject]\[ProductX]" in the query instead of the standard "Project=@Project".

    Tip: Don't have a folder with new queries for each iteration. Instead have a single "Current" folder that has queries that point to the current iteration. Just change the queries as you move from one iteration to another.

    Tip: You can ctrl+drag the "Product1" folder to create your "Product2" folder.

    Builds

    You may have many builds, both for individual products and for different qualities. This can be further complicated by having some builds that action "Gated Check-In" and others that are specifically for "Release", "Test" or another purpose.

    Figure: There are no folders, yet, for the builds so you need a good naming convention

    It's a pity that there are no folders under builds; some way to categorise would be nice. In lieu of that, at the moment you can use a functional naming convention that at least allows you to find what you want.

    Conclusion

    It is really easy to both achieve and stick to this format if you take the time to do it. Unless you have 1000+ builds or 100+ Products you are unlikely to run into any issues. Even then there are things you can do to mitigate the issues, and I have described some of them above. Let me know if you can think of any other things to make this easier.

  • Evaluate ContentControl without rendering to screen

    - by thedesertfox
    I have a datagrid and I'm writing a method to search through it to find some text. Practically all of my columns use a DataTemplateSelector, so in my search, I need to be able to take a DataTemplate, apply it to a ContentControl, and then find a TextBlock to get the text to see if it matches my search criteria. I'm trying the following but it doesn't seem to produce any results. I also tried FindName("layoutRoot") on the control, but that came back as null as well.

        var control = new ContentControl();
        control.ContentTemplate = dataTemplate;
        control.Content = item;
        var txtBox = control.FindChildren<TextBlock>();
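
    A ContentControl that never enters the visual tree does not get templated or laid out, which is why FindName and child searches come back empty. One hedged alternative, if your framework version has DataTemplate.LoadContent, is to instantiate the template directly, bind the item and force a layout pass before searching (FindChildren stands for whatever visual-tree helper the project already uses):

        // Sketch: instantiate the template without putting it on screen.
        var root = dataTemplate.LoadContent() as FrameworkElement;
        if (root != null)
        {
            root.DataContext = item;

            // Force the template's elements to be created and measured.
            root.Measure(new Size(double.PositiveInfinity, double.PositiveInfinity));
            root.Arrange(new Rect(new Point(0, 0), root.DesiredSize));
            root.UpdateLayout();

            var textBlocks = root.FindChildren<TextBlock>();
        }

    Depending on the framework, bindings may only resolve once the element has been parented somewhere, in which case adding it briefly to a hidden container before searching is a fallback.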

  • Iterating through tooltips

    - by icemanind
    I have a Windows Form with a panel control, and inside the panel control are several other controls with a System.Windows.Forms.ToolTip attached to them. How can I iterate through each tooltip and set the Active property of the tooltip to false? Tooltips, unlike other controls, are not actually controls. So I had this:

        foreach (System.Windows.Forms.Control ctrl in this.pnlControl.Controls)
        {
            if (ctrl.Name.StartsWith("tt")) // since all my tooltip names start with 'tt'
            {
                System.Windows.Forms.ToolTip TipControl = (System.Windows.Forms.ToolTip)ctrl;
                TipControl.Active = false;
            }
        }

    This does not work though. It gets an error because the ToolTip control does not inherit from System.Windows.Forms.Control. Any ideas?
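
    The cast can never succeed because ToolTip derives from Component rather than Control, so it will not appear in pnlControl.Controls at all. If the tooltips were dropped onto the form in the designer they live in the form's components container instead, which suggests one possible approach (otherwise, keeping your own List<ToolTip> when you create them works the same way):

        using System.Linq;
        using System.Windows.Forms;

        // "components" is the IContainer the designer generates on the form.
        foreach (ToolTip tip in this.components.Components.OfType<ToolTip>())
        {
            tip.Active = false;
        }

    Note that "components" can be null if nothing has been added through the designer, so tooltips created purely in code still need to be tracked in your own collection.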

  • On Reflector Pricing

    - by Nick Harrison
    I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This wide spread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good, still love it. It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source, it was free, but you never were able to peruse the source code and contribute your own changes. You could not even use Reflector to view the source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source. Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order. Red Gate tried to keep the core product free. Sadly there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? The value added proposition is not a difficult one to make. I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon. 
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

  • Mimic property/list changes on an object on another object

    - by soundslike
    I need to mimic (property/list) changes on an object and then apply them to another object to keep the structure/properties the same. In essence it's like cloning, except that the biz rules require certain properties not to be applied to the other object, so I can't just clone the object (otherwise this would be easy). I've already walked the source object to hook the INotifyPropertyChanged and IListChanged events, so I have the "source" and the (Property or List) changed event args. Given that, I guess I could build a reflection "hierarchy path" starting from the top level of the source object down to the Property or List changed "source" (which could be several levels deep). Ignoring for the moment that certain object properties should not propagate to the other object, what's a way to build this "path"? Is a brute-force walk from the top level down to build the "path" (discarding on the way back up if we don't hit the original changed event "source") the only way to do it? Any clever ideas on how to mimic changes from one object onto another object?
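
    For the property half of the problem, one hedged way to replay a PropertyChanged notification onto the second object is to resolve the property by name with reflection and copy the value across, skipping anything on an exclusion list. The types and the excluded-property set here are illustrative, and nested objects or lists would still need the "hierarchy path" walking described above:

        using System.Collections.Generic;
        using System.ComponentModel;

        class ChangeMirror
        {
            private readonly object _target;
            private readonly HashSet<string> _excluded;

            public ChangeMirror(INotifyPropertyChanged source, object target, IEnumerable<string> excludedProperties)
            {
                _target = target;
                _excluded = new HashSet<string>(excludedProperties);
                source.PropertyChanged += OnSourcePropertyChanged;
            }

            private void OnSourcePropertyChanged(object sender, PropertyChangedEventArgs e)
            {
                if (_excluded.Contains(e.PropertyName))
                    return;

                // Look the property up on both sides by name and copy the value across.
                var sourceProp = sender.GetType().GetProperty(e.PropertyName);
                var targetProp = _target.GetType().GetProperty(e.PropertyName);
                if (sourceProp != null && targetProp != null && targetProp.CanWrite)
                {
                    targetProp.SetValue(_target, sourceProp.GetValue(sender, null), null);
                }
            }
        }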

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined. But the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later.
    - Learning how to use the source\target systems and investigations into where things go wrong in these systems will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data\logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    - There may be licencing costs for setting up instances of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. It will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged. But BizUnit provides by far the easiest way to test some component types (e.g. Orchestrations).

    Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):

    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the scenarios and are not overly excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing as described here.

    If you don't do this ...

    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    - Developers are unlikely to check all the scenarios each time and all the expected conditions each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on these environments.

  • Drag and drop between HTML 5 and Silverlight 4

    - by Tej
    Hi - Is it possible to drag an HTML 5 object, e.g. an , into a Silverlight 4 control and accept it? We've attempted to build a simple prototype using an HTML 5 example and a trivial Silverlight control but the cursor changes to the no-entry sign whenever we hover over the Silverlight control. We do, however, get drag entry events firing in Silverlight. Our control happily accepts files dragged from the desktop as expected. We think we've got the HTML 5 drag events set up correctly, and I can possibly get our test published somewhere in case that will help. We've successfully implemented dragging inside a Silverlight control but we now need to integrate with non-Silverlight page components. Is this actually possible to set up or are we just doing something wrong? Thanks for any advice!

  • Triangle Picking Picking Back faces

    - by Tangeleno
    I'm having a bit of trouble with 3D picking, at first I thought my ray was inaccurate but it turns out that the picking is happening on faces facing the camera and faces facing away from the camera which I'm currently culling. Here's my ray creation code, I'm pretty sure the problem isn't here but I've been wrong before. private uint Pick() { Ray cursorRay = CalculateCursorRay(); Vector3? point = Control.Mesh.RayCast(cursorRay); if (point != null) { Tile hitTile = Control.TileMesh.GetTileAtPoint(point); return hitTile == null ? uint.MaxValue : (uint)(hitTile.X + hitTile.Y * Control.Generator.TilesWide); } return uint.MaxValue; } private Ray CalculateCursorRay() { Vector3 nearPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 0f)); Vector3 farPoint = Control.Camera.Unproject(new Vector3(Cursor.Position.X, Control.ClientRectangle.Height - Cursor.Position.Y, 1f)); Vector3 direction = farPoint - nearPoint; direction.Normalize(); return new Ray(nearPoint, direction); } public Vector3 Camera.Unproject(Vector3 source) { Vector4 result; result.X = (source.X - _control.ClientRectangle.X) * 2 / _control.ClientRectangle.Width - 1; result.Y = (source.Y - _control.ClientRectangle.Y) * 2 / _control.ClientRectangle.Height - 1; result.Z = source.Z - 1; if (_farPlane - 1 == 0) result.Z = 0; else result.Z = result.Z / (_farPlane - 1); result.W = 1f; result = Vector4.Transform(result, Matrix4.Invert(ProjectionMatrix)); result = Vector4.Transform(result, Matrix4.Invert(ViewMatrix)); result = Vector4.Transform(result, Matrix4.Invert(_world)); result = Vector4.Divide(result, result.W); return new Vector3(result.X, result.Y, result.Z); } And my triangle intersection code. Ripped mainly from the XNA picking sample. public float? Intersects(Ray ray) { float? closestHit = Bounds.Intersects(ray); if (closestHit != null && Vertices.Length == 3) { Vector3 e1, e2; Vector3.Subtract(ref Vertices[1].Position, ref Vertices[0].Position, out e1); Vector3.Subtract(ref Vertices[2].Position, ref Vertices[0].Position, out e2); Vector3 directionCrossEdge2; Vector3.Cross(ref ray.Direction, ref e2, out directionCrossEdge2); float determinant; Vector3.Dot(ref e1, ref directionCrossEdge2, out determinant); if (determinant > -float.Epsilon && determinant < float.Epsilon) return null; float inverseDeterminant = 1.0f/determinant; Vector3 distanceVector; Vector3.Subtract(ref ray.Position, ref Vertices[0].Position, out distanceVector); float triangleU; Vector3.Dot(ref distanceVector, ref directionCrossEdge2, out triangleU); triangleU *= inverseDeterminant; if (triangleU < 0 || triangleU > 1) return null; Vector3 distanceCrossEdge1; Vector3.Cross(ref distanceVector, ref e1, out distanceCrossEdge1); float triangleV; Vector3.Dot(ref ray.Direction, ref distanceCrossEdge1, out triangleV); triangleV *= inverseDeterminant; if (triangleV < 0 || triangleU + triangleV > 1) return null; float rayDistance; Vector3.Dot(ref e2, ref distanceCrossEdge1, out rayDistance); rayDistance *= inverseDeterminant; if (rayDistance < 0) return null; return rayDistance; } return closestHit; } I'll admit I don't fully understand all of the math behind the intersection and that is something I'm working on, but my understanding was that if rayDistance was less than 0 the face was facing away from the camera, and shouldn't be counted as a hit. 
So my question is, is there an issue with my intersection or ray creation code, or is there another check I need to perform to tell if the face is facing away from the camera, and if so any hints on what that check might contain would be appreciated.
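
    A negative rayDistance only means the intersection lies behind the ray origin, not that the triangle faces away; facing is determined by the winding of the triangle relative to the ray. One hedged extra check, assuming counter-clockwise front faces and the same Vector3 helpers used in the code above, is to test the face normal against the ray direction once e1 and e2 are known:

        // After computing e1 and e2:
        Vector3 faceNormal;
        Vector3.Cross(ref e1, ref e2, out faceNormal);

        // For counter-clockwise front faces, a back face has its normal pointing
        // roughly the same way as the ray direction.
        float facing;
        Vector3.Dot(ref ray.Direction, ref faceNormal, out facing);
        if (facing >= 0)
            return null; // back face (or edge-on): ignore the hit

    Equivalently, with this formulation the determinant itself carries the sign: it is positive for front faces and negative for back faces under the same winding assumption, so changing the near-zero test to reject any determinant below a small positive epsilon would cull back faces as a side effect.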

  • How to use SQLite3 with Java

    - by Bruce
    I am trying to build a simple java program which creates a db file, then a table and inserts dummy values in the table. I found this page http://www.zentus.com/sqlitejdbc/index.html and tried out the example given on the page but I am getting the following error - Exception in thread "main" java.lang.NoClassDefFoundError: Test Caused by: java.lang.ClassNotFoundException: Test at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) Could not find the main class: Test. Program will exit.

  • Why do I get an exception when playing multiple sound instances?

    - by Boreal
    Right now, I'm adding a rudimentary sound engine to my game. So far, I am able to load in a WAV file and play it once, then free up the memory when I close the game. However, the game crashes with a nice ArgumentOutOfBoundsException when I try to play another sound instance. Specified argument was out of the range of valid values. Parameter name: readLength I'm following this tutorial pretty much exactly, but I still keep getting the aforementioned error. Here's my sound-related code. /// <summary> /// Manages all sound instances. /// </summary> public static class Audio { static XAudio2 device; static MasteringVoice master; static List<SoundInstance> instances; /// <summary> /// The XAudio2 device. /// </summary> internal static XAudio2 Device { get { return device; } } /// <summary> /// Initializes the audio device and master track. /// </summary> internal static void Initialize() { device = new XAudio2(); master = new MasteringVoice(device); instances = new List<SoundInstance>(); } /// <summary> /// Releases all XA2 resources. /// </summary> internal static void Shutdown() { foreach(SoundInstance i in instances) i.Dispose(); master.Dispose(); device.Dispose(); } /// <summary> /// Registers a sound instance with the system. /// </summary> /// <param name="instance">Sound instance</param> internal static void AddInstance(SoundInstance instance) { instances.Add(instance); } /// <summary> /// Disposes any sound instance that has stopped playing. /// </summary> internal static void Update() { List<SoundInstance> temp = new List<SoundInstance>(instances); foreach(SoundInstance i in temp) if(!i.Playing) { i.Dispose(); instances.Remove(i); } } } /// <summary> /// Loads sounds from various files. /// </summary> internal class SoundLoader { /// <summary> /// Loads a .wav sound file. /// </summary> /// <param name="format">The decoded format will be sent here</param> /// <param name="buffer">The data will be sent here</param> /// <param name="soundName">The path to the WAV file</param> internal static void LoadWAV(out WaveFormat format, out AudioBuffer buffer, string soundName) { WaveStream wave = new WaveStream(soundName); format = wave.Format; buffer = new AudioBuffer(); buffer.AudioData = wave; buffer.AudioBytes = (int)wave.Length; buffer.Flags = BufferFlags.EndOfStream; } } /// <summary> /// Manages the data for a single sound. /// </summary> public class Sound : IAsset { WaveFormat format; AudioBuffer buffer; /// <summary> /// Loads a sound from a file. /// </summary> /// <param name="soundName">The path to the sound file</param> /// <returns>Whether the sound loaded successfully</returns> public bool Load(string soundName) { if(soundName.EndsWith(".wav")) SoundLoader.LoadWAV(out format, out buffer, soundName); else return false; return true; } /// <summary> /// Plays the sound. /// </summary> public void Play() { Audio.AddInstance(new SoundInstance(format, buffer)); } /// <summary> /// Unloads the sound from memory. /// </summary> public void Unload() { buffer.Dispose(); } } /// <summary> /// Manages a single sound instance. /// </summary> public class SoundInstance { SourceVoice source; bool playing; /// <summary> /// Whether the sound is currently playing. /// </summary> public bool Playing { get { return playing; } } /// <summary> /// Starts a new instance of a sound. 
/// </summary> /// <param name="format">Format of the sound</param> /// <param name="buffer">Buffer holding sound data</param> internal SoundInstance(WaveFormat format, AudioBuffer buffer) { source = new SourceVoice(Audio.Device, format); source.BufferEnd += (s, e) => playing = false; source.Start(); source.SubmitSourceBuffer(buffer); // THIS IS WHERE THE EXCEPTION IS THROWN playing = true; } /// <summary> /// Releases memory used by the instance. /// </summary> internal void Dispose() { source.Dispose(); } } The exception occurs on line 156 when I am playing the sound: source.SubmitSourceBuffer(buffer);
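
    One possible cause worth checking, stated here as an assumption rather than a certain diagnosis: the single AudioBuffer wraps a single WaveStream, and after the first playback that stream has been read to its end, so a second SubmitSourceBuffer describes more bytes than the stream can still supply. A hedged tweak to Play(), assuming the SlimDX types from the code above and that AudioData is seekable:

        public void Play()
        {
            // Rewind the shared stream so the next voice can read the full sound again.
            buffer.AudioData.Position = 0;
            Audio.AddInstance(new SoundInstance(format, buffer));
        }

    If the stream cannot be rewound, creating a fresh AudioBuffer per instance (or caching the raw bytes and wrapping them per play) is an alternative.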

  • dozer custom convertors for primitive types

    - by koti
    The URL below has an example of Dozer custom converters: http://stackoverflow.com/questions/1931212/map-collection-size-in-dozer. But when I tried that example it gives an exception like this:

        Type: null
        Source parent class: dozerPackage.Source
        Source field name: images
        Source field type: class java.util.ArrayList
        Source field value: [www, eee]
        Dest parent class: dozerPackage.Destination
        Dest field name: numOfImages
        Dest field type: int
        org.dozer.MappingException: Destination Type (int) is not accepted by this Custom Converter (dozerPackage.TestCustomFieldConverter)!

    Is there any way that I can return primitive types from Dozer custom converters?

  • Multiple ASP.Net Controls On One Page

    - by Duracell
    We have created a User Control for ASP.Net. At any one time, a user's profile page could contain between one and infinity of these controls. The Control.ascx file contains quite a bit of javascript. When the control is rendered by .Net to HTML, you notice that it prints the javascript for each control. This was expected. I'd like to reduce the amount of HTML output by the server to increase page load times. Normally, you could just move the javascript to an external file and then you only need one extra HTTP request which will serve for all controls. But what about instances in the javascript where we have something like document.getElementById('<%= txtTextBox.ClientID %>');? How would the javascript know which user control to work with? Has anyone done something like this, or is the solution staring me in the face?
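
    One common pattern is to move all of the shared functions into the external .js file and have each control instance emit only a one-line initialisation call that passes in its own client IDs. A sketch from the control's code-behind; initMyControl is a hypothetical function defined in that external script:

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            // One short registration per control instance; the bulk of the script
            // stays in the shared external file.
            string script = string.Format("initMyControl('{0}');", txtTextBox.ClientID);
            Page.ClientScript.RegisterStartupScript(GetType(), ClientID, script, true);
        }

    Using the control's ClientID as the registration key keeps each instance's call distinct while the heavy script is downloaded only once.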

  • How to get at ResourceDictionary style when it is loaded from external xap and assemblies are MEF-fe

    - by user158503
    I've got the following setup: The main application loads a XAP with an IPlugin implementation. The Plugin contains a 'DisplayPanel' that contains a referenced Control with other controls. The DisplayPanel here is simply a container control to show the referenced Control. This referenced Control, from an assembly, uses a Style from a ResourceDictionary xaml in this assembly. At least that's what I want to have. The problem is that the referenced Control throws an error:

        Cannot find a Resource with the Name/Key PlayerPanelGrad [Line: 1500 Position: 127]

    I've tried to get at the style by referencing the ResourceDictionary through a merged resource dictionary reference:

        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="TableControls;component/ControlsStyle.xaml"/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>

    But that doesn't work. How would you approach this?
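
    One workaround when the Source URI cannot be resolved from XAML inside a dynamically loaded XAP is to merge the dictionary from code when the plugin initialises, so that the style becomes an application-level resource the control can fall back to. A sketch assuming the assembly short name and resource path from the snippet above:

        // Run once when the plugin is initialised.
        var dictionary = new ResourceDictionary
        {
            Source = new Uri("/TableControls;component/ControlsStyle.xaml", UriKind.Relative)
        };
        Application.Current.Resources.MergedDictionaries.Add(dictionary);

    StaticResource lookups that fail locally fall back to Application.Current.Resources, so PlayerPanelGrad should then resolve even though the dictionary was not merged in the control's own XAML.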

  • Rails 4.2: "Assets should not be requested directly without their digests"

    - by Nowaker
    On Rails 4.2.0.beta1 I get an error: Assets should not be requested directly without their digests: Use the helpers in ActionView::Helpers to request fonts/source-sans-pro.woff The stylesheet: @font-face { font-family: 'Source Sans Pro'; font-style: normal; font-weight: 400; src: local('Source Sans Pro'), local('SourceSansPro-Regular'), url(/assets/source-sans-pro.woff) format('woff'); } The configuration is: config.serve_static_assets = true config.assets.js_compressor = :uglifier config.assets.compile = true config.assets.digest = true config.assets.version = '1.0' config.assets.paths << Rails.root.join('app', 'assets', 'fonts') config.assets.precompile += %w(.svg .eot .woff .ttf) Sure I can disable digests and it works again, but I'm interested in using them. Therefore, how do I make use of digests when I need to request source-sans-pro.woff? Please note that I place the fonts in assets/fonts directory, not the public/ directory. I don't see a difference between images and fonts, so I want to keep them under the same directory - app/assets.
