Search Results

Search found 4283 results on 172 pages for 'initial'.

Page 109/172 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is expensive not only in terms of the network itself, but also in the CPU required to send (or receive) a message (copying the packet bytes, processing an interrupt, etc.). Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and they also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation. In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Ext JS Tab Panel - Dynamic Tabs - Tab Exists Not Working

    - by Joey Ezekiel
    Hi, I'd appreciate it if somebody could help me with this. I have a Tree Panel whose nodes, when clicked, load a tab into a tab panel. The tabs are loading alright, but my problem is duplication: I need to check whether a tab exists before adding it to the tab panel. I can't seem to get this resolved and it is driving me crazy. This is pretty simple, and I have checked Stack Overflow and the Ext JS forums for solutions, but they don't seem to work for me, or I'm being blind. This is my code for the tree:

    var opstree = new Ext.tree.TreePanel({
        renderTo: 'opstree',
        border: false,
        width: 250,
        height: 'auto',
        useArrows: false,
        animate: true,
        autoScroll: true,
        dataUrl: 'libs/tree-data.json',
        root: {
            nodeType: 'async',
            text: 'Tool Actions'
        },
        listeners: {
            render: function() {
                this.getRootNode().expand();
            }
        }
    })

    opstree.on('click', function(n){
        var sn = this.selModel.selNode || {}; // selNode is null on initial selection
        renderPage(n.id);
    });

    function renderPage(tabId) {
        var TabPanel = Ext.getCmp('content-tab-panel');
        var tab = TabPanel.getItem(tabId);
        //Ext.MessageBox.alert('TabGet',tab);
        if(tab){
            TabPanel.setActiveTab(tabId);
        }
        else{
            TabPanel.add({
                title: tabId,
                html: 'Tab Body ' + (tabId) + '',
                closable: true
            }).show();
            TabPanel.doLayout();
        }
    }

    and this is the code for the Tab Panel:

    new Ext.TabPanel({
        id: 'content-tab-panel',
        region: 'center',
        deferredRender: false,
        enableTabScroll: true,
        activeTab: 0,
        items: [{
            contentEl: 'about',
            title: 'About the Billing Ops Application',
            closable: true,
            autoScroll: true,
            margins: '0 0 0 0'
        },{
            contentEl: 'welcomescreen',
            title: 'PBRT Application Home',
            closable: false,
            autoScroll: true,
            margins: '0 0 0 0'
        }]
    })

    Can somebody please help?

    Read the article

  • Refresh RadGridView when Insert, Update and Delete operations are done on the database in WPF

    - by patelriki13
    WPF and C#. Problem:

    1. How do I refresh the RadGridView when I insert, update or delete a record in the database?
    2. When I insert or update a record, that row should be selected in the RadGridView.

    I am using SQL Server 2005, and I set the data source of the RadGridView like "radgridview1.ItemsSource = ds;", where ds is a DataSet. I am a beginner, so if possible please show me the code, as that is easier to understand. This is the code I am using to refresh the RadGridView:

    con.ConnectionString = @"Data Source=(local);Initial Catalog=DigiDms;Integrated Security=True";
    cmd1.Connection = con;
    con.Open();
    cmd1.CommandType = CommandType.StoredProcedure;
    cmd1.CommandText = "Pro_Insurance_Master_Select";
    da1.SelectCommand = cmd1;
    da1.Fill(ds1);
    con.Close();
    //dataGrid.clear();
    //dsGrid.Reset();
    //dsGrid = dataGrid.GetData("Pro_Insurance_Master_Select");
    //set datasource of gridview
    gridShowData.ItemsSource = null;
    gridShowData.ItemsSource = ds1;

    Doing this, when I delete or update a record the following error is generated on the line "gridShowData.ItemsSource = null;": "Object reference not set to an instance of an object". When I do an insert, the error is not generated and the RadGridView updates correctly. Please help.
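
    For what it's worth, a minimal C# sketch of the usual rebind pattern is below. It assumes the Telerik WPF RadGridView (the namespace may differ by Telerik version) and reuses the stored procedure and connection string from the question; the class and method names are illustrative only, not the poster's actual code. Binding to the table's DefaultView rather than to the DataSet itself also tends to avoid the null-reference error described above, though that is an educated guess rather than a verified diagnosis.

    using System.Data;
    using System.Data.SqlClient;
    using Telerik.Windows.Controls;   // assumption: RadGridView lives here

    public partial class MainWindow
    {
        // Requery the stored procedure and rebind after every insert/update/delete.
        private void RefreshGrid(RadGridView gridShowData)
        {
            var ds = new DataSet();
            using (var con = new SqlConnection(@"Data Source=(local);Initial Catalog=DigiDms;Integrated Security=True"))
            using (var da = new SqlDataAdapter("Pro_Insurance_Master_Select", con))
            {
                da.SelectCommand.CommandType = CommandType.StoredProcedure;
                da.Fill(ds);                               // Fill opens and closes the connection itself
            }
            gridShowData.ItemsSource = null;               // clear the old binding first
            gridShowData.ItemsSource = ds.Tables[0].DefaultView;   // bind to the table view, not the DataSet
        }
    }

    After the refresh, the inserted or updated row can be re-selected by setting the grid's SelectedItem to the matching row.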

    Read the article

  • DataBinding a dictionary to a combobox in WPF.

    - by Ashish Ashu
    I have a Dictionary that is bound to a ComboBox. I used a dictionary so the enum values can be displayed with spaces:

    public enum Option {Enter_Value, Select_Value};
    Dictionary<Option,string> Options;

    <ComboBox x:Name="optionComboBox"
              SelectionChanged="optionComboBox_SelectionChanged"
              SelectedValuePath="Key"
              DisplayMemberPath="Value"
              SelectedItem="{Binding Path = SelectedOption}"
              ItemsSource="{Binding Path = Options}" />

    This works fine. My questions:

    1. I am not able to set the initial value of the combo box. In the XAML snippet above, the line SelectedItem="{Binding Path = SelectedOption}" is not working. I have declared SelectedOption in my view model; it is of type string, and I have initialized it in my view model as below: SelectedOption = Options[Options.Enter_Value].ToString();
    2. I want the combo box to become editable when the user selects Options.Enter_Value, so the user can enter a numeric value in it. If the user selects the second option, Options.Select_Value, a dialog should pop up that allows the user to select one of the predefined values, and the combo box should become read-only and show the selected value.

    Please help!
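
    A hedged sketch of one way to make the initial selection work: because the ItemsSource is a Dictionary<Option, string>, every item the ComboBox sees is a KeyValuePair<Option, string>, so a SelectedItem bound to a plain string can never match an item. The view model below (names other than Options and SelectedOption are invented for illustration) exposes SelectedOption as a KeyValuePair instead:

    using System.Collections.Generic;
    using System.ComponentModel;

    public enum Option { Enter_Value, Select_Value }

    public class OptionsViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        private readonly Dictionary<Option, string> options = new Dictionary<Option, string>
        {
            { Option.Enter_Value,  "Enter Value"  },
            { Option.Select_Value, "Select Value" }
        };
        public Dictionary<Option, string> Options { get { return options; } }

        private KeyValuePair<Option, string> selectedOption;
        public KeyValuePair<Option, string> SelectedOption
        {
            get { return selectedOption; }
            set
            {
                selectedOption = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("SelectedOption"));
            }
        }

        public OptionsViewModel()
        {
            // Initial selection: pass the whole KeyValuePair, not just the display string.
            SelectedOption = new KeyValuePair<Option, string>(Option.Enter_Value, options[Option.Enter_Value]);
        }
    }

    Since the ComboBox already sets SelectedValuePath="Key", an alternative is to bind SelectedValue (instead of SelectedItem) to a property of type Option. The editable/read-only switching in question 2 is a separate concern, typically handled in the SelectionChanged handler by toggling the ComboBox's IsEditable and IsReadOnly properties.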

    Read the article

  • jQuery image blur effect within a div?

    - by bala3569
    Consider that I have three images and one BannerDiv. On initial page load I should show the first image, and after some timeout, say 300ms, I must show the second image, and so on. I have to blur the first image and show the second image. Any suggestions on how this can be done with jQuery?

    <div Id="BannerDiv" style="display:none;">
        <img src="mylocation" alt="image1"/>
        <img src="mylocation" alt="image2"/>
        <img src="mylocation" alt="image3"/>
    </div>

    and my jQuery function is:

    <script type="text/javascript">
        $(document).ready(function() {
            //how to show first image and blur it to show second image after 300 ms
        });
    </script>

    EDIT: 1st image to fade out after 300ms and show 2nd image; 2nd image to fade out after 300ms and show 3rd image; 3rd image to fade out after 300ms and show 1st image.

    Read the article

  • C#: Can I return an HttpWebResponse result to an iframe? (Uses Digest authentication)

    - by chadsxe
    I am trying to figure out a way to display a cross-domain web page that uses Digest authentication. My initial thought was to make a web request and return the entire page source. I currently have no issues with authenticating and getting a response, but I am not sure how to properly return the needed data.

    // Create a request for the URL.
    WebRequest request = WebRequest.Create("http://some-url/cgi/image.php?type=live");
    // Set the credentials.
    request.Credentials = new NetworkCredential(username, password);
    // Get the response.
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    // Get the stream containing content returned by the server.
    Stream dataStream = response.GetResponseStream();
    // Open the stream using a StreamReader for easy access.
    StreamReader reader = new StreamReader(dataStream);
    // Read the content.
    string responseFromServer = reader.ReadToEnd();
    // Clean up the streams and the response.
    reader.Close();
    dataStream.Close();
    response.Close();
    return responseFromServer;

    My problems currently are:

    1. responseFromServer is not returning the entire source of the page, i.e. the body and head tags are missing.
    2. The data in responseFromServer is encoded improperly. I believe this has something to do with the transfer encoding being of type chunked.

    Furthermore, I am not entirely sure whether this is even possible. If it matters, this is being done in ASP.NET MVC 4 with C#. Thanks, Chad
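
    One hedged observation: the cgi endpoint above looks like it returns an image, and reading an image body with a StreamReader will mangle it regardless of the chunked transfer encoding (HttpWebRequest de-chunks the stream for you). A sketch of a proxy action that passes the raw bytes through instead is below; the controller name, action name and credentials are placeholders, and whether this fits the real page depends on what the URL actually serves.

    using System.IO;
    using System.Net;
    using System.Web.Mvc;

    public class ProxyController : Controller
    {
        public ActionResult LiveImage()
        {
            var request = (HttpWebRequest)WebRequest.Create("http://some-url/cgi/image.php?type=live");
            request.Credentials = new NetworkCredential("username", "password"); // digest credentials

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var dataStream = response.GetResponseStream())
            using (var buffer = new MemoryStream())
            {
                dataStream.CopyTo(buffer);   // copy as bytes, never as text
                // Return the bytes with the upstream content type; an <iframe> or <img>
                // on the page can then point at /Proxy/LiveImage.
                return File(buffer.ToArray(), response.ContentType);
            }
        }
    }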

    Read the article

  • TeamCity stopped working once I added NUnit to the mix

    - by Dave
    I'm struggling a lot trying to get our build server going. I am currently running tests in a Windows XP virtual machine, and have installed TeamCity v5.0.3, build 10821. I am using NUnit v2.5.3. I finished the initial setup with TeamCity without any issues at all, provided that I use the sln2008 build runner that makes the entire process almost brainless. It's really quite nice that way, and very satisfying to see your first successful automated build. Now it's time to kick it up a notch and I wanted to get NUnit working. I keep the NUnit 2.5.3 assemblies in an external libs folder in SVN, so I checked that out onto the test system. I selected NUnit 2.5.3 from the build runner options, as the online instructions had recommended. But when I build, I get the following errors:

    Window1.xaml.cs(14,7): error CS0246: The type or namespace name ‘NUnit’ could not be found (are you missing a using directive or an assembly reference?)
    Window1.xaml.cs(28,10): error CS0246: The type or namespace name ‘Test’ could not be found (are you missing a using directive or an assembly reference?)
    Window1.xaml.cs(28,10): error CS0246: The type or namespace name ‘TestAttribute’ could not be found (are you missing a using directive or an assembly reference?)

    Everything compiles great in the IDE. From finding blog posts and submitting comments, I got some advice and confirmed the following:

    - I have the HintPath value set properly in my project file (it points to the external lib)
    - I can also do a full Release and Debug build from the command line using msbuild
    - I have tried using the NUnit installer so that nunit.framework.dll gets registered in the GAC
    - I have changed the build agent's logon account to be a user on the test system, rather than LOCAL SYSTEM

    Nothing seems to help... can anyone else here offer me some advice on what to try next?

    Read the article

  • Managing a file-based public maven repository

    - by Roland Ewald
    I am looking for an easy way to manage a public file-based Maven repository. While we are using the open-source version of Artifactory internally, we now want to put a file-based repository of our published artifacts (and their dependencies) on a separate machine that is publicly available. There are several ways to do this, but none of them seems ideal:

    - Use the Maven Dependency plugin: if it is configured correctly and executed with the goal dependency:copy-dependencies for the release module of our project, it creates a local repository structure that is fine, but this structure contains neither the meta-data.xml files nor the hash sums.
    - Use Artifactory to export the repo: AFAIK Artifactory only allows exporting a repository as a whole. This would include the non-published modules from our project (which would then need to be deleted manually). Also, all dependencies sit in another repository, so this needs to be done twice, and many dependencies are not even required by a published artifact (only by artifacts that are still for internal use only). Nevertheless, this method would also include the meta-data.xml files and the hash sums for all files.

    To set up an initial version of the repository, I used a mixture of both methods: I first created the Maven repository for all required dependencies via dependency:copy-dependencies and then wrote a script to cherry-pick the meta-data.xml files (etc.) from Artifactory. This is terribly cumbersome; isn't there a better way to solve this? Maybe there is another Maven 3 plugin that I am unaware of, or some other command-line tool that does the job? I basically just need a simple way to create a Maven repository that contains all artifacts a given artifact depends on (and no more), and also contains all the metadata expected in a remote repository. Any ideas?

    Read the article

  • DomainDataSource DataPager with silverlight 3 DataGrid & .Net RIA Services

    - by Dennis Ward
    I have a simple DataGrid example with Silverlight 3, and am populating it with .NET RIA Services using a DomainDataSource along with a DataPager declaratively (nothing in the code-behind), and am experiencing this problem: the LoadSize is 30 and the PageSize is 15, and when the page is loaded, the 1st and 2nd pages appear correctly, but when I go beyond the 2nd page, nothing shows up in the grid. This used to work in the Silverlight 3 beta with the Mix 2009 preview of .NET RIA Services. I've got a really simple example and have verified that the service in the web project gets called to load a new batch, but the grid doesn't show any data. Can anyone shed any light on why the grid displays data only for the initial load and not for subsequent batches from the pager? Here's my XAML:

    <riaControls:DomainDataSource x:Name="ArtistSource" QueryName="GetArtist" AutoLoad="True" LoadSize="30" PageSize="15">
        <riaControls:DomainDataSource.DomainContext>
            <domain:AdminContext />
        </riaControls:DomainDataSource.DomainContext>
    </riaControls:DomainDataSource>
    <data:DataGrid Grid.Row="1" x:Name="ArtistDataGrid" ItemsSource="{Binding Data, ElementName=ArtistSource}">
    </data:DataGrid>
    <StackPanel Grid.Row="2">
        <data:DataPager Source="{Binding Data, ElementName=ArtistSource}" />
    </StackPanel>

    Read the article

  • Exception using Querytables in Excel Automation

    - by sam
    Hi, I'm using Automation to populate a range in Excel using a QueryTable. However, when I try to add a QueryTable to the worksheet I get an exception. I checked the connection string and it's working fine. Can someone please help me with this?

    public void writeproc1()
    {
        try
        {
            Worksheet ws = (Worksheet)wb.Worksheets.Add(Missing.Value, Missing.Value, Missing.Value, Missing.Value);
            Range rng = ws.get_Range("A1", "E14");
            QueryTable qt = ws.QueryTables.Add("Data Source=(local)\\SQLEXPRESS;initial catalog=temp;Integrated Security=SSPI;", rng, "Select * From Table_1");
            qt.RefreshStyle = XlCellInsertionMode.xlInsertEntireRows;
            qt.Refresh(false);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.ToString());
            Console.ReadKey();
        }
    }

    Exception thrown:
    System.Runtime.InteropServices.COMException (0x800A03EC): Exception from HRESULT: 0x800A03EC
    at System.RuntimeType.ForwardCallToInvokeMember(String memberName, BindingFlags flags, Object target, Int32[] aWrapperTypes, MessageData& msgData)
    at Microsoft.Office.Interop.Excel.QueryTables.Add(Object Connection, Range Destination, Object Sql)
    at tmp.Program.writeproc1() in ...Projects\tmp\tmp\Program.cs:line 25
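
    A hedged guess at the cause, for what it's worth: QueryTables.Add expects an Excel-style connection string that starts with a prefix such as "OLEDB;" or "ODBC;" and names a provider, rather than a raw ADO.NET connection string, and passing the latter is a common way to get 0x800A03EC. A sketch of the adjusted call (reusing ws and rng from the snippet above) would look like this:

    // Assumed fix, not verified against the poster's environment: give Excel an
    // OLE DB connection string with an explicit provider and the "OLEDB;" prefix.
    string connection =
        "OLEDB;Provider=SQLOLEDB;Data Source=(local)\\SQLEXPRESS;" +
        "Initial Catalog=temp;Integrated Security=SSPI;";

    QueryTable qt = ws.QueryTables.Add(connection, rng, "Select * From Table_1");
    qt.RefreshStyle = XlCellInsertionMode.xlInsertEntireRows;
    qt.Refresh(false);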

    Read the article

  • WPF DataGrid window resize does not resize DataGridColumns

    - by skylap
    I have a WPF DataGrid (from the WPFToolkit package) like the following in my application:

    <Controls:DataGrid>
        <Controls:DataGrid.Columns>
            <Controls:DataGridTextColumn Width="1*" Binding="{Binding Path=Column1}" Header="Column 1" />
            <Controls:DataGridTextColumn Width="1*" Binding="{Binding Path=Column2}" Header="Column 2" />
            <Controls:DataGridTextColumn Width="1*" Binding="{Binding Path=Column3}" Header="Column 3" />
        </Controls:DataGrid.Columns>
    </Controls:DataGrid>

    The column widths should be adjusted automatically so that all three columns fill the width of the grid, so I set Width="1*" on every column. I encountered the following problems with this approach:

    1. When the ItemsSource of the DataGrid is null or an empty List, the columns won't size to fit the width of the grid but have a fixed width of about 20 pixels. Please see the following picture: http://img169.imageshack.us/img169/3139/initialcolumnwidth.png
    2. When I maximize the application window, the columns won't adapt their size but keep their initial size. See the following picture: http://img88.imageshack.us/img88/9362/columnwidthaftermaximiz.png
    3. When I resize the application window with the mouse, the columns won't resize.

    I was able to solve problem #3 by deriving a subclass from DataGrid and overriding the DataGrid's OnRenderSizeChanged method as follows:

    protected override void OnRenderSizeChanged(SizeChangedInfo sizeInfo)
    {
        base.OnRenderSizeChanged(sizeInfo);
        foreach (var column in Columns)
        {
            var tmp = column.GetValue(DataGridColumn.WidthProperty);
            column.ClearValue(DataGridColumn.WidthProperty);
            column.SetValue(DataGridColumn.WidthProperty, tmp);
        }
    }

    Unfortunately this does not solve problems #1 and #2. How can I get rid of them?

    Read the article

  • NHibernate.Bytecode.UnableToLoadProxyFactoryFactoryException

    - by Shane
    I have the following code set up in my startup:

    IDictionary properties = new Dictionary();
    properties.Add("connection.driver_class", "NHibernate.Driver.SqlClientDriver");
    properties.Add("dialect", "NHibernate.Dialect.MsSql2005Dialect");
    properties.Add("proxyfactory.factory_class", "NNHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");
    properties.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
    properties.Add("connection.connection_string", "Data Source=ZEUS;Initial Catalog=mydb;Persist Security Info=True;User ID=sa;Password=xxxxxxxx");
    InPlaceConfigurationSource source = new InPlaceConfigurationSource();
    source.Add(typeof(ActiveRecordBase), (IDictionary<string, string>) properties);
    Assembly asm = Assembly.Load("Repository");
    Castle.ActiveRecord.ActiveRecordStarter.Initialize(asm, source);

    I am getting the following error:

    failed: NHibernate.Bytecode.UnableToLoadProxyFactoryFactoryException : Unable to load type 'NNHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle' during configuration of proxy factory class. Possible causes are: - The NHibernate.Bytecode provider assembly was not deployed. - The typeName used to initialize the 'proxyfactory.factory_class' property of the session-factory section is not well formed.

    I have read and read; I am referencing all the assemblies listed and I am at a total loss as to what to try next: Castle.ActiveRecord.dll, Castle.DynamicProxy2.dll, Iesi.Collections.dll, log4net.dll, NHibernate.dll, NHibernate.ByteCode.Castle.dll. I am 100% sure the assemblies are in the bin. Anyone have any ideas?
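
    One detail worth checking before digging into deployment issues: the factory type name in the snippet above (and echoed back in the exception) starts with "NNHibernate", with a doubled N. If that is not just a transcription slip, the corrected property line would be:

    // note the single "N": NHibernate.ByteCode.Castle, not NNHibernate.ByteCode.Castle
    properties.Add("proxyfactory.factory_class",
        "NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle");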

    Read the article

  • Reset scale/width/zoom of Safari on iPhone using JavaScript/onorientationchange

    - by dwarbi
    I am displaying different content depending on how the user is holding his/her phone, using the onorientationchange call in the body tag. This works great - I hide one div while making the other visible. The div in portrait mode looks great on first load. I use this to get the right scale/zoom: <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0;" /> Even if the content in portrait mode runs over, the width is correct and the user can scroll down. The display in landscape mode is perfect too. However, if the content in landscape mode requires the user to scroll down, then when the user returns to portrait mode, the screen is "zoomed out", so to speak. This happens whether or not the user actually scrolled down while in landscape mode. I've tried many different things to try to get the scale/zoom/width of the screen right, but no luck. Is there any way to do this? Thanks in advance!

    Read the article

  • Silverlight 4. Activator.CreateInstance uses a huge amount of memory

    - by Marco
    Hi, I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep performance and the initial load acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything ran smoothly without any problems so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute, and sometimes longer. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB, and sometimes to 1.5 GB. If the process doesn't consume that much, the layout is rendered properly and the memory falls back to 50 MB. Otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue? Thanks in advance, Marco

    Read the article

  • How can I force Javascript garbage collection in IE? IE is acting very slow after AJAX calls & DOM

    - by RenderIn
    I have a page with chained drop-downs. Choosing an option from the first select populates the second, and choosing an option from the second select returns a table of matching results using the innerHtml function on an empty div on the page. The problem is, once I've made my selections and a considerable amount of data is brought onto the page, all subsequent Javascript on the page runs exceptionally slowly. It seems as if all the data I pulled back via AJAX to populate the div is still hogging a lot of memory. I tried setting the return object which contains the AJAX results to null after calling innerHtml but with no luck. Firefox, Safari, Chrome and Opera all show no performance degradation when I use Javascript to insert a lot of data into the DOM, but in IE it is very apparent. To test that it's a Javascript/DOM issue rather than a plain old IE issue, I created a version of the page that returns all the results on the initial load, rather than via AJAX/Javascript, and found IE had no performance problems. FYI, I'm using jQuery's jQuery.get method to execute the AJAX call. EDIT This is what I'm doing: <script type="text/javascript"> function onFinalSelection() { var searchParameter = jQuery("#second-select").val(); jQuery.get("pageReturningAjax.php", {SEARCH_PARAMETER: searchParameter}, function(data) { jQuery("#result-div").get(0).innerHtml = data; //jQuery("#result-div").html(data); //Tried this, same problem data = null; }, "html"); } </script>

    Read the article

  • Problem with ScriptManager.RegisterStartupScript in WebControl nested in UpdatePanel

    - by Chris
    Hello, I am having what I believe should be a fairly simple problem, but for the life of me I cannot see my problem. The problem is related to ScriptManager.RegisterStartupScript, something I have used many times before. The scenario I have is that I have a custom web control that has been inserted into a page. The control (and one or two others) are nested inside an UpdatePanel. They are inserted onto the page onto a PlaceHolder: <asp:UpdatePanel ID="pnlAjax" runat="server"> <ContentTemplate> <asp:PlaceHolder ID="placeholder" runat="server"> </asp:PlaceHolder> ... protected override void OnInit(EventArgs e){ placeholder.Controls.Add(Factory.CreateControl()); base.OnInit(e); } This is the only update panel on the page. The control requires some initial javascript be run for it to work correctly. The control calls: ScriptManager.RegisterStartupScript(this, GetType(), Guid.NewGuid().ToString(), script, true); and I have also tried: ScriptManager.RegisterStartupScript(Page, Page.GetType(), Guid.NewGuid().ToString(), script, true); The problem is that the script runs correctly when the page is first displayed, but does not re-run after a partial postback. I have tried the following: Calling RegisterStartupScript from CreateChildControls Calling RegisterStartupScript from OnLoad / OnPreRender Using different combinations of parameters for the first two parameters (in the example above the Control is Page and Type is GetType(), but I have tried using the control itself, etc). I have tried using persistent and new ids (not that I believe this should have a major impact either way). I have used a few breakpoints and so have verified that the Register line is being called correctly. The only thing I have not tried is using the UpdatePanel itself as the Control and Type, as I do not believe the control should be aware of the update panel (and in any case there does not seem to be a good way of getting the update panel?). Can anyone see what I might be doing wrong in the above? Thanks :)
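
    For reference, a minimal sketch of the registration pattern that typically does re-execute on partial postbacks is shown below: register from OnPreRender on every request, pass the control itself (which lives inside the UpdatePanel) as the first argument, and use a stable key. The control class and the client-side initMyControl() function are invented for illustration; whether this resolves the specific case above is hard to say from the outside, but it is a known-good combination to compare against.

    using System;
    using System.Web.UI;

    public class MyPluggableControl : Control
    {
        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            const string script = "initMyControl();";   // assumed client-side init function
            ScriptManager.RegisterStartupScript(
                this,                  // a control inside the UpdatePanel being refreshed
                this.GetType(),
                ClientID + "_init",    // stable key; re-registering on each partial postback is intended
                script,
                true);
        }
    }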

    Read the article

  • Eclipse Blackberry Preprocessor Not Working?

    - by Jessica
    I've already followed the directions @ http://stackoverflow.com/questions/1383277/using-preprocessor-directives-in-blackberry-jde-plugin-for-eclipse for making sure the BlackBerry plugin preprocessing hook is (theoretically) enabled. I'm using Eclipse 3.5.1 with BlackBerry Plugin 1.1 with BB SDKs 4.7.0 and 4.6.0. I have my preprocessor defines set (and I've tried both the project's BlackBerry Properties and the workspace BlackBerry Build settings), and checked their capitalization and spelling carefully too. I'm fairly confident the actual code to say "this stuff should be preprocessed" is good, because including/excluding preprocessed code seems to work fine on command-line builds:

    //#preprocess

    at the beginning of the file, and then code blocks like this throughout:

    //#ifndef jde_4_7
    /*
    //#endif
    //#ifdef jde_4_7
    import net.rim.device.api.ui.TouchEvent;
    //#endif
    //#ifndef jde_4_7
    */
    //#endif

    So what I can't figure out is what else could be wrong that would cause Eclipse to not compile in my preprocessed code unless I remove the comments that are supposed to prevent the touch code from building into a build for BlackBerries that don't support touch. At one point it used to work (and no, I haven't updated Eclipse), but sometime in the last couple of weeks it seemed to just stop working. I'm getting tired of the error-prone process of searching for ifdefs and manually commenting/uncommenting touch code, and am looking for a better solution while I do testing and initial development that requires testing both touch and non-touch functionality. Any other ideas on what could be wrong or how to fix it?

    Read the article

  • Hidden/Shown AsyncFileUpload Control Doesn't Fire Server-Side UploadedComplete Event

    - by Bob Mc
    I recently came across the AsyncFileUpload control in the latest (3.0.40412) release of the ASP.Net Ajax Control Toolkit. There appears to be an issue when using it in a hidden control that is later revealed, such as a <div> tag with visible=false. Example: Page code - <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="act" %> . . . <act:ToolkitScriptManager runat="server" ID="ScriptManager1" /> <asp:UpdatePanel runat="server" ID="upnlFileUpload"> <ContentTemplate> <asp:Button runat="server" ID="btnShowUpload" Text="Show Upload" /> <div runat="server" id="divUpload" visible="false"> <act:AsyncFileUpload runat="server" id="ctlFileUpload" /> </div> </ContentTemplate> </asp:UpdatePanel> Server-side Code - Protected Sub ctlFileUpload_UploadedComplete(ByVal sender As Object, ByVal e As AjaxControlToolkit.AsyncFileUploadEventArgs) Handles ctlFileUpload.UploadedComplete End Sub Protected Sub btnShowUpload_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnShowUpload.Click divUpload.Visible = True End Sub I have a breakpoint on the UploadedComplete event but it never fires. However, if you take the AsyncFileUpload control out of the <div>, making it visible at initial page render, the control works as expected. So, is this a bug within the AsynchUploadControl, or am I not grasping a fundamental concept (which happens regularly)?
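
    A sketch of one common workaround, on the assumption that the root cause is that Visible="false" prevents the control (and its client-side plumbing) from being rendered at all: keep the <div> rendered and hide it with CSS instead. The markup would drop visible="false", and the code-behind (shown here in C#, although the snippet above is VB) would toggle the style:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // The panel is rendered into the page from the start, just not shown.
            divUpload.Style["display"] = "none";
        }
    }

    protected void btnShowUpload_Click(object sender, EventArgs e)
    {
        // Reveal the already-rendered AsyncFileUpload instead of creating it late.
        divUpload.Style["display"] = "block";
    }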

    Read the article

  • Problem hosting WebBrowser control in an ATL app.

    - by Andrew Bucknell
    I have a legacy ATL app that hosts a WebBrowser control in an ATL window. I create an instance of the client to host the browser using the following sequence:

    CComPtr<IOleObject> spOleObject;
    HRESULT hr = CoCreateInstance(CLSID_WebBrowser, NULL, CLSCTX_INPROC, ID_IOleObject, (void**)&spOleObject);
    spOleObject->SetClientSite(this);
    GetClientRect(&rcClient);
    hr = spOleObject->DoVerb(OLEIVERB_INPLACEACTIVATE, &msg, this, 0, m_hWnd, &rcClient);
    hr = AtlAdvise(m_spWebBrowser, GetUnknown(), DIID_DWebBrowserEvents2, &m_dwCookie);
    CComVariant navvar(navurl);
    m_spWebBrowser->Navigate2(&navvar, NULL, NULL, NULL, NULL);

    This sequence works fine to create the initial browser window. The call to Navigate2 works, and if I look at the window via Spy++ I have Shell Embedding - Shell DocObject View - Internet Explorer_Server. When a popup occurs (detected through NewWindow3) I launch a new window and execute the same code sequence for the new window. In the popup window Navigate2 doesn't work, and when I look at this new window in Spy++ I just have Shell Embedding. I get the same problem even if I instantiate the popup window on startup, so it's not related to NewWindow3 at all; it seems the second instance of the web control isn't instantiating, even though all the calls return S_OK. This sequence worked fine under IE7, but now I am using IE8 and the popup window isn't working. There is clearly something I am missing, but I can't guess what it may be. Any suggestions would be incredibly helpful.

    Read the article

  • FileNotFoundException when running WiX CustomAction with COMAdmin interop

    - by dabcabc
    I am trying to create a WiX custom action which will allow me to shutdown and clear down a COM+ package as part of an upgrade installation, or create and configure a new COM+ package as part of the initial installation. I previously had this running as a CustomAction within a standard Visual Studio MSI but this only allows the custom action to be executed after the files have been copied - which will fail as the package will still be running. The COMAdmin.dll has been added as a reference to the CustomAction project and is set CopyLocal=true. In the bin folder for the custom action project the Interop.COMAdmin.dll is present. The answer to this question seems to suggest that it should work. I am getting the following exception within the MSI log when trying to install: MSI (s) (C4:04) [10:40:34:205]: Invoking remote custom action. DLL: C:\WINDOWS\Installer\MSI119.tmp, Entrypoint: BeforeInstall SFXCA: Extracting custom action to temporary directory: C:\WINDOWS\Installer\MSI119.tmp-\ SFXCA: Binding to CLR version v2.0.50727 Calling custom action MyCustomAction!MyCustomAction.CustomActions.BeforeInstall Exception thrown by custom action: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.IO.FileNotFoundException: Could not load file or assembly 'Interop.COMAdmin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. File name: 'Interop.COMAdmin, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' at MyCustomAction.CustomActions.BeforeInstall(Session session) WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value . --- End of inner exception stack trace --- at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at System.RuntimeMethodHandle.InvokeMethodFast(Object target, Object arguments, Signature sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object parameters, CultureInfo culture) at Microsoft.Deployment.WindowsInstaller.CustomActionProxy.InvokeCustomAction(Int32 sessionHandle, String entryPoint, IntPtr remotingDelegatePtr)
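
    One hedged way around the missing interop assembly is to drop the COMAdmin reference entirely and late-bind the COM+ catalog through its ProgID, so there is no extra managed dependency for the custom action package to extract. The sketch below uses the documented COMAdminCatalog.ShutdownApplication call; the COM+ application name and the rest of the action body are placeholders, and the class/method names simply mirror the entry point shown in the log.

    using System;
    using System.Reflection;
    using Microsoft.Deployment.WindowsInstaller;

    namespace MyCustomAction
    {
        public class CustomActions
        {
            [CustomAction]
            public static ActionResult BeforeInstall(Session session)
            {
                // Late-bind COMAdmin via its ProgID so no Interop.COMAdmin.dll needs
                // to be deployed alongside the custom action assembly.
                Type catalogType = Type.GetTypeFromProgID("COMAdmin.COMAdminCatalog");
                object catalog = Activator.CreateInstance(catalogType);

                // "MyComPlusPackage" is a placeholder for the real COM+ application name.
                catalogType.InvokeMember(
                    "ShutdownApplication",
                    BindingFlags.InvokeMethod,
                    null,
                    catalog,
                    new object[] { "MyComPlusPackage" });

                return ActionResult.Success;
            }
        }
    }

    Alternatively, making sure Interop.COMAdmin.dll is actually embedded in the packaged .CA.dll would address the load failure directly; whether MakeSfxCA picks up that copy-local reference automatically is worth verifying against the WiX version in use.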

    Read the article

  • Sharing an assembly between ASP.NET and Silverlight

    - by vtortola
    Hi, I've created an assembly to share between my main app and the Silverlight app. At the beginning it looked like it was going to work, but now I get this exception: "System.IO.FileNotFoundException was caught, Message="Could not load file or assembly 'System.Xml.Linq'". I'm using .NET 3.5 SP1 and Silverlight 3. The shared assembly uses System.Xml.Linq, and it cannot be found... I think because it is trying to find that version in the .NET Framework instead of looking in the Silverlight one. How can I fix this? Cheers. PS: this is the full exception output: System.IO.FileNotFoundException was caught Message="Could not load file or assembly 'System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified." Source="MyApp.Metadata" FileName="System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" FusionLog="=== Pre-bind state information ===\r\nLOG: User = IIS APPPOOL\DefaultAppPool\r\nLOG: DisplayName = System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\n (Fully-specified)\r\nLOG: Appbase = file:///C:/Users/vtortola.MyApp/Documents/MyApp/MyAppSAS/WebApplication1/WebApplication1/\r\nLOG: Initial PrivatePath = C:\Users\vtortola.MyApp\Documents\MyApp\MyAppSAS\WebApplication1\WebApplication1\bin\r\nCalling assembly : MyApp.Metadata, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null.\r\n===\r\nLOG: This bind starts in default load context.\r\nLOG: Using application configuration file: C:\Users\vtortola.MyApp\Documents\MyApp\MyAppSAS\WebApplication1\WebApplication1\web.config\r\nLOG: Using host configuration file: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Aspnet.config\r\nLOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework64\v2.0.50727\config\machine.config.\r\nLOG: Post-policy reference: System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35\r\nLOG: The same bind was seen before, and was failed with hr = 0x80070002.\r\n" StackTrace: at MyApp.Metadata.MyAppEntity.Deserialize(String message)

    Read the article

  • Linq-to-Sql IIS7 Login failed for user ‘DOMAIN\MACHINENAME$’

    - by cfdev9
    I am encountering unexpected behaviour using a Linq-to-SQL DataContext. When I run my application locally it works as expected; however, after deploying to a test server that runs IIS7, I get the error "Login failed for user 'DOMAIN\MACHINENAME$'" when attempting to read objects from the DataContext. The code below demonstrates the error; it breaks on the very last line with "System.Data.SqlClient.SqlException: Login failed for user".

    var connStr = "Data Source=server;Initial Catalog=Test;User Id=testuser;Password=password";

    //Test 1
    var conn1 = new System.Data.SqlClient.SqlConnection(connStr);
    var cmdString = "SELECT COUNT(*) FROM Table1";
    var cmd = new System.Data.SqlClient.SqlCommand(cmdString, conn1);
    conn1.Open();
    var count1 = cmd.ExecuteScalar();
    conn1.Close();

    //Test 2
    var conn2 = new System.Data.SqlClient.SqlConnection(connStr);
    var context = new TestDataContext(conn2);
    var count2 = context.Table1s.Count();

    The connection string is not even using integrated security, so why is Linq-to-SQL trying to connect as a specific user? If I change the server name in the connection string I get a different error, so it's using at least part of the connection string, but apparently ignoring the User Id and Password. Very confused.
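
    A small diagnostic sketch, not a fix in itself: a login attempt as DOMAIN\MACHINENAME$ means SQL Server received Windows authentication under the application pool identity, which suggests some code path is using an Integrated Security connection string (for example a default compiled into the generated DataContext settings or web.config) rather than the SQL login passed in here. Logging what the context really ends up with can confirm that; TestDataContext and Table1s are the names from the snippet above.

    var connStr = "Data Source=server;Initial Catalog=Test;User Id=testuser;Password=password";

    using (var context = new TestDataContext(new System.Data.SqlClient.SqlConnection(connStr)))
    {
        // Which connection string is the context actually holding at runtime?
        System.Diagnostics.Trace.WriteLine(context.Connection.ConnectionString);
        System.Diagnostics.Trace.WriteLine(context.Connection.State);
        var count = context.Table1s.Count();
    }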

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, First off, I am not an AS 400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .Net application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the 1st request against a SPROC on the AS400, I am seeing ~ 14 seconds to get the full data set. After that initial call, any subsequent calls usually only take ~ 1 second to return. This performance improvement remains for ~ 20 mins or so before it takes 14 seconds again. The interesting part with this is that, if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS400 is acting in this manner when I am using the iSeries Data Provider for .Net? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS400:

    Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
    Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
    Cmd.CommandType = CommandType.StoredProcedure
    Using Conn
        Conn.Open()
        Dim Reader = Cmd.ExecuteReader()
        Using Reader
            While Reader.Read()
                'Do Something
            End While
            Reader.Close()
        End Using
        Conn.Close()
    End Using

    Read the article

  • Boosting my GA with Neural Networks and/or Reinforcement Learning

    - by AlexT
    As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble I've got a genetic algorithm working that can evolve a set of rules (handled by boolean values) in order to find a good solution through a maze. That being said, the GA alone is okay, but I'd like to beef it up with a neural network, even though I have no real working knowledge of neural networks (no formal theoretical CS education). After doing a bit of reading on the subject I found that a neural network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as 1 0 0 1 0 1 0 1 0 1 1 1 0 0... How could I use a neural network (I'm assuming an MLP?) to train and improve my genome? In addition to this, since I know nothing about neural networks I've been looking into implementing some form of reinforcement learning, using my maze matrix (2 dimensional array), although I'm a bit stuck on what the following algorithm wants from me (from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm):

    1. Set the parameter gamma, and the environment reward matrix R
    2. Initialize matrix Q as the zero matrix
    3. For each episode:
       - Select a random initial state
       - Do while the goal state has not been reached:
         - Select one among all possible actions for the current state
         - Using this possible action, consider going to the next state
         - Get the maximum Q value of this next state, based on all possible actions
         - Compute Q(state, action) = R(state, action) + gamma * max[Q(next state, all actions)]
         - Set the next state as the current state
       End Do
       End For

    The big problem for me is implementing the reward matrix R, understanding what the Q matrix exactly is, and getting the Q value. I use a multi-dimensional array for my maze and enum states for every move. How would this be used in a Q-learning algorithm? If someone could help out by explaining what I would need to do to implement this, preferably in Java although C# would be nice too, possibly with some source code examples, it'd be appreciated.
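
    Since the question explicitly asks for Java or C# source, here is a compact C# sketch of the tabular Q-learning loop quoted above. Everything concrete in it (the 5x5 grid, the reward of 100 for reaching the goal, gamma = 0.8) is invented for illustration; the real maze matrix and goal state would come from the application, and illegal moves are treated as no-ops rather than being excluded from the action set.

    using System;

    public static class QLearningDemo
    {
        const int Size = 5;                       // 5x5 maze -> 25 states
        const int States = Size * Size;
        const int Actions = 4;                    // 0=up, 1=down, 2=left, 3=right
        const double Gamma = 0.8;                 // discount factor (the "parameter" in the tutorial)
        const int GoalState = States - 1;         // bottom-right corner

        static readonly Random Rng = new Random();

        public static double[,] Train(int episodes)
        {
            var q = new double[States, Actions];              // step 2: Q initialised to zero

            for (int ep = 0; ep < episodes; ep++)             // step 3: for each episode
            {
                int state = Rng.Next(States);                 // random initial state
                while (state != GoalState)                    // until the goal is reached
                {
                    int action = Rng.Next(Actions);           // pick an action for the current state
                    int next = Step(state, action);
                    double reward = (next == GoalState) ? 100.0 : 0.0;   // the reward matrix R
                    double maxNext = 0.0;                     // max Q of the next state
                    for (int a = 0; a < Actions; a++)
                        maxNext = Math.Max(maxNext, q[next, a]);

                    // Q(s,a) = R(s,a) + gamma * max_a' Q(s',a')
                    q[state, action] = reward + Gamma * maxNext;
                    state = next;                             // next state becomes current state
                }
            }
            return q;
        }

        // Moves within the grid; walls/illegal moves simply leave the state unchanged.
        static int Step(int state, int action)
        {
            int row = state / Size, col = state % Size;
            if (action == 0 && row > 0) row--;
            if (action == 1 && row < Size - 1) row++;
            if (action == 2 && col > 0) col--;
            if (action == 3 && col < Size - 1) col++;
            return row * Size + col;
        }
    }

    Once Train has run, R is just the maze's reward matrix (zero everywhere except transitions into the goal), Q is the table the algorithm fills in, and the learned policy is simply: from the current state, take the action with the largest Q value.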

    Read the article
