Search Results

Search found 23890 results on 956 pages for 'issue'.


  • Changing or accessing a control in a Silverlight Data Form Edit Template

    - by Aim Kai
    I came across an interesting issue today when playing around with the Silverlight DataForm control. I wanted to change the visibility of a particular control inside the bound edit template; see the XAML below.

        <df:DataForm x:Name="NoteFormEdit" ItemsSource="{Binding Mode=OneWay}"
                     AutoGenerateFields="True" AutoEdit="True" AutoCommit="False"
                     CommitButtonContent="Save" CancelButtonContent="Cancel"
                     CommandButtonsVisibility="Commit" LabelPosition="Top"
                     ScrollViewer.VerticalScrollBarVisibility="Disabled"
                     EditEnded="NoteForm_EditEnded">
            <df:DataForm.EditTemplate>
                <DataTemplate>
                    <StackPanel>
                        <df:DataField>
                            <TextBox Text="{Binding Title, Mode=TwoWay}"/>
                        </df:DataField>
                        <df:DataField>
                            <TextBox Text="{Binding Description, Mode=TwoWay}" AcceptsReturn="True"
                                     HorizontalScrollBarVisibility="Auto"
                                     VerticalScrollBarVisibility="Auto" Height=""
                                     TextWrapping="Wrap" SizeChanged="TextBox_SizeChanged"/>
                        </df:DataField>
                        <df:DataField>
                            <TextBlock Text="{Binding Username}" x:Name="tbUsername"/>
                        </df:DataField>
                        <df:DataField>
                            <TextBlock Text="{Binding DateCreated, Converter={StaticResource DateConverter}}"
                                       x:Name="tbDateCreated"/>
                        </df:DataField>
                    </StackPanel>
                </DataTemplate>
            </df:DataForm.EditTemplate>
        </df:DataForm>

    Depending on how the container of this data form was accessed, I wanted to disable or hide the last two data fields. I worked around it with two separate data forms, but that is a bit excessive! Does anyone know how to access these controls inside the edit template?
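
    A minimal sketch of one possible approach, assuming the names from the XAML above: wait until the edit template has been applied (e.g. the form's Loaded event, or whichever event your DataForm version raises once editing content exists), then walk the visual tree to the named control. FindChildByName is a helper written here for illustration, not a DataForm API.

        // Sketch: locate a named control inside the applied edit template.
        // Requires: using System.Windows; using System.Windows.Controls;
        //           using System.Windows.Media;
        private static T FindChildByName<T>(DependencyObject parent, string name) where T : FrameworkElement
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(parent); i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(parent, i);
                T element = child as T;
                if (element != null && element.Name == name)
                    return element;
                T result = FindChildByName<T>(child, name);
                if (result != null)
                    return result;
            }
            return null;
        }

        // e.g. in a handler that runs after the template has been applied:
        private void NoteFormEdit_Loaded(object sender, RoutedEventArgs e)
        {
            TextBlock username = FindChildByName<TextBlock>(NoteFormEdit, "tbUsername");
            TextBlock created = FindChildByName<TextBlock>(NoteFormEdit, "tbDateCreated");
            if (username != null) username.Visibility = Visibility.Collapsed;
            if (created != null) created.Visibility = Visibility.Collapsed;
        }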

    Read the article

  • Listview of items from a object selected in another listview

    - by Ingó Vals
    Ok, the title may be a little confusing. I have a database with the table Companies, which has a one-to-many relationship with another table, Divisions (so each company can have many divisions), and each division will have many employees. I have a ListView of the companies. What I want is that when I choose a company from the ListView, another ListView of the divisions within that company appears below it. Then I pick a division, and another ListView of the employees within that division appears below that. You get the picture. Is there any way to do this mostly inside the XAML code, declaratively? I'm using LINQ, so the Company entity objects have a property named Division which, if I understand LINQ correctly, should include the Division objects of the divisions connected to the company. So after getting all the companies and setting them as the ItemsSource of CompanyListView, this is where I currently am:

        <ListView x:Name="CompanyListView" DisplayMemberPath="CompanyName"
                  Grid.Row="0" Grid.Column="0" />
        <ListView DataContext="{Binding ElementName=CompanyListView, Path=SelectedItem}"
                  DisplayMemberPath="Division.DivisionName"
                  Grid.Row="1" Grid.Column="0" />

    I know I'm way off, but I was hoping that by putting something specific in the DataContext and DisplayMemberPath I could get this to work. If not, then I guess I have to capture the Id of the company and handle a select event or something. Another issue, but related: in the second column, beside the ListView, I want to have a details/edit view for the selected item. So when only a company is selected, details about it will appear; then when a division under the company is picked, it will go there instead. Any ideas?
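
    For what it's worth, the usual master-detail pattern needs only two changes to the second ListView: bind its ItemsSource to the first list's SelectedItem (reaching into the Division property named in the question), and keep DisplayMemberPath as a plain property of the items themselves. A sketch, with DivisionName assumed from the XAML above:

        <ListView x:Name="CompanyListView" DisplayMemberPath="CompanyName"
                  Grid.Row="0" Grid.Column="0" />
        <!-- ItemsSource points at the selected company's Division collection;
             DisplayMemberPath must name a property of each Division item. -->
        <ListView x:Name="DivisionListView"
                  ItemsSource="{Binding ElementName=CompanyListView, Path=SelectedItem.Division}"
                  DisplayMemberPath="DivisionName"
                  Grid.Row="1" Grid.Column="0" />

    The employees list can repeat the same pattern against DivisionListView's SelectedItem, and the details/edit pane can bind to the SelectedItem of whichever list is relevant.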

    Read the article

  • Undefined control sequence

    - by Jelle Fresen
    Hi, I am writing my Master's thesis in LaTeX, but I can't get the provided style to work. Specifically, I get the error "Undefined control sequence" when using the command \makeformaltitlepages, which is defined in mscthesis.sty. On the internet, the only answers I could find were the straightforward "you probably made a typo" or "you probably forgot to include the package", but I have reason to believe neither of those applies to me. I am quite sure that the command exists, for when I add a little verification using the \@ifundefined command, the log file shows that the command actually does exist. And, as can be seen in the following piece of code, I also include the package:

        \usepackage{mscthesis}
        % setup information like author, company, title, etc.
        \begin{document}
        \formatmatter
        \thispagestyle{empty}
        \maketitle
        \makeatletter
        \@ifundefined{makeformaltitlepages}{\message{Function is not defined.}}{\message{Function is defined.}}
        \makeatother
        \makeformaltitlepages{\input{abstract}}
        % add chapters, sections, etc. and end the document

    Now, the output shows the line "Function is defined." just before the output of \maketitle (which I think is rather strange on its own, but that might be a flushing issue), followed by the following infinitely repeated error (well, cut off after 100 times by LaTeX):

        Function is defined.
        // some gibberish about font info
        ! Undefined control sequence.
        \GenericError ...
        #4 \errhelp \@err@ ...
        l.112 \makeformaltitlepages{}
        The control sequence at the end of the top line of your error message was
        never \def'ed. If you have misspelled it (e.g., `\hobx'), type `I' and the
        correct spelling (e.g., `I\hbox'). Otherwise just continue, and I'll forget
        about whatever was undefined.

    While the error keeps repeating, the line that starts with '#4' cycles between the following four lines:

        #4 \errhelp \@err@ ...
        \let \@err@ ...
        \@empty \def \MessageBreak...
        \endgroup

    Ok, so, do any of you have a suggestion for how I might continue to hunt this bug? Or what blatantly obvious mistake did I make?
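
    One debugging sketch worth trying first: since \makeformaltitlepages is demonstrably defined, the undefined control sequence is most likely a token used inside its expansion rather than the macro itself. Asking TeX to print the definition shows which tokens to chase next:

        % Debugging sketch: print the macro body to the terminal/log, then
        % \show each suspicious token from that body in turn.
        \makeatletter
        \show\makeformaltitlepages
        \makeatother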

    Read the article

  • Calling WCF service with parameter using jQuery

    - by Remi Despres-Smyth
    I'm trying to call a WCF web service hosted by IIS using jQuery. I can call it fine without any parameters, and I can also call it fine with a GET request that includes my parameter, but as soon as I try to send the request via POST, the call fails. The web service is currently nothing but:

        [OperationContract, WebInvoke]
        public ValidationResultSummary TestValidateOn(object day)
        {
            return null;
        }

    I've set the parameter to object to make sure the issue isn't something to do with type coercion. With a breakpoint in the web service, I know the call without parameters, as well as the GET call with the param, succeeds; in the latter the expected value is sent up. The calling code looks like:

        $.ajax({
            // type: "GET",
            // url: "../Shared/Services/DomainServices.svc/TestValidateOn?day='12/Jan/2010'",
            type: "POST",
            url: "../Shared/Services/DomainServices.svc/TestValidateOn",
            // data: "{ }", // This works with the object-typed param; calls with null
            data: "{'day': " + selectedDate + "}", // This fails miserably
            // data: "{'day': '" + selectedDate + "'}", // This also fails miserably
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(data) { displayResults(data.d); },
            error: function(xmlHttpReq, status, errThrown) { displayError(xmlHttpReq, status, errThrown); }
        });

    The POST call never reaches my breakpoint, and on the client, error 500 - "Internal Server Error" - is returned. Any help would be appreciated.
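
    A sketch of one thing worth checking, assuming selectedDate is a JavaScript Date: concatenating it into the body produces an unquoted token that is not valid JSON, which WCF answers with a bare 500 before the method is ever reached. Serializing the body (json2.js provided JSON.stringify for 2010-era browsers) at least makes the request well-formed:

        // Sketch: send a valid JSON body instead of hand-concatenated text.
        var body = JSON.stringify({ day: selectedDate.toUTCString() });
        $.ajax({
            type: "POST",
            url: "../Shared/Services/DomainServices.svc/TestValidateOn",
            data: body,
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (data) { displayResults(data.d); },
            error: function (xhr, status, err) { displayError(xhr, status, err); }
        });

    Enabling WCF tracing, or reading the response body of the 500, usually names the exact deserialization failure.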

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into two main roles, webserver and storage. Each role is replicated to a second server using DRBD in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. In the storage servers I created a GFS2 FS to hold the data, which is wired to DRBD. I chose GFS2 mainly because of the announced performance and also because of the volume size, which has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I found out that NFS stops answering for a while and outputs the following log lines:

        Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying
        Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK
        Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK

    In this case the hang lasted for 16 seconds, but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount, and that by increasing RPCNFSDCOUNT to a higher value this would become stable.
    I've increased it several times and apparently, after a while, the log entries started appearing less often. The value is now 32. After further investigating the issue I came across a different hang, even though the NFS messages still appear in the logs. Sometimes the GFS2 FS simply hangs, which causes both NFS and the storage webserver to stop serving files. Both stay hung for a while and then resume normal operations. These hangs leave no trace on the client side (also no "NFS ... not responding" messages), and on the storage side the log system appears to be empty, even though rsyslogd is running. The nodes connect through a 10Gbps non-dedicated connection, but I don't think this is an issue, because the GFS2 hang is confirmed even when connecting directly to the active storage server. I've been trying to solve this for a while now, and I tried different NFS configuration options before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such:

        /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25)

    And the NFS client mounts with:

        mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data

    After some tests, these were the configurations that yielded the most performance for the cluster. I am desperate to find a solution for this, as the cluster is already in production mode and I need to fix it so that these hangs won't happen in the future, and I don't really know for sure what or how I should be benchmarking. What I can tell is that this is happening due to heavy loads, as I tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve these problems, as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards.

    EDIT 1: The servers are physical servers and their specs are:

    Webservers:

        Intel Bi Xeon E5606 2x4 2.13GHz
        24GB DDR3
        Intel SSD 320 2 x 120GB Raid 1

    Storage:

        Intel i5 3550 3.3GHz
        16GB DDR3
        12 x 2TB SATA

    Initially there was a VRack setup between the servers, but we upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, when the problem still existed). The firewall was configured and tested thoroughly, but I disabled it for a while to see if the problem still occurred, and it did. From my knowledge the hosting provider isn't blocking or limiting the connection between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.
    EDIT 2: Relevant software versions:

        CentOS 2.6.32-279.9.1.el6.x86_64
        nfs-utils-1.2.3-26.el6.x86_64
        nfs-utils-lib-1.1.5-4.el6.x86_64
        gfs2-utils-3.0.12.1-32.el6_3.1.x86_64
        kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64
        drbd84-utils-8.4.2-1.el6.elrepo.x86_64

    DRBD configuration on the storage servers:

        #/etc/drbd.d/storage.res
        resource storage {
            protocol C;
            on <server1 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server1 ip>:7788;
                meta-disk internal;
            }
            on <server2 fqdn> {
                device /dev/drbd0;
                disk /dev/vg_storage/LV_replicated;
                address <server2 ip>:7788;
                meta-disk internal;
            }
        }

    NFS configuration on the storage servers:

        #/etc/sysconfig/nfs
        RPCNFSDCOUNT=32
        STATD_PORT=10002
        STATD_OUTGOING_PORT=10003
        MOUNTD_PORT=10004
        RQUOTAD_PORT=10005
        LOCKD_UDPPORT=30001
        LOCKD_TCPPORT=30001

    (Can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?)

    GFS2 configuration:

        # gfs2_tool gettune <mountpoint>
        incore_log_blocks = 1024
        log_flush_secs = 60
        quota_warn_period = 10
        quota_quantum = 60
        max_readahead = 262144
        complain_secs = 10
        statfs_slow = 0
        quota_simul_sync = 64
        statfs_quantum = 30
        quota_scale = 1.0000 (1, 1)
        new_files_jdata = 0

    Storage network environment:

        eth0      Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip address>  Bcast:<bcast address>  Mask:<ip mask>
                  inet6 addr: <ip address> Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:2630984979622 (2.3 TiB)  TX bytes:1648430431523 (1.4 TiB)

        eth0:0    Link encap:Ethernet  HWaddr <mac address>
                  inet addr:<ip failover address>  Bcast:<bcast address>  Mask:<ip mask>
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

    The IP addresses are statically assigned with the given network configurations:

        DEVICE="eth0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        ONBOOT="yes"
        TYPE="Ethernet"
        IPADDR=<ip address>
        NETMASK=<net mask>

    and

        DEVICE="eth0:0"
        BOOTPROTO="static"
        HWADDR=<mac address>
        IPADDR=<ip failover>
        NETMASK=<net mask>
        ONBOOT="yes"
        BROADCAST=<bcast address>

    Hosts file to allow for a graceful NFS failover, in conjunction with the NFS option fsid=25 set on both storage servers:

        #/etc/hosts
        <storage ip failover address> active.storage.vlan
        <webserver ip failover address> active.service.vlan

    As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN by now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key point for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details, please tell me.

    EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely, and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while.

        INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • How do I pass a custom field to a hook (Invision Power Board [ipb] / PHP)

    - by Julian Young
    A long shot, but here's hoping someone has some experience coding PHP hooks for Invision's Power Board forum. I'm attempting to code a status addition, and the PHP works fine on its own; it's the passing of IPB's reference to my hook that is the issue. I.e., you set up a custom field in your forum for MSN Username, then from within a skin/template hook you pass the custom field to the hook and use your PHP code to check on the status. Here is the IPB skin code I am hooking into on Global-userInfoPane:

        <if test="authorcfields:|:$author['custom_fields'] != """>
            <foreach loop="customFieldsOuter:$author['custom_fields'] as $group => $data">
                <foreach loop="customFields:$author['custom_fields'][ $group ] as $field">
                    <if test="$field != ''">
                        <li> {$field} </li>
                    </if>
                </foreach>
            </foreach>
        </if>

    Although I could easily add my own skin hook here, i.e.:

        <if test="myHookHere:|:1===1"></if>

    Literally all I need is a single custom field entry from here passed to my hook. If I query every member when the hook is run, that will result in many extra SQL queries per page view. All I want to do is pass that specific custom field to the hook, i.e.:

        myHookHere( $customfield['msn_username'] )

    Is this possible? How do you reference the custom field? Can I execute pure PHP from here? I appreciate anyone who can help! I tried the official Invision forums but haven't had much luck.
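
    Purely as a hypothetical sketch (the field key, helper name and hook signature are all assumptions, since custom-field ids vary per board): if the hook can receive the same $author array the template already has in scope, the status check needs no extra query at all.

        <?php
        // Hypothetical sketch only: read one custom field out of the $author
        // array that userInfoPane already carries, so no additional SQL runs.
        function myHookHere($author)
        {
            $msn = '';
            if (!empty($author['custom_fields'])) {
                foreach ($author['custom_fields'] as $group => $fields) {
                    if (isset($fields['field_3'])) {  // 'field_3': assumed MSN field id
                        $msn = $fields['field_3'];
                    }
                }
            }
            return checkMsnStatus($msn);              // your existing status code
        }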

    Read the article

  • How to bind a datatable to a wpf editable combobox: selectedItem showing System.Data.DataRowView

    - by black sensei
    Hello Good People!! I bound a DataTable to a ComboBox and defined a DataTemplate in the ItemTemplate. I can see the desired values in the ComboBox dropdown list, but what I see in the SelectedItem is "System.Data.DataRowView". Here is my code:

        <ComboBox Margin="128,139,123,0" Name="cmbEmail" Height="23" VerticalAlignment="Top"
                  TabIndex="1" ToolTip="enter the email you signed up with here"
                  IsEditable="True" IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding}">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Path=username}"/>
                    </StackPanel>
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>

    The code-behind is as follows:

        if (con != null)
        {
            con.Open();
            // users table has columns id | username | pass
            SQLiteCommand cmd = new SQLiteCommand("select * from users", con);
            SQLiteDataAdapter da = new SQLiteDataAdapter(cmd);
            userdt = new DataTable("users");
            da.Fill(userdt);
            cmbEmail.DataContext = userdt;
        }

    I've been looking for something like a SelectedValueTemplate or SelectedItemTemplate to do the same kind of data templating, but I found none. I'd like to ask if I'm doing something wrong, or if this is a known issue with ComboBox binding. If something is wrong in my code, please point me in the right direction. Thanks for reading this.
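
    A sketch of the usual explanation: an editable ComboBox renders the selected item by calling ToString() on it (hence "System.Data.DataRowView"); the ItemTemplate only affects the dropdown. Telling the text box which column to show, for example with TextSearch.TextPath, normally resolves it (names taken from the XAML above):

        <!-- Sketch: TextSearch.TextPath names the property of the selected
             DataRowView to display, instead of its ToString() result. -->
        <ComboBox Name="cmbEmail" IsEditable="True" IsSynchronizedWithCurrentItem="True"
                  ItemsSource="{Binding}" TextSearch.TextPath="username">
            <ComboBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Path=username}"/>
                </DataTemplate>
            </ComboBox.ItemTemplate>
        </ComboBox>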

    Read the article

  • Negative ItemCount in SharePoint Document Library

    - by ccomet
    What can be done about negative numbers in library item counts? ItemCount is a read-only property; what are you supposed to do when it is drastically incorrect? Earlier last week, I was doing some testing involving the copying and moving of files and folders from one document library to another. I was transferring the items from our actual document library to a sandbox "Test" library that I use to run all sorts of object model and workflow testing in before migrating to the public lists and libraries. I noticed that with files, things worked correctly, but when I copied a folder that had a file inside it (using SPFolder.CopyTo()), the item count for the test library did not actually update. Since this testing was mostly playing around, I paid it little heed. Today I was back in the test library to test a different workflow (regarding PDF conversion). While I was there, I decided to delete the folder I left last week, since I didn't need it anymore. And that's when I saw the item count for the list drop to -1 in the All Site Content view. When I deleted the new PDF I had just uploaded, it then dropped to -2! I even checked with the object model: getting an instance of the library, I checked the ItemCount property... lo and behold, it was also -2. Is there any process which runs in the background, kinda like the one that cleans up workflow history, which will correct this kind of issue? Or is a programmer expected to keep watch for this kind of situation and come up with calculations to compensate for the "count penalty", as it were?
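
    Since SPList.ItemCount cannot be assigned, the compensating calculation the question contemplates boils down to measuring the drift with a real query. A hedged sketch (the site URL and list name are placeholders):

        // Sketch: compare the cached ItemCount with an actual enumeration.
        // Requires: using Microsoft.SharePoint;
        using (SPSite site = new SPSite("http://server/site"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Test"];
            SPQuery query = new SPQuery();
            query.ViewAttributes = "Scope=\"RecursiveAll\"";  // folders included
            int realCount = list.GetItems(query).Count;
            if (list.ItemCount != realCount)
            {
                // work from realCount downstream, and log the discrepancy
            }
        }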

    Read the article

  • JQGrid datatype as Ajax function not getting called

    - by mraman
    Hi, my jqGrid datatype-as-function is not getting called. When I tried to debug using Firebug, I found out that those lines are not executed. Please let me know the issue with my code. Thanks in advance.

        jQuery("#list").jqGrid({
            //url:'example.xml',
            datatype: function() {
                $.ajax({
                    url: "example.xml",
                    data: "{}",
                    dataType: "xml",
                    mtype: "GET",
                    complete: function(jsondata, stat) {
                        alert((jsondata.responseText));
                        if (stat == "success") {
                            alert("ew");
                        }
                    },
                    error: function() { alert("error"); }
                });
            },
            colNames: ['QueueName', 'SLA Associated', 'SLA met', 'SLA Breached', 'SLA MET %', 'SLA Breached %'],
            colModel: [
                { name: 'QueueName', index: 'QueueName', width: 150 },
                { name: 'SLAAssociated', index: 'SLAAssociated', width: 150 },
                { name: 'SLAmet', index: 'SLAmet', width: 150 },
                { name: 'SLABreached', index: 'SLABreached', width: 150 },
                { name: 'SLAMETPer', index: 'SLAMETPer', width: 150 },
                { name: 'SLABreachedPer', index: 'SLABreachedPer', width: 150 }
            ],
            pager: jQuery('#pager1'),
            rowNum: 1,
            rowList: [5, 10],
            imgpath: 'themes/basic/images'
        });

    In the header I add the following:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <link rel="stylesheet" type="text/css" media="screen" href="themes/basic/grid.css" />
        <link rel="stylesheet" type="text/css" media="screen" href="themes/jqModal.css" />
        <link rel="stylesheet" type="text/css" media="screen" href="css/report.css" />
        <script src="jquery.js" type="text/javascript"></script>
        <script src="jquery.jqGrid.js" type="text/javascript"></script>
        <script src="js/jqModal.js" type="text/javascript"></script>
        <script src="js/jqDnR.js" type="text/javascript"></script>
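
    One detail worth double-checking against the jqGrid docs: when datatype is a function, the grid hands it the posted parameters and expects the function itself to push the response into the grid (for XML, via the grid element's addXmlData method). A sketch of that wiring, trimmed to two columns; note also that $.ajax expects type:, while mtype: is a jqGrid option, not an $.ajax one:

        // Sketch: a custom datatype must feed the response back to the grid.
        jQuery("#list").jqGrid({
            datatype: function (postdata) {
                $.ajax({
                    url: "example.xml",
                    data: postdata,
                    dataType: "xml",
                    type: "GET",
                    success: function (xmldata) {
                        jQuery("#list")[0].addXmlData(xmldata);
                    },
                    error: function () { alert("error"); }
                });
            },
            colNames: ['QueueName', 'SLA Associated'],
            colModel: [
                { name: 'QueueName', index: 'QueueName', width: 150 },
                { name: 'SLAAssociated', index: 'SLAAssociated', width: 150 }
            ],
            pager: jQuery('#pager1'),
            rowNum: 10
        });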

    Read the article

  • How to add custom SOAP-Header element to the generated WSDL in Spring-WS

    - by Petr Macek
    Hi, we are migrating from WebLogic web services to Spring-WS (1.5.X). There is currently one issue we are facing: we need to pass a context object (on WLS it is passed as a SOAP-Header element) from the Spring-WS-powered service to other services that are still running on WLS. The header element is still formulated on the client side, and the newly created WS (Spring-WS) should just pass it on to the other services. I can imagine how the custom element would be passed: override the doWithMessage(WebServiceMessage message) method... Is there a way to generate the WSDL with the help of DefaultWsdl11Definition so that it contains that custom header element? See the example:

        <wsdl:operation name="GetSomeInformation">
            <soap:operation soapAction="http://www.dummyservice.com/InformationService/GetSomeInformation" />
            <wsdl:input>
                <soap:body use="literal" />
                <soap:header message="ctx:ServiceContextMessage" part="serviceContext" use="literal" />
            </wsdl:input>
            <wsdl:output>
                <soap:body use="literal" />
            </wsdl:output>
            <wsdl:fault name="Error">
                <soap:fault name="Error" use="literal" />
            </wsdl:fault>
        </wsdl:operation>

    Thanks for the help.
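
    As far as I know, the generated definition only describes body bindings, so the common fallback (a workaround sketch, not a DefaultWsdl11Definition feature) is to save the generated WSDL once, add the soap:header to it by hand, and serve that file statically:

        <!-- Sketch: expose a hand-edited WSDL instead of a generated one.
             The file path is a placeholder. -->
        <bean id="informationService"
              class="org.springframework.ws.wsdl.wsdl11.SimpleWsdl11Definition">
            <property name="wsdl" value="/WEB-INF/wsdl/InformationService.wsdl"/>
        </bean>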

    Read the article

  • Inaccurate performance counter timer values in Windows Performance Monitor

    - by krisg
    I am implementing instrumentation within an application and have encountered an issue where the value displayed in Windows Performance Monitor from a PerformanceCounter is incongruent with the value that is recorded. I am using a Stopwatch to record the duration of a method execution; first I record the total milliseconds as a double, and secondly I pass the Stopwatch's TimeSpan.Ticks to the PerformanceCounter to be recorded in Performance Monitor. Creating the performance counters in PerfMon:

        var datas = new CounterCreationDataCollection();
        datas.Add(new CounterCreationData
        {
            CounterName = name,
            CounterType = PerformanceCounterType.AverageTimer32
        });
        datas.Add(new CounterCreationData
        {
            CounterName = namebase,
            CounterType = PerformanceCounterType.AverageBase
        });
        PerformanceCounterCategory.Create("Category", "performance data",
            PerformanceCounterCategoryType.SingleInstance, datas);

    Then, to record, I retrieve a pre-initialized counter from a collection and increment it:

        _counters[counter].IncrementBy(timing);
        _counters[counterbase].Increment();

    ...where "timing" is the Stopwatch's TimeSpan.Ticks value. When this runs, the collection of doubles, which are the millisecond values for the Stopwatch's TimeSpan, shows one set of values, but what appears in PerfMon is a different set of values. For example, two values recorded in the list of milliseconds are:

        23322.675, 14230.614

    And what appears in the PerfMon graph are:

        15.546, 9.930

    Can someone explain this please?
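
    A sketch of the likely culprit: AverageTimer32 divides the raw increment by the system performance-counter frequency, so it expects Stopwatch.ElapsedTicks (raw counter ticks), whereas TimeSpan.Ticks are fixed 100-nanosecond units; the two only agree on machines where the performance counter happens to run at 10 MHz. The hedged fix:

        // Sketch: feed AverageTimer32 raw Stopwatch ticks, not TimeSpan ticks.
        // Requires: using System.Diagnostics;
        Stopwatch sw = Stopwatch.StartNew();
        MethodUnderMeasurement();                        // placeholder for the timed call
        sw.Stop();
        _counters[counter].IncrementBy(sw.ElapsedTicks); // not sw.Elapsed.Ticks
        _counters[counterbase].Increment();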

    Read the article

  • SSRS 2005 - Cascading parameters and default value update problem

    - by sHr0oMaN
    I have a report with cascading parameters. The first parameter is Financial Period Type, being either Month or Week. The second parameter is a list of either financial months or weeks, depending on what was selected for the first parameter. This all works well, and selecting a series of different Financial Period Types in sequence correctly updates the second parameter's values. However, I now wish to add a default value for the second parameter, which is once again dependent on the first parameter. So I've added an additional field called DefaultPeriod to the dataset populating the second parameter, and set the second parameter's default value to be retrieved from the above field. The first time I select the Financial Period Type, the default is correctly set. However, changing the Financial Period Type results in an updated list for the second parameter, but the default is incorrect. It remains set to the original default value, even though the dataset has been refreshed and the DefaultPeriod field is correct. This is an issue both in the IDE and on the Report Manager site.

    Read the article

  • Mimicking the UltraGridColumnChooser's drag & drop ability

    - by Sören Kuklau
    (Infragistics 2008 Vol. 3, CLR 2.0) Infragistics's UltraGrid comes with a column chooser user control, which is simply a vertical arrangement of columns with checkboxes that toggle a column's hidden state. In addition, it allows you to pick a column and drag it directly to the grid so you don't have to manually position it afterwards. (This is particularly handy when you already have a lot of visible columns and have no clue where the new one ended up.) I'm building my own column chooser based on an UltraTree. Getting the checkboxes to behave the same wasn't an issue, but I haven't found a way to drag a column from the tree to the grid and have it accept it. In my tree, each UltraTreeNode has a Tag with the following struct:

        Private Structure DraggableGridColumn
            Public NodeKey As String
            Public NodeName As String
            Public ParentKey As String
            Public Column As UltraGridColumn
        End Structure

    I then have an event as follows:

        Private Sub columnsTree_SelectionDragStart(ByVal sender As Object, ByVal e As System.EventArgs) _
                Handles columnsTree.SelectionDragStart
            If columnsTree.SelectedNodes.Count <> 1 Then
                Return
            End If
            If Not TypeOf columnsTree.SelectedNodes(0).Tag Is DraggableGridColumn Then
                Return
            End If
            Dim column As UltraGridColumn = CType(columnsTree.SelectedNodes(0).Tag, DraggableGridColumn).Column
            columnsTree.DoDragDrop(column, DragDropEffects.All)
        End Sub

    In the DoDragDrop call, neither column (of type UltraGridColumn) nor column.Header (of type ColumnHeader) gets accepted by the grid. I assume I'm sending the wrong type, and/or that the grid expects a special struct with some additional information. Unfortunately, I've also failed to catch an event (both on the column chooser side as well as on the grid side) where Infragistics's normal column chooser does this properly; the normal drag & drop events never seem to fire.

    Read the article

  • Testing movie with Flash IDE fails to load file from localhost

    - by davgothic
    Hi, I'm just wondering if anybody can help me with my simple but frustrating problem. I have created an SWF that loads an XML file from http://localhost/flash/Projects/MEL/Quiz/Quiz/bin/xml/quiz.xml, but I get this error when running the movie using Test Movie in the Flash IDE:

        Error #2044: Unhandled ioError:. text=Error #2032: Stream Error. URL: http://localhost/flash/Projects/MEL/Quiz/Quiz/bin/xml/quiz.xml
        at Main/loadConfig()[D:\www\webroot\flash\Projects\MEL\Quiz\Quiz\src\Main.as:126]
        at Main/configLoadError()[D:\www\webroot\flash\Projects\MEL\Quiz\Quiz\src\Main.as:143]
        at flash.events::EventDispatcher/dispatchEventFunction()
        at flash.events::EventDispatcher/dispatchEvent()
        at flash.net::URLLoader/onComplete()

    The error I get if I handle the exception is:

        [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032: Stream Error. URL: http://localhost/flash/Projects/MEL/Quiz/Quiz/bin/xml/quiz.xml"]

    The trouble is, running the SWF in a browser locally does work; it only throws these errors in the Flash IDE. I have tried adding a wildcard crossdomain.xml file in my root web directory and setting the SWF publish properties for local playback security to "Access network only", but neither of these has solved my problem. I know Windows 7 handles localhost name resolution differently compared to previous versions of Windows, but I have even added "127.0.0.1 localhost" to my hosts file, to no avail. Can anyone shed any light on this issue?

    Read the article

  • Assert.AreEqual() Exception in VS2010

    - by Tom Miller
    I am fairly new to unit testing and am using VS2010 to develop in and run my tests. I have a simple test, illustrated below, that simply compares two System.Data.DataTableReader objects. I know that they are equal, as they are both created using the same object types and the same input file, and I have verified that the objects "look" the same. I realize I may be dealing with a couple of issues: one being whether or not this is the proper use of Assert.AreEqual, or even the proper way to test this scenario; the other being the main issue I am dealing with, which is why this test fails with this exception:

        Failed  00:00:00.1000660  0  Assert.AreEqual failed. Expected:<System.Data.DataTableReader>. Actual:<System.Data.DataTableReader>.

    Here is the unit test code that is failing:

        public void EntriesTest()
        {
            AuditLog target = new AuditLog();
            target.Init();
            DataSet ds = new DataSet();
            ds.ReadXml(TestContext.DataRow["AuditLogPath"].ToString());
            DataTableReader expected = ds.Tables[0].CreateDataReader();
            DataTableReader actual = target.Entries.Tables[0].CreateDataReader();
            Assert.AreEqual<DataTableReader>(expected, actual);
        }

    Any help would be greatly appreciated!
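
    Assert.AreEqual on two DataTableReader instances compares object references (the type does not override Equals), so two distinct readers are never "equal", no matter how identical their data. A sketch of comparing the data instead, assuming both tables share a schema:

        // Sketch: assert table contents cell by cell rather than reader identity.
        private static void AssertTablesEqual(DataTable expected, DataTable actual)
        {
            Assert.AreEqual(expected.Rows.Count, actual.Rows.Count, "row count");
            Assert.AreEqual(expected.Columns.Count, actual.Columns.Count, "column count");
            for (int r = 0; r < expected.Rows.Count; r++)
                for (int c = 0; c < expected.Columns.Count; c++)
                    Assert.AreEqual(expected.Rows[r][c], actual.Rows[r][c],
                        string.Format("cell [{0},{1}]", r, c));
        }

        // in the test: AssertTablesEqual(ds.Tables[0], target.Entries.Tables[0]);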

    Read the article

  • Parse Exception: At line 1, column 0: no element found

    - by Jeffrey
    Hi everyone, I have a weird issue. I receive the following error, which causes a force-close:

        org.apache.harmony.xml.ExpatParser$ParseException: At line 1, column 0: no element found
        at org.apache.harmony.xml.ExpatParser.parseFragment(ExpatParser.java:508)
        at org.apache.harmony.xml.ExpatParser.parseDocument(ExpatParser.java:467)
        at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:329)
        at org.apache.harmony.xml.ExpatReader.parse(ExpatReader.java:286)

    After clicking the Force Close button, the Activity is recreated and the parsing completes without a hitch. I'm using the following code snippet inside doInBackground of an AsyncTask:

        URL serverAddress = new URL(url[0]);
        HttpURLConnection connection = (HttpURLConnection) serverAddress.openConnection();
        connection.setRequestMethod("GET");
        connection.setDoOutput(true);
        connection.setReadTimeout(10000);
        connection.connect();
        InputStream stream = connection.getInputStream();

        SAXParserFactory spf = SAXParserFactory.newInstance();
        SAXParser sp = spf.newSAXParser();
        XMLReader xr = sp.getXMLReader();
        xr.parse(new InputSource(stream)); // The line that throws the exception

    Why would the Activity force-close and then run without any problems immediately after? Would a BufferedInputStream be any different? I'm baffled. :( Thanks for your time everyone.
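
    A sketch of the usual suspect, hedged accordingly: setDoOutput(true) declares that a request body will be written, which some HttpURLConnection stacks treat as an implicit POST; a server that answers such a request with an empty document produces exactly "no element found" at line 1, column 0. For a plain GET, leave it off and check the status before parsing:

        // Sketch: a GET needs no setDoOutput(true).
        URL serverAddress = new URL(url[0]);
        HttpURLConnection connection = (HttpURLConnection) serverAddress.openConnection();
        connection.setRequestMethod("GET");
        connection.setReadTimeout(10000);
        connection.connect();
        int status = connection.getResponseCode(); // worth logging on failure
        InputStream stream = connection.getInputStream();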

    Read the article

  • Binding TabControl ItemsSource to an ObservableCollection of ViewModels causes content to refresh on tab change

    - by Brent
    I'm creating a WPF application using the MVVM framework, and I've adopted several features from Josh Smith's article on MVVM here... Most importantly, I'm binding a TabControl to an ObservableCollection of ViewModels. This means that I am using a tabbed MDI interface that displays a UserControl as the content of a TabItem. The issue I'm seeing in my application is that when I have several tabs and I flip back and forth between them, the content is refreshed each time I change tabs. If you download Josh Smith's source code, you'll see that his app has the same problem. For example, click on the "View All Customers" button and scroll down to the bottom of the ListView. Next click on the "Create New Customer" button. When you switch back to the All Customers view, you'll notice that the ListView scrolls back to the top. If you switch back to the New Customer tab and place your cursor in one of the TextBoxes, then switch to the All Customers tab and back, you'll notice that the cursor is now gone. I imagine that this is because I'm using an ObservableCollection, but I can't be sure. Is there any way to prevent the tab's content from refreshing when it receives the focus?

    EDIT: I found my problem when I ran the profiler on my application. I'm defining a DataTemplate for my ViewModels so it knows how to render the ViewModel when it is displayed in the tab, like so:

        <DataTemplate DataType="{x:Type vm:CustomerViewModel}">
            <vw:CustomerView/>
        </DataTemplate>

    So whenever I switch to a different tab, it has to re-create the ViewModel's view again. I fixed it temporarily by changing my ObservableCollection of ViewModels to an ObservableCollection of UserControls. However, I would really still like to use DataTemplates if possible. Is there a way to make a DataTemplate work?
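
    One commonly cited workaround, sketched here with the CustomerView/CustomerViewModel names from the sample: keep the ObservableCollection of ViewModels, but route the template's content through a converter that caches one view per view-model, so switching tabs reuses the live instance instead of rebuilding it.

        // Sketch: cache view instances per view-model.
        // Requires: using System; using System.Collections.Generic;
        //           using System.Globalization; using System.Windows.Data;
        public class ViewCachingConverter : IValueConverter
        {
            private readonly Dictionary<object, object> _cache = new Dictionary<object, object>();

            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (value == null) return null;
                object view;
                if (!_cache.TryGetValue(value, out view))
                {
                    view = new CustomerView { DataContext = value }; // name from the sample
                    _cache[value] = view;                            // evict on tab close in real code
                }
                return view;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    The DataTemplate then reduces to a ContentControl bound through the converter, e.g. <ContentControl Content="{Binding Converter={StaticResource ViewCache}}"/>.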

    Read the article

  • SSL certificate on IIS 7

    - by comii
    I am trying to install an SSL certificate on IIS 7. I have downloaded a free trial certificate. After that, these are the steps I follow:

    1. Click the Start menu and select Administrative Tools.
    2. Start Internet Services Manager and click the server name.
    3. In the center section, double click on the Server Certificates button in the Security section.
    4. From the Actions menu click Complete Certificate Request.
    5. Enter the location for the certificate file.
    6. Enter a Friendly name. Click OK.
    7. Under Sites, select the site to be secured with the SSL certificate.
    8. From the Actions menu, click Bindings. This will open the Site Bindings window.
    9. In the Site Bindings window, click Add. This opens the Add Site Binding window.
    10. Select https from the Type menu. Set the port to 443.
    11. Select the SSL certificate you just installed from the SSL Certificate menu. Click OK.

    This is the step where I get the message:

        One or more intermediate certificates in the certificate chain are missing. To resolve this issue, make sure that all of intermediate certificates are installed. For more information, see http://support.microsoft.com/kb/954755

    After this, when I access the web site on its first page, I get this message: "There is a problem with this website's security certificate." What am I doing wrong?
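
    One hedged guess at the missing piece: trial certificates usually ship with (or link to) an intermediate CA certificate that has to be installed into the machine's Intermediate Certification Authorities store before the chain validates. A sketch from an elevated command prompt, where intermediate.cer stands in for the CA-supplied file:

        REM Sketch: import the intermediate certificate, then redo the
        REM "Complete Certificate Request" step in IIS.
        certutil -addstore CA intermediate.cer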

    Read the article

  • Problems with MembershipUser / System.Web.ApplicationServices when upgrading to .net 4

    - by DaveK
    I have a large vb.net web project that I am trying to upgrade to .NET 4/VS2010. During compile I get the following error:

        'System.Web.Security.MembershipUser' in assembly 'System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' has been forwarded to assembly 'System.Web.ApplicationServices'. Either a reference to 'System.Web.ApplicationServices' is missing from your project or the type 'System.Web.Security.MembershipUser' is missing from assembly 'System.Web.ApplicationServices'.

    I researched the issue and the error is accurate. I added a reference to System.Web.ApplicationServices, but I am still having problems. The project does not seem to recognize that the reference has been added. IntelliSense will not pick it up, I cannot use it in an Imports statement, etc. The assembly is listed in the compile section of my web.config:

        <assemblies>
            ...
            <add assembly="System.Web.ApplicationServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        </assemblies>

    Any ideas?
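
    One detail that bites here: the <assemblies> list in web.config only affects pages compiled dynamically at runtime. If this is a Web Application project, the project's own code needs the reference recorded in the .vbproj itself (or added through Add Reference in the IDE). A sketch of the project-file entry:

        <!-- Sketch: project-level reference, alongside the existing ones -->
        <ItemGroup>
          <Reference Include="System.Web.ApplicationServices" />
        </ItemGroup>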

    Read the article

  • Add PRISM Region Manager In Existing Navigation Window

    - by Nate Noonen
    We have a "legacy" WPF applicaton that is based on a NavigationWindow. The NavigationWindow has a fairly large ControlTemplate that houses a ContentPresenter as so: <ControlTemplate> ....snip... <ContentPresenter x:Name="PART_NavWinCP" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"/> .....snip.... </ControlTemplate> What we want to do is use that ContentPresenter as the first tab and dynamically add other tabs at run time. Like this: <ControlTemplate> ....snip... <TabControl Background="Transparent" cal:RegionManager.RegionName="MainRegion" Grid.ColumnSpan="2" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"> <TabItem Header="Nav Window Content"> <ContentPresenter x:Name="PART_NavWinCP" VerticalAlignment="Stretch" HorizontalAlignment="Stretch"/> </TabItem> </TabControl> .....snip.... </ControlTemplate> Then our Modules grab the RegionName and insert their content dynamically. The issue seems to be that the PRISM region manager doesn't like that our code is in a ContentTemplate and cannot resolve the region. I have tried updating the RegionManager, adding the Region dynamically, just having a root tab control without the ContentPresenter, but I cannot get this to work. Any ideas?

    Read the article

  • Algorithm design, "randomising" timetable schedule in Python although open to other languages.

    - by S1syphus
    Before I start I should add that I am a musician and not a native programmer; this was undertaken to make my life easier. Here is the situation: at work I'm given a new CSV file, which contains a list of sound files, their lengths, and the minimum total amount of time each must be played. From this file I create a playlist of exactly 60 minutes. Each sample is played at least its minimum number of instances, but spread out from the others, so there will never be a period where one sound is played twice in a row or in close proximity to itself. Secondly, if the minimum instances of each sound have been used and there is still time within the 60 minutes, the playlist needs to fill the remaining time with sounds until 60 minutes is reached, while adhering to the above. The smallest duration possible is 15 seconds, and then multiples of 15 seconds. Here is what I came up with in Python, and the problems I'm having with it: as one user said, it's buggy due to the random library used in it. So I'm guessing a total rethink is on the table, and here is where I need your help. What is the best way to solve this? I have had a brief look at things like knapsack and bin-packing algorithms; while both are relevant, neither is appropriate, and maybe a bit beyond me.
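
    Since the post asks for a design rather than a patch, here is one deterministic sketch under stated assumptions: clip lengths are multiples of 15 seconds, each clip has a minimum play count, and the hour is padded with extra plays. The spreading trick is the usual "round-robin by remaining count" construction; build_playlist and spread are names invented here.

        import itertools

        def spread(counts):
            """Interleave copies so identical clips sit far apart.
            counts: dict of name -> number of plays."""
            rows = [[name] * n for name, n in
                    sorted(counts.items(), key=lambda kv: -kv[1])]
            merged = itertools.chain.from_iterable(itertools.zip_longest(*rows))
            return [name for name in merged if name is not None]

        def build_playlist(clips, total_seconds=3600):
            """clips: list of (name, length_seconds, min_plays) tuples."""
            length = dict((n, s) for n, s, _ in clips)
            counts = dict((n, p) for n, _, p in clips)
            used = sum(length[n] * counts[n] for n in counts)
            # pad the hour with extra plays, cycling shortest clips first
            for name in itertools.cycle(sorted(length, key=length.get)):
                if used + min(length.values()) > total_seconds:
                    break
                if used + length[name] <= total_seconds:
                    counts[name] += 1
                    used += length[name]
            return spread(counts), used

        # e.g. build_playlist([("rain", 30, 10), ("wind", 15, 20), ("birds", 45, 5)])

    The interleave keeps each clip's copies roughly evenly spaced; if one clip's count dominates all others combined, adjacent repeats become unavoidable, and the function makes no attempt to hide that.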

    Read the article

  • Visual Studio 2010 Crashes when Creating or Editing a Report (.rdlc) with the Report Designer

    - by ondesertverge
    This is an issue I had with VS 2010 RC and was hoping would be solved with the first official release. Sadly, it wasn't. What I have is a number of reports originally created with VS 2008. When opening any of these for editing in VS 2010's Report Designer, VS hangs for about two minutes and then shuts down. The same happens when creating a new report using the wizard; the only difference is that a dialog opens up showing a "Loading ..." message, then hangs for about the same amount of time and crashes. Running devenv /log gives nothing of value. The Windows Application event log shows only this:

        Faulting application name: devenv.exe, version: 10.0.30319.1, time stamp: 0x4ba1fab3
        Faulting module name: clr.dll, version: 4.0.30319.1, time stamp: 0x4ba1d9ef
        Exception code: 0xc00000fd
        Fault offset: 0x00001919
        Faulting process id: 0xc38
        Faulting

    And this:

        .NET Runtime version 2.0.50727.4927 - Fatal Execution Engine Error (6F551CF2) (0)

    Has anyone else experienced this and found a solution? Or, is there a better tool for rapidly creating decent reports within a WinForms app? Help would be greatly appreciated!

    Read the article

  • XNA vs SlimDX for offscreen renderer

    - by Groky
    Hello, I realise there are numerous questions on here asking about choosing between XNA and SlimDX, but these all relate to game programming. A little background: I have an application that renders scenes from XML descriptions. Currently I am using WPF 3D, and this mostly works, except that WPF has no way to render scenes offscreen (i.e. on a server, without displaying them in a window), and rendering to a bitmap causes WPF to fall back to software rendering. So I'm faced with having to write my own renderer. Here are the requirements:

    - Mix of 3D and 2D elements.
    - Relatively few elements per scene (tens of meshes, tens of 2D elements).
    - Large scenes (up to 3000px square for print).
    - Only a single frame will be rendered (i.e. FPS is not an issue).
    - Opacity masks.
    - Pixel shaders.
    - Software fallback (servers may or may not have a decent gfx card).
    - Possibility of being rendered offscreen.

    As you can see, it's pretty simple stuff, and WPF can manage it quite nicely except for the not-being-able-to-export-the-scene problem. In particular, I don't need many of the things usually needed in game development. So bearing that in mind, would you choose XNA or SlimDX? The non-rendering portion of the code is already written in C#, so I want to stick with that.

    Read the article

  • Selectively suppress XML Code Comments in C#?

    - by Mike Post
    We deliver a number of assemblies to external customers, but not all of the public APIs are officially supported. For example, due to less-than-optimal design choices, sometimes a type must be publicly exposed from an assembly for the rest of our code to work, but we don't want customers to use that type. One part of communicating the lack of support is to not provide any IntelliSense in the form of XML comments. Is there a way to selectively suppress XML comments? I'm looking for something other than ignoring warning 1591, since that is a long-term maintenance issue. Example: I have an assembly with public classes A and B. A is officially supported and should have XML documentation. B is not intended for external use and should not be documented. I could turn on XML documentation and then suppress warning 1591. But when I later add the officially supported class C, I want the compiler to tell me that I've screwed up and failed to add the XML documentation. This wouldn't occur if I had suppressed 1591 at the project level. I suppose I could #pragma across entire classes, but it seems like there should be a better way to do this.
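
    A sketch of one mechanism that targets the IntelliSense side directly: mark the unsupported type with EditorBrowsableAttribute (honoured for consumers referencing the compiled assembly, though typically not within the same solution), and keep 1591 enabled globally so a future class C still demands docs; any 1591 noise from B is silenced locally rather than project-wide.

        using System.ComponentModel;

        /// <summary>Officially supported; documented as usual.</summary>
        public class A { }

        #pragma warning disable 1591  // local, so project-wide checking stays on
        [EditorBrowsable(EditorBrowsableState.Never)]  // hidden from IntelliSense
        public class B { }
        #pragma warning restore 1591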

    Read the article

  • CSS window height problem with dynamic loaded css

    - by Michael Mao
    Hi all: Please go here and use username "admin" and password "endlesscomic" (without the wrapping quotes) to see the web app demo. Basically, what I am trying to do is incrementally integrate my work into this web app, say, nightly, for the client to check the progress. Also, he would like to see, at the very beginning, a mockup of the page layout. I am trying to use the 960 grid system to achieve this. So far, so good, except for one issue: when "mockup.css" is loaded dynamically by jQuery, it "extends" the window to the bottom, something I do not wanna have... As an inexperienced web developer, I don't know which part is wrong. Below is my js:

        /* master.js */
        $(document).ready(function() {
            $('#addDebugCss').click(function() {
                alertMessage('adding debug css...');
                addCssToHead('./css/debug.css');
                $('.grid-insider').css('opacity', '0.5'); // reset mockup background transparency
            });
            $('#addMockupCss').click(function() {
                alertMessage('adding mockup css...');
                addCssToHead('./css/mockup.css');
                $('.grid-insider').css('opacity', '1'); // set semi-transparent background for mockup
            });
            $('#resetCss').click(function() {
                alertMessage('rolling back to normal');
                rollbackCss(new Array("./css/mockup.css", "./css/debug.css"));
            });
        });

        function alertMessage(msg) // TODO: find a better modal prompt
        {
            alert(msg);
        }

        function addCssToHead(path_to_css)
        {
            $('<link rel="stylesheet" type="text/css" href="' + path_to_css + '" />').appendTo("head");
        }

        function rollbackCss(set)
        {
            for (var i in set) {
                $('link[href="' + set[i] + '"]').remove();
            }
        }

    Should something be added to the external mockup.css? Or does something need to change in my master.js? Thanks for any hints/suggestions in advance.

    Read the article
