Search Results

Search found 5366 results on 215 pages for 'fully qualified naming'.

Page 203 of 215

  • WCF service using duplex channel in different domains

    - by ds1
    I have a WCF service and a Windows client. They communicate via a Duplex WCF channel which when I run from within a single network domain runs fine, but when I put the server on a separate network domain I get the following message in the WCF server trace... The message with to 'net.tcp://abc:8731/ActiveAreaService/mex/mex' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree. So, it looks like the communication just work in one direction (from client to server) if the components are in two separate domains. The Network domains are fully trusted, so I'm a little confused as to what else could cause this? Server app.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="JobController.ActiveAreaBehavior"> <serviceMetadata httpGetEnabled="false" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="JobController.ActiveAreaBehavior" name="JobController.ActiveAreaServer"> <endpoint address="mex" binding="mexTcpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="net.tcp://SERVER:8731/ActiveAreaService/" /> </baseAddresses> </host> </service> </services> </system.serviceModel> </configuration> but I also add an end point programmatically in Visual C++ host = gcnew ServiceHost(ActiveAreaServer::typeid); NetTcpBinding^ binding = gcnew NetTcpBinding(); binding->MaxBufferSize = Int32::MaxValue; binding->MaxReceivedMessageSize = Int32::MaxValue; binding->ReceiveTimeout = TimeSpan::MaxValue; binding->Security->Mode = SecurityMode::Transport; binding->Security->Transport->ClientCredentialType = TcpClientCredentialType::Windows; ServiceEndpoint^ ep = host->AddServiceEndpoint(IActiveAreaServer::typeid, binding, String::Empty); // Use the base address Client app.config <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IActiveAreaServer" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://SERVER:8731/ActiveAreaService/" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IActiveAreaServer" contract="ActiveArea.IActiveAreaServer" name="NetTcpBinding_IActiveAreaServer"> <identity> <userPrincipalName value="[email protected]" /> </identity> </endpoint> </client> </system.serviceModel> </configuration> Any help is appreciated! Cheers


  • Pagination number elements do not work - jquery

    - by ClarkSKent
    Hello, I am trying to get my pagination links to work. It seems when I click any of the pagination number links to go the next page, the new content does not load. literally nothing happens and when looking at the console in Firebug, nothing is sent or loaded. I have on the main page 3 links to filter the content and display it. When any of these links are clicked the results are loaded and displayed along with the associated pagination numbers for that specific content. Here is the main page so you can see what the structure looks like for the jquery: <?php include_once('generate_pagination.php'); ?> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script> <script type="text/javascript" src="jquery_pagination.js"></script> <div id="loading" ></div> <div id="content" data-page="1"></div> <ul id="pagination"> <?php generate_pagination($sql) ?> </ul> <br /> <br /> <a href="#" class="category" id="marketing">Marketing</a> <a href="#" class="category" id="automotive">Automotive</a> <a href="#" class="category" id="sports">Sports</a> The jquery below is pretty simple and I don't think I need to explain what it does. I believe the problem may be with the $("#pagination li").click(function(){ since the li elements are the numbers that do not work when clicked. Even when I try to fadeOut or hide the content on click nothing happens. I'm pretty new to the jquery structure so I do not fully understand where the real problem is occurring, this is just from my observation. The jquery file looks like this: $(document).ready(function(){ //Display Loading Image function Display_Load() { $("#loading").fadeIn(900,0); $("#loading").html("<img src='bigLoader.gif' />"); } //Hide Loading Image function Hide_Load() { $("#loading").fadeOut('slow'); }; //Default Starting Page Results $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'}); Display_Load(); $("#content").load("pagination_data.php?page=1", Hide_Load()); //Pagination Click $("#pagination li").click(function(){ Display_Load(); //CSS Styles $("#pagination li") .css({'border' : 'solid #dddddd 1px'}) .css({'color' : '#0063DC'}); $(this) .css({'color' : '#FF0084'}) .css({'border' : 'none'}); //Loading Data var pageNum = this.id; $("#content").load("pagination_data.php?page=" + pageNum, function(){ $(this).attr('data-page', pageNum); Hide_Load(); }); }); // Editing below. // Sort content Marketing $("a.category").click(function() { Display_Load(); var this_id = $(this).attr('id'); $.get("pagination.php", { category: this.id }, function(data){ //Load your results into the page var pageNum = $('#content').attr('data-page'); $("#pagination").load('generate_pagination.php?category=' + pageNum +'&ids='+ this_id ); $("#content").load("filter_marketing.php?page=" + pageNum +'&id='+ this_id, Hide_Load()); }); }); }); If anyone could help me on this that would be great, Thanks. EDIT: Here are the innards of <ul id="pagination">: <?php function generate_pagination($sql) { include_once('config.php'); $per_page = 3; //Calculating no of pages $result = mysql_query($sql); $count = mysql_fetch_row($result); $pages = ceil($count[0]/$per_page); //Pagination Numbers for($i=1; $i<=$pages; $i++) { echo '<li class="page_numbers" id="'.$i.'">'.$i.'</li>'; } } $ids=$_GET['ids']; generate_pagination("SELECT COUNT(*) FROM explore WHERE category='$ids'"); ?>


  • which control in vs08 aspx c# will be able to take html tags as input and display it formatted accor

    - by user287745
     I asked a few questions about this before and the answers indicated that I would have to build a custom editor (using Markdown or similar) to achieve something like the stackoverflow.com/questions/ask page. There is a time limit, and the requirements are substantial. I have to achieve the following: 1) allow users to use HTML tags when giving input, 2) save that complete input to a SQL database, and 3) display the data from the database within a control that renders the formatting the tags specify. I am aware that the Label and Literal controls support HTML tags; the problem is how to allow the user to enter them, since the TextBox does not seem to support HTML tags. I need help implementing the effect of the "Write a new post" page on stackoverflow.com. <%--The editor--% <asp:UpdatePanel ID="UpdatePanel7" runat="server"> <ContentTemplate> <asp:Label ID="Label4" runat="server" Text="Label"></asp:Label> <div id="textchanging" onkeyup="textoftextbox('TextBox3'); return false;"> <asp:TextBox ID="TextBox3" runat="server" CausesValidation="True" ></asp:TextBox> </div> <%-- OnTextChanged="textoftextbox('TextBox3'); return false;" gives too man literals error. --%> <asp:Button ID="Button6" runat="server" Text="Post The Comment, The New, Write on Wall" onclick="Button6_Click" /> <asp:Label ID="Label2" runat="server" Text="Label"></asp:Label> <asp:TextBox ID="TextBox4" runat="server" CausesValidation="True"></asp:TextBox> <asp:Literal ID="Literal2" runat="server" Text="this must change" here comes the div <div id="displayingarea" runat="server">This area will change</div> </ContentTemplate> </asp:UpdatePanel> <%--The editor ends--%> protected void Button6_Click(object sender, EventArgs e) { Label3.Text = displayingarea.InnerHtml; } As you can see, I have implemented the effect of typing text into the textbox and having it appear as a reflection in the div, fully formatted according to the HTML tags used. The textbox does not allow HTML, but it does not have to: the script just extracts what is typed, without letting the server know, and the HTML of the div also changes as I type in the textbox. Now, there is a "Post" button inside an update panel to avoid a full postback of the page. What I need is that, when the button is clicked, the value within the div's innerHTML is passed on to the label. Yes, I know the changes are only made on the client side, so when the button click event occurs the server is not aware of the new data within the div; therefore the server assigns the original HTML within the div to the label. I need help overcoming this. How is it that when we enter text in a textbox, press the button, and use code like label1.Text = textbox1.Text; it works even in an update panel, but the above code for extracting the innerHTML typed at the user's end (similar to typing in a textbox) does not work?


  • Code Access Security and Sharepoint WebParts

    - by Gordon Carpenter-Thompson
    I've got a vague handle on how Code Access Security works in Sharepoint. I have developed a custom webpart and setup a CAS policy in my Manifest <CodeAccessSecurity> <PolicyItem> <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for Okana"> <IPermission class="Microsoft.SharePoint.Security.SharePointPermission, Microsoft.SharePoint.Security, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" version="1" ObjectModel="True" Impersonate="True" /> <IPermission class="SecurityPermission" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /> <IPermission class="AspNetHostingPermission" version="1" Level="Medium" /> <IPermission class="DnsPermission" version="1" Unrestricted="true" /> <IPermission class="EventLogPermission" version="1" Unrestricted="true"> <Machine name="localhost" access="Administer" /> </IPermission> <IPermission class="EnvironmentPermission" version="1" Unrestricted="true" /> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true" /> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" Unrestricted="true" /> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true" PathDiscovery="*AllFiles*" /> <IPermission class="IsolatedStorageFilePermission" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807" /> <IPermission class="PrintingPermission" version="1" Level="DefaultPrinting" /> <IPermission class="PerformanceCounterPermission" version="1"> <Machine name="localhost"> <Category name="Enterprise Library Caching Counters" access="Write"/> <Category name="Enterprise Library Cryptography Counters" access="Write"/> <Category name="Enterprise Library Data Counters" access="Write"/> <Category name="Enterprise Library Exception Handling Counters" access="Write"/> <Category name="Enterprise Library Logging Counters" access="Write"/> <Category name="Enterprise Library Security Counters" access="Write"/> </Machine> </IPermission> <IPermission class="ReflectionPermission" version="1" Unrestricted="true"/> <IPermission class="SecurityPermission" version="1" Flags="SerializationFormatter, UnmanagedCode, Infrastructure, Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration, ControlAppDomain,ControlDomainPolicy" /> <IPermission class="SharePointPermission" version="1" ObjectModel="True" /> <IPermission class="SmtpPermission" version="1" Access="Connect" /> <IPermission class="SqlClientPermission" version="1" Unrestricted="true"/> <IPermission class="WebPartPermission" version="1" Connections="True" /> <IPermission class="WebPermission" version="1"> <ConnectAccess> <URI uri="$OriginHost$"/> </ConnectAccess> </IPermission> </PermissionSet> <Assemblies> .... </Assemblies> This is correctly converted into a wss_custom_wss_minimaltrust.config when it is deployed onto the Sharepoint server and mostly works. 
To get the WebPart working fully, however I find that I need to modify the wss_custom_wss_minimaltrust.config by hand after deployment and set Unrestricted="true" on the permissions set <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for MyApp" Name="mywebparts.wsp-86d8cae1-7db2-4057-8c17-dc551adb17a2-1"> to <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for MyApp" Name="mywebparts.wsp-86d8cae1-7db2-4057-8c17-dc551adb17a2-1" Unrestricted="true"> It's all because I'm loading a User Control from the webpart. I don't believe there is a way to enable that using CAS but am willing to be proven wrong. Is there a way to set something in the manifest so I don't need to make this fix by hand? Thanks


  • how to store data in ram in verilog

    - by anum
    i am having a bit stream of 128 bits @ each posedge of clk,i.e.total 10 bit streams each of length 128 bits. i want to divide the 128 bit stream into 8, 8 bits n hve to store them in a ram / memory of width 8 bits. i did it by assigning 8, 8 bits to wires of size 8 bit.in this way there are 16 wires. and i am using dual port ram...wen i cal module of memory in stimulus.i don know how to give input....as i am hving 16 different wires naming from k1 to k16. **codeeee** // this is stimulus file module final_stim; reg [7:0] in,in_data; reg clk,rst_n,rd,wr,rd_data,wr_data; wire [7:0] out,out_wr, ouut; wire[7:0] d; integer i; //wire[7:0] xor_out; reg kld,f; reg [127:0]key; wire [127:0] key_expand; wire [7:0]out_data; reg [7:0] k; //wire [7:0] k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15,k16; wire [7:0] out_data1; **//key_expand is da output which is giving 10 streams of size 128 bits.** assign k1=key_expand[127:120]; assign k2=key_expand[119:112]; assign k3=key_expand[111:104]; assign k4=key_expand[103:96]; assign k5=key_expand[95:88]; assign k6=key_expand[87:80]; assign k7=key_expand[79:72]; assign k8=key_expand[71:64]; assign k9=key_expand[63:56]; assign k10=key_expand[55:48]; assign k11=key_expand[47:40]; assign k12=key_expand[39:32]; assign k13=key_expand[31:24]; assign k14=key_expand[23:16]; assign k15=key_expand[15:8]; assign k16=key_expand[7:0]; **// then the module of memory is instanciated. //here k1 is sent as input.but i don know how to save the other values of k. //i tried to use for loop but it dint help** memory m1(clk,rst_n,rd, wr,k1,out_data1); aes_sbox b(out,d); initial begin clk=1'b1; rst_n=1'b0; #20 rst_n = 1; //rd=1'b1; wr_data=1'b1; in=8'hd4; #20 //rst_n=1'b1; in=8'h27; rd_data=1'b0; wr_data=1'b1; #20 in=8'h11; rd_data=1'b0; wr_data=1'b1; #20 in=8'hae; rd_data=1'b0; wr_data=1'b1; #20 in=8'he0; rd_data=1'b0; wr_data=1'b1; #20 in=8'hbf; rd_data=1'b0; wr_data=1'b1; #20 in=8'h98; rd_data=1'b0; wr_data=1'b1; #20 in=8'hf1; rd_data=1'b0; wr_data=1'b1; #20 in=8'hb8; rd_data=1'b0; wr_data=1'b1; #20 in=8'hb4; rd_data=1'b0; wr_data=1'b1; #20 in=8'h5d; rd_data=1'b0; wr_data=1'b1; #20 in=8'he5; rd_data=1'b0; wr_data=1'b1; #20 in=8'h1e; rd_data=1'b0; wr_data=1'b1; #20 in=8'h41; rd_data=1'b0; wr_data=1'b1; #20 in=8'h52; rd_data=1'b0; wr_data=1'b1; #20 in=8'h30; rd_data=1'b0; wr_data=1'b1; #20 wr_data=1'b0; #380 rd_data=1'b1; #320 rd_data = 1'b0; /////////////// #10 kld = 1'b1; key=128'h 2b7e151628aed2a6abf7158809cf4f3c; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b0; #10 wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 wr = 1'b0; #20 rd = 1'b1; #4880 f=1'b1; ///////////////////////////////////////////////// // out_data[i] end /*always@(*) begin while(i) 
mem[i]^mem1[i] ; i<=16; break; end*/ always #10 clk=~clk; always@(posedge clk) begin //$monitor($time," out_wr=%h,out_rd=%h\n ",out_wr,out); #10000 $stop; end endmodule


  • Constructing a Binary Tree from its traversals

    - by user991710
    I'm trying to construct a binary tree (unbalanced), given its traversals. I'm currently doing preorder + inorder but when I figure this out postorder will be no issue at all. I realize there are some question on the topic already but none of them seemed to answer my question. I've got a recursive method that takes the Preorder and the Inorder of a binary tree to reconstruct it, but is for some reason failing to link the root node with the subsequent children. Note: I don't want a solution. I've been trying to figure this out for a few hours now and even jotted down the recursion on paper and everything seems fine... so I must be missing something subtle. Here's the code: public static <T> BinaryNode<T> prePlusIn( T[] pre, T[] in) { if(pre.length != in.length) throw new IllegalArgumentException(); BinaryNode<T> base = new BinaryNode(); base.element = pre[0]; // * Get root from the preorder traversal. int indexOfRoot = 0; if(pre.length == 0 && in.length == 0) return null; if(pre.length == 1 && in.length == 1 && pre[0].equals(in[0])) return base; // * If both arrays are of size 1, element is a leaf. for(int i = 0; i < in.length -1; i++){ if(in[i].equals(base.element)){ // * Get the index of the root indexOfRoot = i; // in the inorder traversal. break; } // * If we cannot, the tree cannot be constructed as the traversals differ. else throw new IllegalArgumentException(); } // * Now, we recursively set the left and right subtrees of // the above "base" root node to whatever the new preorder // and inorder traversals end up constructing. T[] preleft = Arrays.copyOfRange(pre, 1, indexOfRoot + 1); T[] preright = Arrays.copyOfRange(pre, indexOfRoot + 1, pre.length); T[] inleft = Arrays.copyOfRange(in, 0, indexOfRoot); T[] inright = Arrays.copyOfRange(in, indexOfRoot + 1, in.length); base.left = prePlusIn( preleft, inleft); // * Construct left subtree. base.right = prePlusIn( preright, inright); // * Construc right subtree. return base; // * Return fully constructed tree } Basically, I construct additional arrays that house the pre- and inorder traversals of the left and right subtree (this seems terribly inefficient but I could not think of a better way with no helpers methods). Any ideas would be quite appreciated. Side note: While debugging it seems that the root note never receives the connections to the additional nodes (they remain null). From what I can see though, that should not happen... EDIT: To clarify, the method is throwing the IllegalArgumentException @ line 21 (else branch of the for loop, which should only be thrown if the traversals contain different elements.
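
     For reference, here is a minimal sketch (not the poster's code) of the usual pre+in reconstruction written with index ranges instead of array copies; BinaryNode is assumed to be a plain node class with public element/left/right fields, and all elements are assumed distinct:

         static <T> BinaryNode<T> build(T[] pre, T[] in) {
             return build(pre, 0, pre.length - 1, in, 0, in.length - 1);
         }

         static <T> BinaryNode<T> build(T[] pre, int ps, int pe, T[] in, int is, int ie) {
             if (ps > pe) return null;                    // empty range -> empty subtree
             BinaryNode<T> root = new BinaryNode<>();
             root.element = pre[ps];                      // first preorder element is the subtree root
             int r = is;
             while (!in[r].equals(pre[ps])) r++;          // locate that root inside the inorder range
             int leftSize = r - is;                       // everything left of it belongs to the left subtree
             root.left  = build(pre, ps + 1, ps + leftSize, in, is, r - 1);
             root.right = build(pre, ps + leftSize + 1, pe, in, r + 1, ie);
             return root;
         }

     Comparing the range bookkeeping above with the copyOfRange boundaries in the posted method is one way to narrow down where the two traversals drift apart.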


  • C++, using stack.h read a string, then display it in reverse

    - by user1675108
    For my current assignment, I have to use the following header file, #ifndef STACK_H #define STACK_H template <class T, int n> class STACK { private: T a[n]; int counter; public: void MakeStack() { counter = 0; } bool FullStack() { return (counter == n) ? true : false ; } bool EmptyStack() { return (counter == 0) ? true : false ; } void PushStack(T x) { a[counter] = x; counter++; } T PopStack() { counter--; return a[counter]; } }; #endif To write a program that will take a sentence, store it into the "stack", and then display it in reverse, and I have to allow the user to repeat this process as much as they want. The thing is, I am NOT allowed to use arrays (otherwise I wouldn't need help with this), and am finding myself stumped. To give an idea of what I am attempting, here is my code as of posting, which obviously does not work fully but is simply meant to give an idea of the assignment. #include <iostream> #include <cstring> #include <ctime> #include "STACK.h" using namespace std; int main(void) { auto time_t a; auto STACK<char, 256> s; auto string curStr; auto int i; // Displays the current time and date time(&a); cout << "Today is " << ctime(&a) << endl; s.MakeStack(); cin >> curStr; i = 0; do { s.PushStack(curStr[i]); i++; } while (s.FullStack() == false); do { cout << s.PopStack(); } while (s.EmptyStack() == false); return 0; } // end of "main" **UPDATE This is my code currently #include <iostream> #include <string> #include <ctime> #include "STACK.h" using namespace std; time_t a; STACK<char, 256> s; string curStr; int i; int n; // Displays the current time and date time(&a); cout << "Today is " << ctime(&a) << endl; s.MakeStack(); getline(cin, curStr); i = 0; n = curStr.size(); do { s.PushStack(curStr[i++]); i++; }while(i < n); do { cout << s.PopStack(); }while( !(s.EmptyStack()) ); return 0;


  • Mouseover triggered on absolute positioned div

    - by Tauren
    Objective Have a small magnifying glass icon that appears in the top right corner of a table cell when the table cell is hovered over. Mousing over the magnifying glass icon and clicking it will open a dialog window to show detailed information about the item in that particular table cell. I want to reuse the same icon for hundreds of table cells without recreating it each time. Partial Solution Have a single <span> that is absolutely positioned and hidden. When a _previewable table cell is hovered, the <span> is moved to the correct location and shown. This <span> is also moved in the DOM to be a child of the _previewable table cell. This enables a click handler attached to the <span> to find the _previewable parent, and get information from it's jquery data() object that is used to populate the contents of the dialog. Here is a very simplified version of my HTML: <body> <span id="options"> <a class="ui-state-default ui-corner-all"> <span class="ui-icon ui-icon-search"></span> Preview </a> </span> <table> <tr> <td class="_previewable"> <img scr="user_1.png"/> <span>Bob Smith</span> </td> </tr> </table> </body> And this CSS: #options { position: absolute; display: none; } With this jQuery code: var $options = $('#options'); $options.click(function() { $item = $(this).parents("._previewable"); // Show popup based on data in $item.data("id"); Layout.renderPopup($item.data("id"),$item.data("popup")); }); $('._previewable').live('mouseover mouseout',function(event) { if (event.type == 'mouseover') { var $target = $(this); var $parent = $target.offsetParent()[0]; var left = $parent.scrollLeft + $target.position().left + $target.outerWidth() - $options.outerWidth() + 1; var top = $parent.scrollTop + $target.position().top + 2; $options.appendTo($target); $options.css({ "left": left + "px", "top": top + "px" }).show(); } else { // On mouseout, $options continues to be a child of $(this) $options.hide(); } }); Problem This solution works perfectly until the contents of my table are reloaded or changed via AJAX. Because the <span> was moved from the <body> to be a child of the cell, it gets thrown out and replaced during the AJAX call. So my first thought is to move the <span> back to the body on mouseout of the table cell, like this: else { // On mouseout, $options is moved back to be a child of body $options.appendTo("body"); $options.hide(); } However, with this, the <span> disappears as soon as it is mouseover. The mouseout event seems to be called on _previewable when the mouse moves into the <span>, even though the <span> is a child of _previewable and fully displayed within the boundaries of the _previewable table cell. At this point, I've only tested this in Chrome. Questions Why would mouseout be called on _previewable, when the mouse is still within the boundaries of _previewable? Is it because the <span> is absolutely positioned? How can I make this work, without recreating the <span> and it's click handler on each AJAX table referesh?


  • XSLT Select all nodes containing a specific substing

    - by Mike
    I'm trying to write an XPath that will select certain nodes that contain a specific word. In this case the word is, "Lockwood". The correct answer is 3. Both of these paths give me 3. count(//*[contains(./*,'Lockwood')]) count(BusinessLetter/*[contains(../*,'Lockwood')]) But when I try to output the text of each specific node //*[contains(./*,'Lockwood')][1] //*[contains(./*,'Lockwood')][2] //*[contains(./*,'Lockwood')][3] Node 1 ends up containing all the text and nodes 2 and 3 are blank. Can some one please tell me what's happening or what I'm doing wrong. Thanks. <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="XPathFunctions.xsl"?> <BusinessLetter> <Head> <SendDate>November 29, 2005</SendDate> <Recipient> <Name Title="Mr."> <FirstName>Joshua</FirstName> <LastName>Lockwood</LastName> </Name> <Company>Lockwood &amp; Lockwood</Company> <Address> <Street>291 Broadway Ave.</Street> <City>New York</City> <State>NY</State> <Zip>10007</Zip> <Country>United States</Country> </Address> </Recipient> </Head> <Body> <List> <Heading>Along with this letter, I have enclosed the following items:</Heading> <ListItem>two original, execution copies of the Webucator Master Services Agreement</ListItem> <ListItem>two original, execution copies of the Webucator Premier Support for Developers Services Description between Lockwood &amp; Lockwood and Webucator, Inc.</ListItem> </List> <Para>Please sign and return all four original, execution copies to me at your earliest convenience. Upon receipt of the executed copies, we will immediately return a fully executed, original copy of both agreements to you.</Para> <Para>Please send all four original, execution copies to my attention as follows: <Person> <Name> <FirstName>Bill</FirstName> <LastName>Smith</LastName> </Name> <Address> <Company>Webucator, Inc.</Company> <Street>4933 Jamesville Rd.</Street> <City>Jamesville</City> <State>NY</State> <Zip>13078</Zip> <Country>USA</Country> </Address> </Person> </Para> <Para>If you have any questions, feel free to call me at <Phone>800-555-1000 x123</Phone> or e-mail me at <Email>[email protected]</Email>.</Para> </Body> <Foot> <Closing> <Name> <FirstName>Bill</FirstName> <LastName>Smith</LastName> </Name> <JobTitle>VP of Operations</JobTitle> </Closing> </Foot> </BusinessLetter>
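
     As a side note, evaluating the expression once as a node-set and walking the result avoids relying on how a trailing positional predicate binds (in XPath 1.0, //*[p][2] filters within each element's own context, while (//*[p])[2] indexes the whole result set). A small Java sketch of that, using the standard javax.xml.xpath API and an assumed file name:

         import javax.xml.parsers.DocumentBuilderFactory;
         import javax.xml.xpath.XPath;
         import javax.xml.xpath.XPathConstants;
         import javax.xml.xpath.XPathFactory;
         import org.w3c.dom.Document;
         import org.w3c.dom.NodeList;

         public class LockwoodNodes {
             public static void main(String[] args) throws Exception {
                 // "BusinessLetter.xml" is an assumed path to the document shown above
                 Document doc = DocumentBuilderFactory.newInstance()
                         .newDocumentBuilder()
                         .parse("BusinessLetter.xml");
                 XPath xp = XPathFactory.newInstance().newXPath();
                 // same predicate as in the question, evaluated once as a node-set
                 NodeList hits = (NodeList) xp.evaluate(
                         "//*[contains(./*,'Lockwood')]", doc, XPathConstants.NODESET);
                 for (int i = 0; i < hits.getLength(); i++) {
                     System.out.println(hits.item(i).getNodeName());
                 }
             }
         }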


  • Help with specific Regex: need to match multiple instances of multiple formats in a single string.

    - by KevenK
    I apologize for the terrible title...it can be hard to try to summarize an entire situation into a single sentence. Let me start by saying that I'm asking because I'm just not a Regex expert. I've used it a bit here and there, but I just come up short with the correct way to meet the following requirements. The Regex that I'm attempting to write is for use in an XML schema for input validation, and used elsewhere in Javascript for the same purpose. There are two different possible formats that are supported. There is a literal string, which must be surrounded by quotation marks, and a Hex-value string which must be surrounded by braces. Some test cases: "this is a literal string" <-- Valid string, enclosed properly in "s "this should " still be correct" <-- Valid string, "s are allowed within (if possible, this requirement could be forgiven if necessary) "{00 11 22}" <-- Valid string, {}'s allow in strings. Another one that can be forgiven if necessary I am bad output <-- Invalid string, no "s "Some more problemss"you know <-- Invalid string, must be fully contained in "s {0A 68 4F 89 AC D2} <-- Valid string, hex characters enclosed in {}s {DDFF1234} <-- Valid string, spaces are ignored for Hex strings DEADBEEF <-- Invalid string, must be contained in either "s or {}s {0A 12 ZZ} <-- Invalid string, 'Z' is not a valid Hex character To satisfy these general requirements, I had come up with the following Regex that seems to work well enough. I'm still fairly new to Regex, so there could be a huge hole here that I'm missing.: &quot;.+&quot;|\{([0-9]|[a-f]|[A-F]| )+\} If I recall correctly, the XML Schema regex automatically assumes beginning and end of line (^ and $ respectively). So, essentially, this regex accepts any string that starts and ends with a ", or starts and ends with {}s and contains only valid Hexidecimal characters. This has worked well for me so far except that I had forgotten about another (although less common, and thus forgotten) input option that completely breaks my regex. Where I made my mistake: Valid input should also allow a user to separate valid strings (of either type, literal/hex) by a comma. This means that a single string should be able to contain more than one of the above valid strings, separated by commas. Luckily, however, a comma is not a supported character within a literal string (although I see that my existing regex does not care about commas). Example test cases: "some string",{0A F1} <-- Valid {1122},{face},"peanut butter" <-- Valid {0D 0A FF FE},"string",{FF FFAC19 85} <-- Valid (Spaces don't matter in Hex values) "Validation is allowed to break, if a comma is found not separating values",{0d 0a} <-- Invalid, comma is a delimiter, but "Validation is allowed to break" and "if a comma..." are not marked as separate strings with "s hi mom,"hello" <-- Invalid, String1 was not enclosed properly in "s or {}s My thoughts are that it is possible to use commas as a delimiter to check each "section" of the string to match a regex similar to the original, but I just am not that advanced in regex yet to come up with a solution on my own. Any help would be appreciated, but ultimately a final solution with an explanation would just stellar. Thanks for reading this huge wall of text!
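
     Since a final pattern is requested, below is one hedged sketch of the "one or more valid tokens separated by commas" idea, written in Java so it can be run against the test cases above. It keeps the question's own simplifications: quotation marks are allowed inside a literal, commas are not, and spaces are the only extra characters allowed inside the braces.

         import java.util.regex.Pattern;

         public class TokenListCheck {
             // token = quoted literal (no commas inside) OR braces holding hex digits and spaces
             private static final String LITERAL = "\"[^,]*\"";
             private static final String HEX     = "\\{[0-9A-Fa-f ]+\\}";
             private static final String TOKEN   = "(?:" + LITERAL + "|" + HEX + ")";
             // whole input = token (',' token)*  -- matches() anchors both ends,
             // just as an XML Schema pattern facet would
             private static final Pattern LIST   = Pattern.compile(TOKEN + "(?:," + TOKEN + ")*");

             public static void main(String[] args) {
                 String[] samples = {
                     "\"some string\",{0A F1}",           // expected: valid
                     "{1122},{face},\"peanut butter\"",   // expected: valid
                     "hi mom,\"hello\"",                  // expected: invalid
                     "DEADBEEF"                           // expected: invalid
                 };
                 for (String s : samples) {
                     System.out.println(LIST.matcher(s).matches() + "  " + s);
                 }
             }
         }

     The same core pattern should carry over to the XML Schema and JavaScript versions, apart from escaping (for example &quot; in the schema file).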


  • Webdriver PageObject Implementation using PageFactory in Java

    - by kamal
    here is what i have so far: A working Webdriver based Java class, which logs-in to the application and goes to a Home page: import java.io.File; import java.io.IOException; import java.util.concurrent.TimeUnit; import org.apache.commons.io.FileUtils; import org.openqa.selenium.By; import org.openqa.selenium.OutputType; import org.openqa.selenium.TakesScreenshot; import org.openqa.selenium.WebDriver; import org.openqa.selenium.firefox.FirefoxDriver; import org.openqa.selenium.firefox.FirefoxProfile; import org.testng.AssertJUnit; import org.testng.annotations.AfterMethod; import org.testng.annotations.BeforeMethod; import org.testng.annotations.Test; public class MLoginFFTest { private WebDriver driver; private String baseUrl; private String fileName = "screenshot.png"; @BeforeMethod public void setUp() throws Exception { FirefoxProfile profile = new FirefoxProfile(); profile.setPreference("network.http.phishy-userpass-length", 255); profile.setAssumeUntrustedCertificateIssuer(false); driver = new FirefoxDriver(profile); baseUrl = "https://a.b.c.d/"; driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS); } @Test public void testAccountLogin() throws Exception { driver.get(baseUrl + "web/certLogon.jsp"); driver.findElement(By.name("logonName")).clear(); AssertJUnit.assertEquals(driver.findElement(By.name("logonName")) .getTagName(), "input"); AssertJUnit.assertEquals(driver.getTitle(), "DA Logon"); driver.findElement(By.name("logonName")).sendKeys("username"); driver.findElement(By.name("password")).clear(); driver.findElement(By.name("password")).sendKeys("password"); driver.findElement(By.name("submit")).click(); driver.findElement(By.linkText("Account")).click(); AssertJUnit.assertEquals(driver.getTitle(), "View Account"); } @AfterMethod public void tearDown() throws Exception { File screenshot = ((TakesScreenshot) driver) .getScreenshotAs(OutputType.FILE); try { FileUtils.copyFile(screenshot, new File(fileName)); } catch (IOException e) { e.printStackTrace(); } driver.quit(); } } Now as we see there are 2 pages: 1. Login page, where i have to enter username and password, and homepage, where i would be taken, once the authentication succeeds. Now i want to implement this as PageObjects using Pagefactory: so i have : package com.example.pageobjects; import static com.example.setup.SeleniumDriver.getDriver; import java.util.concurrent.TimeUnit; import org.openqa.selenium.support.PageFactory; import org.openqa.selenium.support.ui.ExpectedCondition; import org.openqa.selenium.support.ui.FluentWait; import org.openqa.selenium.support.ui.Wait; public abstract class MPage<T> { private static final String BASE_URL = "https://a.b.c.d/"; private static final int LOAD_TIMEOUT = 30; private static final int REFRESH_RATE = 2; public T openPage(Class<T> clazz) { T page = PageFactory.initElements(getDriver(), clazz); getDriver().get(BASE_URL + getPageUrl()); ExpectedCondition pageLoadCondition = ((MPage) page).getPageLoadCondition(); waitForPageToLoad(pageLoadCondition); return page; } private void waitForPageToLoad(ExpectedCondition pageLoadCondition) { Wait wait = new FluentWait(getDriver()) .withTimeout(LOAD_TIMEOUT, TimeUnit.SECONDS) .pollingEvery(REFRESH_RATE, TimeUnit.SECONDS); wait.until(pageLoadCondition); } /** * Provides condition when page can be considered as fully loaded. 
* * @return */ protected abstract ExpectedCondition getPageLoadCondition(); /** * Provides page relative URL/ * * @return */ public abstract String getPageUrl(); } And for login Page not sure how i would implement that, as well as the Test, which would call these pages.
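
     For the missing piece, one possible shape for the logon page object, written against the element names used in the working test above; LogonPage, AccountPage and the title check are assumptions for illustration, not the poster's real classes:

         // assumes this class sits in com.example.pageobjects next to MPage
         import org.openqa.selenium.WebElement;
         import org.openqa.selenium.support.FindBy;
         import org.openqa.selenium.support.PageFactory;
         import org.openqa.selenium.support.ui.ExpectedCondition;
         import org.openqa.selenium.support.ui.ExpectedConditions;

         import static com.example.setup.SeleniumDriver.getDriver;

         public class LogonPage extends MPage<LogonPage> {

             @FindBy(name = "logonName")
             private WebElement logonName;

             @FindBy(name = "password")
             private WebElement password;

             @FindBy(name = "submit")
             private WebElement submit;

             @Override
             public String getPageUrl() {
                 return "web/certLogon.jsp";
             }

             @Override
             protected ExpectedCondition getPageLoadCondition() {
                 return ExpectedConditions.titleIs("DA Logon");
             }

             // logging in hands back the page object for the page that follows
             public AccountPage logonAs(String user, String pass) {
                 logonName.clear();
                 logonName.sendKeys(user);
                 password.clear();
                 password.sendKeys(pass);
                 submit.click();
                 return PageFactory.initElements(getDriver(), AccountPage.class);  // AccountPage is hypothetical
             }
         }

     A test could then read something like new LogonPage().openPage(LogonPage.class).logonAs("username", "password"), with the page-ready assertions moving into each page object's load condition.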


  • Why might a System.String object not cache its hash code?

    - by Dan Tao
    A glance at the source code for string.GetHashCode using Reflector reveals the following (for mscorlib.dll version 4.0): public override unsafe int GetHashCode() { fixed (char* str = ((char*) this)) { char* chPtr = str; int num = 0x15051505; int num2 = num; int* numPtr = (int*) chPtr; for (int i = this.Length; i > 0; i -= 4) { num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0]; if (i <= 2) { break; } num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1]; numPtr += 2; } return (num + (num2 * 0x5d588b65)); } } Now, I realize that the implementation of GetHashCode is not specified and is implementation-dependent, so the question "is GetHashCode implemented in the form of X or Y?" is not really answerable. I'm just curious about a few things: If Reflector has disassembled the DLL correctly and this is the implementation of GetHashCode (in my environment), am I correct in interpreting this code to indicate that a string object, based on this particular implementation, would not cache its hash code? Assuming the answer is yes, why would this be? It seems to me that the memory cost would be minimal (one more 32-bit integer, a drop in the pond compared to the size of the string itself) whereas the savings would be significant, especially in cases where, e.g., strings are used as keys in a hashtable-based collection like a Dictionary<string, [...]>. And since the string class is immutable, it isn't like the value returned by GetHashCode will ever even change. What could I be missing? UPDATE: In response to Andras Zoltan's closing remark: There's also the point made in Tim's answer(+1 there). If he's right, and I think he is, then there's no guarantee that a string is actually immutable after construction, therefore to cache the result would be wrong. Whoa, whoa there! This is an interesting point to make (and yes it's very true), but I really doubt that this was taken into consideration in the implementation of GetHashCode. The statement "therefore to cache the result would be wrong" implies to me that the framework's attitude regarding strings is "Well, they're supposed to be immutable, but really if developers want to get sneaky they're mutable so we'll treat them as such." This is definitely not how the framework views strings. It fully relies on their immutability in so many ways (interning of string literals, assignment of all zero-length strings to string.Empty, etc.) that, basically, if you mutate a string, you're writing code whose behavior is entirely undefined and unpredictable. I guess my point is that for the author(s) of this implementation to worry, "What if this string instance is modified between calls, even though the class as it is publicly exposed is immutable?" would be like for someone planning a casual outdoor BBQ to think to him-/herself, "What if someone brings an atomic bomb to the party?" Look, if someone brings an atom bomb, party's over.
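
     Incidentally, the caching the question expects is exactly what java.lang.String does on the JVM side: an immutable value plus a lazily filled hash field. A tiny Java sketch of that pattern, purely as an illustration and with no claim about what mscorlib actually does:

         final class CachedKey {
             private final char[] value;
             private int hash;                          // 0 means "not computed yet"

             CachedKey(String s) {
                 this.value = s.toCharArray();
             }

             @Override
             public int hashCode() {
                 int h = hash;
                 if (h == 0 && value.length > 0) {
                     for (char c : value) {
                         h = 31 * h + c;                // the recurrence java.lang.String uses
                     }
                     hash = h;                          // benign data race: the write is idempotent
                 }
                 return h;
             }
         }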


  • Doxygen for C++ template class member specialization

    - by Ziv
    When I write class templates, and need to fully-specialize members of those classes, Doxygen doesn't recognize the specialization - it documents only the generic definition, or (if there are only specializations) the last definition. Here's a simple example: ===MyClass.hpp=== #ifndef MYCLASS_HPP #define MYCLASS_HPP template<class T> class MyClass{ public: static void foo(); static const int INT_CONST; static const T TTYPE_CONST; }; /* generic definitions */ template<class T> void MyClass<T>::foo(){ printf("Generic foo\n"); } template<class T> const int MyClass<T>::INT_CONST = 5; /* specialization declarations */ template<> void MyClass<double>::foo(); template<> const int MyClass<double>::INT_CONST; template<> const double MyClass<double>::TTYPE_CONST; template<> const char MyClass<char>::TTYPE_CONST; #endif === MyClass.cpp === #include "MyClass.hpp" /* specialization definitions */ template<> void MyClass<double>::foo(){ printf("Specialized double foo\n"); } template<> const int MyClass<double>::INT_CONST = 10; template<> const double MyClass<double>::TTYPE_CONST = 3.141; template<> const char MyClass<char>::TTYPE_CONST = 'a'; So in this case, foo() will be documented as printing "Generic foo," INT_CONST will be documented as set to 5, with no mention of the specializations, and TTYPE_CONST will be documented as set to 'a', with no mention of 3.141 and no indication that 'a' is a specialized case. I need to be able to document the specializations - either within the documentation for MyClass<T>, or on new pages for MyClass<double>, MyClass<char>. How do I do this? Can Doxygen even handle this? Am I possibly doing something wrong in the declarations/code structure that's keeping Doxygen from understanding what I want? I should note two related cases: A) For templated functions, specialization works fine, e.g.: /* functions that are global/in a namespace */ template<class T> void foo(){ printf("Generic foo\n"); } template<> void foo<double>(){ printf("Specialized double foo\n"); } This will document both foo() and foo(). B) If I redeclare the entire template, i.e. template<> class MyClass<double>{...};, then MyClass<double> will get its own documentation page, as a seperate class. But this means actually declaring an entirely new class - there is no relation between MyClass<T> and MyClass<double> if MyClass<double> itself is declared. So I'd have to redeclare the class and all its members, and repeat all the definitions of class members, specialized for MyClass<double>, all to make it appear as though they're using the same template. Very awkward, feels like a kludge solution. Suggestions? Thanks much :) --Ziv


  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server and I added the necessary configurations in vsftpd.conf so that local users can use clients like FileZilla to manage their homes in a server. I found out that only users in the sudoers list access without a problem only they can't download the files, but users that are not sudoers cannot even access their homes from a client but they can access by a web browser using the FTP protocol and they can only access their home directories (as intented) Im running a fedora 14 on my server and my vsftpd.conf looks like this: # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. 
vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. ascii_upload_enable=YES ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Welcome to GAMBITA FTP service # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). chroot_local_user=YES chroot_list_enable=YES # (default follows) chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd with two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES use_localtime=YES Anyone has an idea of what might be happening? Nothing concerning vsftpd is written in any log


  • How to invalidate nginx reverse proxy cache in front of other nginx servers?

    - by Olivier Lance
    I'm running a Proxmox server on a single IP address, that will dispatch HTTP requests to containers depending on the requested host. I am using nginx on the Proxmox side to listen to HTTP requests and I am using the proxy_pass directive in my different server blocks to dispatch requests according to the server_name. My containers run on Ubuntu and are also running a nginx instance. I'm having troubles with caching on a particular website that is fully static: nginx keeps on serving me stale content after files updates, until I: Clear /var/cache/nginx/ and restart nginx or set proxy_cache off for this server and reload the config Here's the detail of my configuration: On the server (proxmox): /etc/nginx/nginx.conf: user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; use epoll; } http { ## # Basic Settings ## sendfile on; #tcp_nopush on; tcp_nodelay on; #keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; client_body_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 1 1K; ignore_invalid_headers on; client_body_timeout 5; client_header_timeout 5; keepalive_timeout 5 5; send_timeout 5; server_name_in_redirect off; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; # gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; limit_conn_zone $binary_remote_addr zone=gulag:1m; limit_conn gulag 50; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } /etc/nginx/conf.d/proxy.conf: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_hide_header X-Powered-By; proxy_intercept_errors on; proxy_buffering on; proxy_cache_key "$scheme://$host$request_uri"; proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m; /etc/nginx/sites-available/my-domain.conf: server { listen 80; server_name .my-domain.com; access_log off; location / { proxy_pass http://my-domain.local:80/; proxy_cache cache; proxy_cache_valid 12h; expires 30d; proxy_cache_use_stale error timeout invalid_header updating; } } On the container (my-domain.local): nginx.conf: (everything is inside the main config file -- it's been done quickly...) user www-data; worker_processes 1; error_log logs/error.log; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; #tcp_nopush on; keepalive_timeout 65; gzip off; server { listen 80; server_name .my-domain.com; root /var/www; access_log logs/host.access.log; } } I've read many blog posts and answers before resolving to posting my own questions... most answers I can see suggest setting sendfile off; but that didn't work for me. I have tried many other things, double checked my settings and all seems fine. So I'm wondering whether I am not expecting nginx's cache to do something it's not meant to...? 
Basically, I thought that if one of my static files in my container was updated, the cache in my reverse proxy would be invalidated and my browser would get the new version of the file when it requests it... But I now have the sentiment I misunderstood many things. Of all things, I now wonder how nginx on the server can know about a file in the container has changed? I have seen a directive proxy_header_pass (or something alike), should I use this to let the nginx instance from the container somehow inform the one in Proxmox about updated files? Is this expectation just a dream, or can I do it with nginx on my current architecture?


  • Source code versioning with comments (organizational practice) - leave or remove?

    - by ADTC
    Before you start admonishing me with "DON'T DO IT," "BAD PRACTICE!" and "Learn to use proper source code control", please hear me out first. I am fully aware that the practice of commenting out old code and leaving it there forever is very bad and I hate such practice myself. But here's the situation I'm in. A few months ago I joined a company as software developer. I had worked in the company for few months as an intern, about a year before joining recently. Our company uses source code version control (CVS) but not properly. Here's what happened both in my internship and my current permanent position. Each time I was assigned to work on a project (legacy, about 8-10 years old). Instead of creating a CVS account and letting me check out code and check in changes, a senior colleague exported the code from CVS, zipped it up and passed it to me. While this colleague checks in all changes in bulk every few weeks, our usual practice is to do fine-grained versioning in the actual source code itself (each file increments in versions independent from the rest). Whenever a change is made to a file, old code is commented out, new code entered below it, and this whole section is marked with a version number. Finally a note about the changes is placed at the top of the file in a section called Modification History. Finally the changed files are placed in a shared folder, ready and waiting for the bulk check-in. /* * Copyright notice blah blah * Some details about file (project name, file name etc) * Modification History: * Date Version Modified By Description * 2012-10-15 1.0 Joey Initial creation * 2012-10-22 1.1 Chandler Replaced old code with new code */ code .... //v1.1 start //old code new code //v1.1 end code .... Now the problem is this. In the project I'm working on, I needed to copy some new source code files from another project (new in the sense that they didn't exist in destination project before). These files have a lot of historical commented out code and comment-based versioning including usually long or very long Modification History section. Since the files are new to this project I decided to clean them up and remove unnecessary code including historical code, and start fresh at version 1.0. (I still have to continue the practice of comment-based versioning despite hating it. And don't ask why not start at version 0.1...) I have done similar something during my internship and no one said anything. My supervisor has seen the work a few times and didn't say I shouldn't do such clean-up (if at all it was noticed). But a same-level colleague saw this and said it's not recommended as it may cause downtime in the future and increase maintenance costs. An example is when changes are made in another project on the original files and these changes need to be propagated to this project. With code files drastically different, it could cause confusion to an employee doing the propagation. It makes sense to me, and is a valid point. I couldn't find any reason to do my clean-up other than the inconvenience of a ridiculously messy code. So, long story short: Given the practice in our company, should I not do such clean-up when copying new files from project to project? Is it better to make changes on the (copy of) original code with full history in comments? Or what justification can I give for doing the clean-up? PS to mods: Hope you allow this question some time even if for any reason you determine it to be unfit in SO. I apologize in advance if anything is inappropriate including tags.


  • Refactoring a leaf class to a base class, and keeping it also an interface implementation

    - by elcuco
    I am trying to refactor a working code. The code basically derives an interface class into a working implementation, and I want to use this implementation outside the original project as a standalone class. However, I do not want to create a fork, and I want the original project to be able to take out their implementation, and use mine. The problem is that the hierarchy structure is very different and I am not sure if this would work. I also cannot use the original base class in my project, since in reality it's quite entangled in the project (too many classes, includes) and I need to take care of only a subdomain of the problems the original project is. I wrote this code to test an idea how to implement this, and while it's working, I am not sure I like it: #include <iostream> // Original code is: // IBase -> Derived1 // I need to refactor Derive2 to be both indipendet class // and programmers should also be able to use the interface class // Derived2 -> MyClass + IBase // MyClass class IBase { public: virtual void printMsg() = 0; }; /////////////////////////////////////////////////// class Derived1 : public IBase { public: virtual void printMsg(){ std::cout << "Hello from Derived 1" << std::endl; } }; ////////////////////////////////////////////////// class MyClass { public: virtual void printMsg(){ std::cout << "Hello from MyClass" << std::endl; } }; class Derived2: public IBase, public MyClass{ virtual void printMsg(){ MyClass::printMsg(); } }; class Derived3: public MyClass, public IBase{ virtual void printMsg(){ MyClass::printMsg(); } }; int main() { IBase *o1 = new Derived1(); IBase *o2 = new Derived2(); IBase *o3 = new Derived3(); MyClass *o4 = new MyClass(); o1->printMsg(); o2->printMsg(); o3->printMsg(); o4->printMsg(); return 0; } The output is working as expected (tested using gcc and clang, 2 different C++ implementations so I think I am safe here): [elcuco@pinky ~/src/googlecode/qtedit4/tools/qtsourceview/qate/tests] ./test1 Hello from Derived 1 Hello from MyClass Hello from MyClass Hello from MyClass [elcuco@pinky ~/src/googlecode/qtedit4/tools/qtsourceview/qate/tests] ./test1.clang Hello from Derived 1 Hello from MyClass Hello from MyClass Hello from MyClass The question is My original code was: class Derived3: public MyClass, public IBase{ virtual void IBase::printMsg(){ MyClass::printMsg(); } }; Which is what I want to express, but this does not compile. I must admit I do not fully understand why this code work, as I expect that the new method Derived3::printMsg() will be an implementation of MyClass::printMsg() and not IBase::printMsg() (even tough this is what I do want). How does the compiler chooses which method to re-implement, when two "sister classes" have the same virtual function name? If anyone has a better way of implementing this, I would like to know as well :)
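
     For contrast, the composition being asked about is unambiguous in Java, where one class can extend the concrete implementation and implement the interface at the same time, and a single printMsg() satisfies both; this is only a sketch of the shape, not an answer to the C++ dispatch question:

         interface IBase {
             void printMsg();
         }

         class MyClass {
             public void printMsg() {
                 System.out.println("Hello from MyClass");
             }
         }

         // extends the standalone class and exposes it through the project's interface
         class Derived2 extends MyClass implements IBase {
             // the inherited MyClass.printMsg() already implements IBase.printMsg()
         }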

    Read the article

  • Little Regular Expression (against HTML) help

    - by Marcos Placona
    Hi, I have the following HTML <p>Some text <a title="link" href="http://link.com/" target="_blank">my link</a> more text <a title="link" href="http://link.com/" target="_blank">more link</a>.</p> <p>Another paragraph.</p> <p>[code:cf]</p> <p>&lt;cfset ArrFruits = ["Orange", "Apple", "Peach", "Blueberry", </p> <p>"Blackberry", "Strawberry", "Grape", "Mango", </p> <p>"Clementine", "Cherry", "Plum", "Guava", </p> <p>"Cranberry"]&gt;</p> <p>[/code]</p> <p>Another line</p> <p><img src="http://image.jpg" alt="Array" /> </p> <p>More text</p> <p>[code:cf]</p> <p>&lt;table border="1"&gt;</p> <p> &lt;cfoutput&gt;</p> <p> &lt;cfloop array="#GroupsOf(ArrFruits, 5)#" index="arrFruitsIX"&gt;</p> <p>  &lt;tr&gt;</p> <p> &lt;cfloop array="#arrFruitsIX#" index="arrFruit"&gt;</p> <p>     &lt;td&gt;#arrFruit#&lt;/td&gt;</p> <p> &lt;/cfloop&gt;</p> <p>  &lt;/tr&gt;</p> <p> &lt;/cfloop&gt;</p> <p> &lt;/cfoutput&gt;</p> <p>&lt;/table&gt;</p> <p>[/code]</p> <p>With an output that looks like:</p> <p><img src="another_image.jpg" alt="" width="342" height="85" /></p> What I'm trying to do is write a regular expression that will remove all the <p> or </p> tags, and whenever it finds a </p>, it will replace it with a line-break. So far, my pattern looks like this: /\<p\>(.*?)(<\/p>)/g And I'm replacing the matches with: $1\n It all looks good, but it's also replacing the content inside the [code][/code] tags, where in this case it should not touch the tags at all; so, as a result, I would like to get rid of the <p> tags only when the content isn't inside the [code] tags. I can't ever get negation right; I know it will be something along the lines of \<p\>^\[code*\](.*?)(<\/p>) But obviously this doesn't work :-) Could anyone please lend me a hand with this regex? BTW, I know I shouldn't be using regular expressions to parse HTML at all. I'm fully aware of that, but still, for this specific case, I'd like to use regex. Thanks in advance
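    One hedged way around the negation, sketched in TypeScript because the /.../g and $1\n syntax in the question is ECMAScript-flavoured (the sample string is a truncated stand-in for the real HTML, and the [code:cf] marker format is taken from the post): let an alternation consume each whole [code]...[/code] region first, so the <p> branch never sees it, and decide in a replace callback which branch actually matched.

```ts
// Truncated stand-in for the post's HTML; in practice this would be the full string.
const html = `<p>Some text</p>
<p>[code:cf]</p>
<p>&lt;cfset ArrFruits = ["Orange", "Apple"]&gt;</p>
<p>[/code]</p>
<p>Another line</p>`;

// Branch 1 swallows an entire <p>[code:cf]</p> ... <p>[/code]</p> region untouched.
// Branch 2 captures the inner text of any other <p>...</p> pair.
const pattern = /<p>\[code:cf\]<\/p>[\s\S]*?<p>\[\/code\]<\/p>|<p>([\s\S]*?)<\/p>/g;

const cleaned = html.replace(pattern, (match, inner) =>
  inner === undefined
    ? match         // a [code] block matched: keep it verbatim
    : inner + "\n"  // a plain paragraph matched: unwrap it and add a line-break
);

console.log(cleaned);
```

    The same alternation trick works in most PCRE-style engines; only the callback mechanism for "keep branch 1, rewrite branch 2" differs.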

    Read the article

  • IOException during blocking network NIO in JDK 1.7

    - by Bass
    I'm just learning NIO, and here's the short example I've written to test how a blocking NIO can be interrupted: class TestBlockingNio { private static final boolean INTERRUPT_VIA_THREAD_INTERRUPT = true; /** * Prevent the socket from being GC'ed */ static Socket socket; private static SocketChannel connect(final int port) { while (true) { try { final SocketChannel channel = SocketChannel.open(new InetSocketAddress(port)); channel.configureBlocking(true); return channel; } catch (final IOException ioe) { try { Thread.sleep(1000); } catch (final InterruptedException ie) { } continue; } } } private static byte[] newBuffer(final int length) { final byte buffer[] = new byte[length]; for (int i = 0; i < length; i++) { buffer[i] = (byte) 'A'; } return buffer; } public static void main(final String args[]) throws IOException, InterruptedException { final int portNumber = 10000; new Thread("Reader") { public void run() { try { final ServerSocket serverSocket = new ServerSocket(portNumber); socket = serverSocket.accept(); /* * Fully ignore any input from the socket */ } catch (final IOException ioe) { ioe.printStackTrace(); } } }.start(); final SocketChannel channel = connect(portNumber); final Thread main = Thread.currentThread(); final Thread interruptor = new Thread("Inerruptor") { public void run() { System.out.println("Press Enter to interrupt I/O "); while (true) { try { System.in.read(); } catch (final IOException ioe) { ioe.printStackTrace(); } System.out.println("Interrupting..."); if (INTERRUPT_VIA_THREAD_INTERRUPT) { main.interrupt(); } else { try { channel.close(); } catch (final IOException ioe) { System.out.println(ioe.getMessage()); } } } } }; interruptor.setDaemon(true); interruptor.start(); final ByteBuffer buffer = ByteBuffer.allocate(32768); int i = 0; try { while (true) { buffer.clear(); buffer.put(newBuffer(buffer.capacity())); buffer.flip(); channel.write(buffer); System.out.print('X'); if (++i % 80 == 0) { System.out.println(); Thread.sleep(100); } } } catch (final ClosedByInterruptException cbie) { System.out.println("Closed via Thread.interrupt()"); } catch (final AsynchronousCloseException ace) { System.out.println("Closed via Channel.close()"); } } } In the above example, I'm writing to a SocketChannel, but noone is reading from the other side, so eventually the write operation hangs. This example works great when run by JDK-1.6, with the following output: Press Enter to interrupt I/O XXXX Interrupting... Closed via Thread.interrupt() — meaning that only 128k of data was written to the TCP socket's buffer. When run by JDK-1.7 (1.7.0_25-b15 and 1.7.0-u40-b37), however, the very same code bails out with an IOException: Press Enter to interrupt I/O XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXX Exception in thread "main" java.io.IOException: Broken pipe at sun.nio.ch.FileDispatcherImpl.write0(Native Method) at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) at sun.nio.ch.IOUtil.write(IOUtil.java:65) at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) at com.example.TestBlockingNio.main(TestBlockingNio.java:109) Can anyone explain this different behaviour?
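    Not an explanation of the JDK difference, but a defensive tweak worth trying (a sketch that assumes the rest of main stays exactly as in the question): ClosedByInterruptException and AsynchronousCloseException are both subclasses of IOException, so adding a plain IOException catch after them lets the test report a peer-side failure such as "Broken pipe" cleanly instead of dying with a stack trace.

```java
try {
    while (true) {
        buffer.clear();
        buffer.put(newBuffer(buffer.capacity()));
        buffer.flip();
        channel.write(buffer);   // blocks once the socket send buffer is full
    }
} catch (final ClosedByInterruptException cbie) {
    System.out.println("Closed via Thread.interrupt()");
} catch (final AsynchronousCloseException ace) {
    System.out.println("Closed via Channel.close()");
} catch (final IOException ioe) {
    // "Broken pipe" / "Connection reset": the OS reported that the remote end of
    // the TCP connection went away while data was still being written to it.
    System.out.println("Peer went away: " + ioe.getMessage());
}
```

    The catch order matters: the two close-related exceptions must come before the general IOException or they would never be reached.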

    Read the article

  • Recursively created linked lists with a class, C++

    - by Jon Brant
    I'm using C++ to recursively make a hexagonal grid (using a multiply linked list style). I've got it set up to create neighboring tiles easily, but because I'm doing it recursively, I can only really create all 6 neighbors for a given tile. Obviously, this is causing duplicate tiles to be created and I'm trying to get rid of them in some way. Because I'm using a class, checking for null pointers doesn't seem to work. It's either failing to convert from my Tile class to an int, or somehow converting it but not doing it properly. I'm explicitly setting all pointers to NULL upon creation, and when I check to see if a pointer is still NULL, it says it's not, even though I never touched it after initialization. Is there a specific way I'm supposed to do this? I can't even traverse the grid without NULLs of some kind. Here's some of my relevant code. Yes, I know it's embarrassing. Tile class header: class Tile { public: Tile(void); Tile(char *Filename); ~Tile(void); void show(void); bool LoadGLTextures(); void makeDisplayList(); void BindTexture(); void setFilename(char *newName); char Filename[100]; GLuint texture[2]; GLuint displayList; Tile *neighbor[6]; float xPos, yPos,zPos; }; Tile Initialization: Tile::Tile(void) { xPos=0.0f; yPos=0.0f; zPos=0.0f; glEnable(GL_DEPTH_TEST); strcpy(Filename, strcpy(Filename, "Data/BlueTile.bmp")); if(!BuildTexture(Filename, texture[0]))MessageBox(NULL,"Texture failed to load!","Crap!",MB_OK|MB_ICONASTERISK); for(int x=0;x<6;x++) { neighbor[x]=NULL; } } Creation of neighboring tiles: void MakeNeighbors(Tile *InputTile, int stacks) { for(int x=0;x<6;x++) { InputTile->neighbor[x]=new Tile();InputTile->neighbor[x]->xPos=0.0f;InputTile->neighbor[x]->yPos=0.0f;InputTile->zPos=float(stacks); } if(stacks) { for(int x=0;x<6;x++)MakeNeighbors(InputTile->neighbor[x],stacks-1); } } And finally, traversing the grid: void TraverseGrid(Tile *inputTile) { Tile *temp; for(int x=0;x<6;x++) if(inputTile->neighbor[x]) { temp=inputTile->neighbor[x]; temp->xPos=0.0f; TraverseGrid(temp); //MessageBox(NULL,"Not Null!","SHUTDOWN ERROR",MB_OK | MB_ICONINFORMATION); } } The key line is "if(inputTile->neighbor[x])" and whether I make it "if(inputTile->neighbor[x]==NULL)" or whatever I do, it just isn't handling it properly. Oh and I'm also aware that I haven't set up the list fully. It's only one direction now.
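    A sketch of one way to stop the duplicates (names follow the question; the assumption that direction d and direction (d + 3) % 6 are opposite sides of a hex is mine, not something stated in the post): only allocate a neighbor when its slot is still NULL, and immediately wire the reciprocal pointer so the two tiles share one object instead of each recursion level creating its own copy.

```cpp
// Create or reuse neighbors instead of unconditionally new-ing six tiles per call.
void MakeNeighbors(Tile* tile, int stacks)
{
    if (stacks <= 0) return;

    for (int dir = 0; dir < 6; ++dir) {
        if (tile->neighbor[dir] == NULL) {            // the pointer test works once the
            Tile* fresh = new Tile();                 // constructor really NULLs the array
            fresh->zPos = static_cast<float>(stacks);
            tile->neighbor[dir] = fresh;
            fresh->neighbor[(dir + 3) % 6] = tile;    // opposite side points back
        }
        MakeNeighbors(tile->neighbor[dir], stacks - 1);
    }
}
```

    This only removes the direct back-and-forth duplication; a real hex grid also needs the links between adjacent neighbors, and once the back-pointers exist the structure contains cycles, so any traversal should carry a visited set (for example a std::set<Tile*>) rather than relying on NULL checks to terminate.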

    Read the article

  • RavenDB - Can I create/query an index, from an application that doesn't share the CLR objects that were persisted?

    - by Jaz Lalli
    This post goes some way to answering this question (I'll include the answer later), but I was hoping for some further details. We have a number of applications that each need to access/manipulate data from Raven in their own way. Data is only written via the main web application. Other apps include batch-style tasks, reporting etc. In an attempt to keep each of these as de-coupled as possible, they are separate solutions. That being the case, how can I, from the reporting application, create indexes over the existing data, using my locally defined types? The answer from the linked question states As long as the structure of the classes you are deserializing into partially matches the structure of the data, it shouldn't make a difference. The RavenDB server doesn't care at all what classes you use in the client. You certainly could share a dll, or even share a portable dll if you are targeting a different platform. But you are correct that it is not necessary. However, you should be aware of the Raven-Clr-Type metadata value. The RavenDB client sets this when storing the original document. It is consumed back by the client to assist with deserialization, but it is not fully enforced It's the first part of that that I wanted clarification on. Do the object graphs for the docs on the server and types in my application have to match exactly? If the Click document on the server is { "Visit": { "Version": "0", "Domain": "www.mydomain.com", "Page": "/index", "QueryString": "", "IPAddress": "127.0.0.1", "Guid": "10cb6886-cb5c-46f8-94ed-4b0d45a5e9ca", "MetaData": { "Version": "1", "CreatedDate": "2012-11-09T15:11:03.5669038Z", "UpdatedDate": "2012-11-09T15:11:03.5669038Z", "DeletedDate": null } }, "ResultId": "Results/1", "ProductCode": "280", "MetaData": { "Version": "1", "CreatedDate": "2012-11-09T15:14:26.1332596Z", "UpdatedDate": "2012-11-09T15:14:26.1332596Z", "DeletedDate": null } } Is it possible (and if so, how?), to create a Map index from my application, which defines the Click class as follows? class Click { public Guid Guid {get;set;} public int ProductCode {get;set;} public DateTime CreatedDate {get;set;} } Or would my class have to be look like this? (where the custom types are defined as a sub-set of the properties on the document above, with matching property names) class Click { public Visit Visit {get;set;} public int ProductCode {get;set;} public MetaData MetaData {get;set;} } UPDATE Following on from the answer below, here's the code I managed to get working. Index public class Clicks_ByVisitGuidAndProductCode : AbstractIndexCreationTask { public override IndexDefinition CreateIndexDefinition() { return new IndexDefinition { Map = "from click in docs.Clicks select new {Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate}", TransformResults = "results.Select(click => new {Guid = click.Visit.Guid, ProductCode = click.ProductCode, CreatedDate = click.MetaData.CreatedDate})" }; } } Query var query = _documentSession.Query<ReportClick, Clicks_ByVisitGuidAndProductCode>() .Customize(x => x.WaitForNonStaleResultsAsOfNow()) .Where(x => x.CreatedDate >= start.Date && x.CreatedDate < end.Date); where Click is public class Click { public Guid Guid { get; set; } public int ProductCode { get; set; } public DateTime CreatedDate { get; set; } } Many thanks @MattJohnson.

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END
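    A hedged sketch of the usual split-tunnel recipe on Linux (the interface name tun0, mark value 1 and table number 100 are all assumptions, and HMA may push different routes): stop OpenVPN from replacing the default route, then use packet marking plus a policy-routing table so only ports 80 and 443 leave via the tunnel, keeping SSH and MySQL on the normal EC2 path.

```
# client.cfg: accept pushed options but ignore pushed routes / redirect-gateway
route-nopull

# Mark outbound web traffic (run as root on the Ubuntu instance)
iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1

# Send marked packets through a table whose default route is the VPN interface
ip rule add fwmark 1 table 100
ip route add default dev tun0 table 100
ip route flush cache

# Depending on the setup, the marked packets may also need to leave with the
# tun0 address as their source:
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
```

    Because route-nopull stops the VPN from touching the main routing table, losing the tunnel no longer cuts off SSH, which is the failure described above.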

    Read the article

  • Iphone: controlling text with delay problem with UIWebView

    - by James B.
    Hi, I've opted to use UIWebView so I can control the layout of text I've got contained in a local html document in my bundle. I want the text to display within a UIWebView I've got contained within my view. So the text isn't the whole view, just part of it. Everything runs fine, but when the web page loads I get a blank screen for a second before the text appears. This looks really bad. can anyone give me an example of how to stop this happening? I'm assuming I have to somehow hide the web view until it has fully loaded? Could someone one tell me how to do this? At the moment I'm calling my code through the viewDidLoad like this: [myUIWebView loadRequest: [NSURLRequest requestWithURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"localwebpage" ofType:@"html"] isDirectory:NO]]]; Any help is much appreciated. I've read round a few forums and not seen a good answer to this question, and it seems like it recurs a lot as an issue for beginners like myself. Thanks for taking the time to read this post! UPDATED info Thanks for your response. The suggestions below solves the problem but creates a new one for me as now when my view loads it is totally hidden until I click on my toggle switch. to understand this it's maybe most helpful if I post all my code. Before this though let me explain the setup of my view. I've got a standard view within which I've also got two web views, one on top of the other. each web view contains different text with different styling. the user flicks between views using a toggle switch, which hides/reveals the web views. I'm using the web views because I want to control the style/layout of the text. Below is my full .m code, I can't figure out where it's going wrong. My web views are called oxford & harvard I'm sure its something to do with how/when I'm hiding/revealing views. I've played around with this but can't seem to get it right. Maybe my approach is wrong. A bit of advice ironing this out would be really appreciated: @implementation ReferenceViewController @synthesize oxford; @synthesize harvard; // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; [oxford loadRequest: [NSURLRequest requestWithURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"Oxford" ofType:@"html"] isDirectory:NO]]]; [harvard loadRequest: [NSURLRequest requestWithURL:[NSURL fileURLWithPath:[[NSBundle mainBundle]pathForResource:@"Harvard" ofType:@"html"] isDirectory:NO]]]; [oxford setHidden:YES]; [harvard setHidden:YES]; } - (void)webViewDidFinishLoad:(UIWebView *)webView { if([webView hidden]) { [oxford setHidden:NO]; [harvard setHidden:NO]; } } //Toggle controls for toggle switch in UIView to swap between webviews - (IBAction)toggleControls:(id)sender { if ([sender selectedSegmentIndex] == kSwitchesSegmentIndex) { oxford.hidden = NO; harvard.hidden = YES; } else { oxford.hidden = YES; harvard.hidden = NO; } } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } - (void)viewDidUnload { [super viewDidUnload]; // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } - (void)dealloc { [super dealloc]; [oxford release]; [harvard release]; } @end
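    A sketch in the question's own Objective-C (toggleControl is an assumed outlet to the segmented control, and kSwitchesSegmentIndex is taken to mean "oxford selected", matching toggleControls:): reveal a web view only when it is both the one that just finished loading and the one the toggle currently selects, so neither blank page is ever shown.

```objc
- (void)webViewDidFinishLoad:(UIWebView *)webView {
    BOOL oxfordSelected = (toggleControl.selectedSegmentIndex == kSwitchesSegmentIndex);
    if (webView == oxford) {
        oxford.hidden = !oxfordSelected;   // show only if it is the active choice
    } else if (webView == harvard) {
        harvard.hidden = oxfordSelected;   // keep the inactive view hidden
    }
}
```

    Two smaller points worth checking against the posted code: toggleControls: will still unhide a view that has not finished loading if the user flips the switch very early (a per-view "loaded" flag fixes that), and in dealloc the [super dealloc] call should come last, after oxford and harvard are released.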

    Read the article

  • Exit code 3 (not my return value, looking for source)

    - by Kathoz
    Greetings, my program exits with the code 3. No error messages, no exceptions, and the exit is not initiated by my code. The problem occurs when I am trying to read extremely long integer values from a text file (the text file is present and correctly opened, with successful prior reading). I am using very large amounts of memory (in fact, I think that this might be the cause, as I am nearly sure I go over the 2GB per process memory limit). I am also using the GMP (or, rather, MPIR) library to multiply bignums. I am fairly sure that this is not a file I/O problem as I got the same error code on a previous program version that was fully in-memory. System: MS Visual Studio 2008 MS Windows Vista Home Premium x86 MPIR 2.1.0 rc2 4GB RAM Where might this error code originate from? EDIT: this is the procedure that exits with the code void condenseBinSplitFile(const char *sourceFilename, int partCount){ //condense results file into final P and Q std::string tempFilename; std::string inputFilename(sourceFilename); std::string outputFilename(BIN_SPLIT_FILENAME_DATA2); mpz_class *P = new mpz_class(0); mpz_class *Q = new mpz_class(0); mpz_class *PP = new mpz_class(0); mpz_class *QQ = new mpz_class(0); FILE *sourceFile; FILE *resultFile; fpos_t oldPos; int swapCount = 0; while (partCount > 1){ std::cout << partCount << std::endl; sourceFile = fopen(inputFilename.c_str(), "r"); resultFile = fopen(outputFilename.c_str(), "w"); for (int i=0; i<partCount/2; i++){ //Multiplication order: //Get Q, skip P //Get QQ, mul Q and QQ, print Q, delete Q //Jump back to P, get P //Mul P and QQ, delete QQ //Skip QQ, get PP //Mul P and PP, delete P and PP //Get Q, skip P mpz_inp_str(Q->get_mpz_t(), sourceFile, CALC_BASE); fgetpos(sourceFile, &oldPos); skipLine(sourceFile); skipLine(sourceFile); //Get QQ, mul Q and QQ, print Q, delete Q mpz_inp_str(QQ->get_mpz_t(), sourceFile, CALC_BASE); (*Q) *= (*QQ); mpz_out_str(resultFile, CALC_BASE, Q->get_mpz_t()); fputc('\n', resultFile); (*Q) = 0; //Jump back to P, get P fsetpos(sourceFile, &oldPos); mpz_inp_str(P->get_mpz_t(), sourceFile, CALC_BASE); //Mul P and QQ, delete QQ (*P) *= (*QQ); (*QQ) = 0; //Skip QQ, get PP skipLine(sourceFile); skipLine(sourceFile); mpz_inp_str(PP->get_mpz_t(), sourceFile, CALC_BASE); //Mul P and PP, delete PP, print P, delete P (*P) += (*PP); (*PP) = 0; mpz_out_str(resultFile, CALC_BASE, P->get_mpz_t()); fputc('\n', resultFile); (*P) = 0; } partCount /= 2; fclose(sourceFile); fclose(resultFile); //swap filenames tempFilename = inputFilename; inputFilename = outputFilename; outputFilename = tempFilename; swapCount++; } delete P; delete Q; delete PP; delete QQ; remove(BIN_SPLIT_FILENAME_RESULTS); if (swapCount%2 == 0) rename(sourceFilename, BIN_SPLIT_FILENAME_RESULTS); else rename(BIN_SPLIT_FILENAME_DATA2, BIN_SPLIT_FILENAME_RESULTS); }
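    A guess worth testing rather than a definitive answer: the Microsoft C runtime's abort() ends a process with exit code 3, and GMP/MPIR terminates when an allocation fails, which fits a 32-bit process brushing against the 2 GB limit while holding several huge mpz values. Installing allocation hooks via mp_set_memory_functions makes such a failure print a diagnostic first; the sketch below is mine (the message text and header choice are assumptions).

```cpp
#include <cstdio>
#include <cstdlib>
#include <mpir.h>   // or <gmp.h>, whichever header the project builds against

// GMP/MPIR expects its allocators to terminate on failure rather than return NULL,
// so these wrappers only add a diagnostic before giving up.
static void* loggingAlloc(size_t bytes) {
    void* p = std::malloc(bytes);
    if (!p) {
        std::fprintf(stderr, "MPIR allocation of %lu bytes failed\n", (unsigned long)bytes);
        std::abort();
    }
    return p;
}

static void* loggingRealloc(void* old, size_t oldBytes, size_t newBytes) {
    void* p = std::realloc(old, newBytes);
    if (!p) {
        std::fprintf(stderr, "MPIR reallocation to %lu bytes failed\n", (unsigned long)newBytes);
        std::abort();
    }
    return p;
}

static void loggingFree(void* p, size_t /*bytes*/) {
    std::free(p);
}

int main() {
    mp_set_memory_functions(loggingAlloc, loggingRealloc, loggingFree);
    // ... call condenseBinSplitFile(...) and the rest of the program as before ...
    return 0;
}
```

    If the diagnostic fires, the fix is to reduce peak memory (write intermediate P and Q values to disk sooner) or move to a 64-bit build rather than to handle the failure, since GMP/MPIR cannot recover from a failed allocation.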

    Read the article

  • Clearcase - selective merge.

    - by Keshav
    Hi, I have a peculiar ClearCase question. I cannot fully describe why I'm using such a confusing architecture, but I need to do it (thanks to a mistake someone made long ago). Ok, here's a bit of detail: B1 is a contaminated branch where both my group's changes and another group's changes got mixed together so badly that there is no way of finding which code is whose. So the solution proposed is to create a new branch called B2 (at the same level as B1) and put all the unmodified code of the other group on it (the way to do that would be to merge B1 with B2 and then go about removing all changes from it until it matches the original). Then create a CR branch on B1 and keep only my group's newly added or modified files on that branch. Finally create an integration branch out of B2 and merge the changes from the CR branch of B1 to the integration branch of B2. So here is what I did: (The use case: I have a directory D containing files a, b and c. My group ended up modifying file a while b and c are not modified at all). There is a branch B1 on which there are files a, b and c. There is another branch B2. A merge is done from B1 to B2. Now B2 also has a, b and c. At this point branches B1 and B2 are the same. Now I delete file a from branch B2 (rmname). Now B2 has b and c only. I put a label called Label1 on this branch. This makes the code with label Label1 the unmodified code from the other group. Now I create a sub-branch called CR1 from B1 and delete all the files that are in the B2 branch (i.e. b and c), so that it contains only the code modified from the original. In my case that is file a. At this point branch B2 with label Label1 has files b and c (the unmodified code) and branch CR1 coming off B1 has only a (which is modified by us). Now I create another branch called the integration branch that comes off B2 Label1. And then I do a merge of the CR branch onto that, expecting that it will have all three files a, b and c for me. All I'd need to do is look at a version tree view and see who modified what. But the problem I face is that, since I had done an rmname of file a on branch B2 before putting the label on, the merge does not really take file a from the CR branch. How do I get around that problem? I want to merge selectively. Is it possible? Sorry if it is a bad design. I'm not really conversant with ClearCase and have limited options and time to clean up someone else's mess.
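    Selective merges are possible per element. A hedged sketch of the cleartool side (branch paths such as /main/B1/CR1/LATEST are illustrative and must match the real VOB layout): because rmname only removed the name from B2's directory version, the directory element has to be merged first so that "a" is cataloged again on the integration branch, and then the file element can be merged on its own without touching b or c.

```
# In a view selecting the integration branch, inside directory D:

# 1. Check out the directory and merge its version from the CR branch: this
#    re-adds the name "a" without affecting b or c (review before accepting).
cleartool co -nc .
cleartool merge -to . -version /main/B1/CR1/LATEST

# 2. Preview, then perform, the merge of just the file element:
cleartool findmerge a -fversion /main/B1/CR1/LATEST -print
cleartool findmerge a -fversion /main/B1/CR1/LATEST -merge

# 3. Check everything in once the result looks right:
cleartool ci -nc . a
```

    Merging the directory explicitly is what makes the merge selective: only the elements you name get merge arrows, and b and c stay exactly as labeled on B2.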

    Read the article

< Previous Page | 199 200 201 202 203 204 205 206 207 208 209 210  | Next Page >