Search Results

Search found 18803 results on 753 pages for 'link'.


  • Why can't I create a soft link on a vboxsf file system?

    - by Artem Ice
    ln -s keeps telling me that the file system is read-only, but it is not:

        ice@distantstar:~/virt ? touch file
        ice@distantstar:~/virt ? rm file
        ice@distantstar:~/virt ? ln -s ~/.bashrc ~/virt/.bashrc
        ln: failed to create symbolic link `/home/ice/virt/.bashrc': Read-only file system
        ice@distantstar:~/virt ? mount | grep virt
        none on /home/ice/virt type vboxsf (rw,nodev,relatime)
        ice@distantstar:~/virt ? cat /etc/fstab | grep virt
        VIRT /home/ice/virt vboxsf rw 0 0
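
    If the underlying cause is VirtualBox's default refusal to let guests create symlinks on shared folders, the commonly cited workaround is a host-side setting, run with the VM powered off ("distantstar" and "VIRT" below stand in for the actual VM and share names, which are assumptions here):

        VBoxManage setextradata "distantstar" \
            VBoxInternal2/SharedFoldersEnableSymlinksCreate/VIRT 1

    Whether this addresses the specific "Read-only file system" message above is not certain; note that touch and rm succeeding suggests the mount itself is writable.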

    Read the article

  • In CSS, how do I override my a:link and a:hover directives for a specific span?

    - by brendan
    This will probably be a softball for you CSS folks... I have a site like this:

        <div id="header">
          <span class="myheader">This is the name of my awesome site!!!!</span>
        </div>
        <div id="content">whole bunch of other stuff</div>
        <div id="sidemenu"><ul><li>something</li><li>something else</li></ul></div>
        <div id="footer">Some footer stuff will go here....</div>

    In my CSS I have some directives to format the hyperlinks:

        a:link    { text-decoration: none;      color: #ff6600; border: 0px; -moz-outline-style: none; }
        a:active  { text-decoration: underline; color: #ff6600; border: 0px; -moz-outline-style: none; }
        a:visited { text-decoration: none;      color: #ff6600; border: 0px; -moz-outline-style: none; }
        a:hover   { text-decoration: underline; color: #000;    border: 0px; -moz-outline-style: none; }
        a:focus   { outline: none; -moz-outline-style: none; }

    Now here is the problem. In my header I have some text that is a link, but I do not want to format it like all the other links on the site. So basically I want my a:link, a:hover, etc. to ignore anything in the "header" div. How can I do this? I assume I need to override this for that div/span?
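
    A minimal sketch of the usual fix: descendant selectors scoped to the header are more specific than the bare a:link/a:hover rules, so they win for links inside that div (the color value below is a placeholder):

        #header a:link,
        #header a:visited,
        #header a:hover,
        #header a:active {
            text-decoration: none;
            color: inherit;   /* placeholder: use whatever the header text colour should be */
        }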

    Read the article

  • CodeIgniter anchor producing a dodgy link in the email inbox: what could the problem be?

    - by Psychonetics
    My application is emailing out fine, but the email I receive displays incorrectly. Rather than showing text with a simple "click here to activate" link, it shows this instead:

        Hi user1, please click the following link to activate your account <a href="http://mysite.com/activation/fzyZuyxVAzZS2koVg5UFjfVjlcLNcrzp">ssss</a>

    Here is the code from my model that sends the email to the user when they request an activation email:

        $this->load->library('email');
        $this->email->from('[email protected]', 'my site');
        $this->email->to($result[0]->email);
        $this->email->subject('my site - Activate your account');
        $this->email->message('Hi ' . $result[0]->first_name . ', please click the following link to activate your account ' . anchor('http://mysite.com/activation/' . $new_activation_code, 'click here to activate'));
        $this->email->send();

    Also, the mail always ends up in my spam folder.
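
    The likely explanation (hedged, since the poster's email config is not shown) is that CodeIgniter's email library defaults to plain-text mail, so the HTML produced by anchor() is rendered literally. Switching the mail type to HTML before sending is the standard fix:

        // Sketch only: same send code as above, with the mail type set to HTML
        // so the <a> tag produced by anchor() renders as a clickable link.
        $this->load->library('email');
        $this->email->initialize(array('mailtype' => 'html'));
        // ... from(), to(), subject(), message() and send() as in the model above

    The spam-folder problem is a separate deliverability issue (SPF/DKIM, sending host reputation) rather than anything in this code.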

    Read the article

  • How to link processing power of old computers together?

    - by redIago
    Hey all, I'm sitting on 8 old computers of varied sorts that are more or less useless at this point for any other purpose. Is there a way I could link their hardware or processing power together over WiFi and use one as a central computer? For example, it would be cool to distribute the processing of some video game or an encryption-generating program over the collective computers. Any way to do all this? Thanks.

    Read the article

  • How can I set my TP-Link TL-WR1043ND to extend my router - modem range?

    - by Pitto
    I'd like to extend my WiFi coverage, so I've bought the TP-Link TL-WR1043ND and updated its firmware to the latest (wr1043nv1_en_3_13_4_up(110429)), but I can't figure out how to use its WDS function. Reading further on Super User, I understand that both the modem-router (Pirelli Alice Gate) and the TL-WR1043ND should support WDS. Are there any tricks to achieve the same result - extending my WiFi range - even by changing the firmware to DD-WRT, Tomato, etc.?

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on three similar machines of people in our group. I was successful on two machines, and the solution failed to build on my machine. The build on my machine encountered two problems:

    1. During a simple build, creation of the biggest static library (about 522Mb in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".

    2. A full solution rebuild creates this library, however when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80Mb of physical memory and 250MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process which uses all the memory required for linking (about 500Mb physical).

    There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4Gb of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, however I am not sure that it would help, and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • Add Source file link to the default ASP.NET Server Error page?

    - by Max Schilling
    Has anyone ever thought to attempt to modify the default ASP.NET server error page to provide a link BACK to the error source in Visual Studio? Consider the following standard error page in ASP.NET:

        Server Error in '/myproject' Application.
        Invalid object name 'usp_DoSomething'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'usp_DoSomething'.
        Source Error:
            Line 4323: cmd.CommandText = "usp_DoSomething";
            Line 4324:
            Line 4325: using (var dr = cmd.ExecuteReader())
            Line 4326: {
            Line 4327:     if (dr != null)
        Source File: c:\development\myproject\myproject.components\providers\sql\sqldataprovider.cs    Line: 4325

    When an error like this is generated, the HTML has the source of the error: the file it occurs in and the line number. Has anyone ever written, or thought of writing, some mechanism to turn that text into a link back to the error in Visual Studio? I've never seen anything that does it, but it just seems like it would be a helluva nice feature, and I think about it in the back of my mind every time an error occurs and I have to manually go find it in the source. It would just be nice to be able to click a link that takes me straight there. Has anyone written, or does anyone know of, any solutions for this? I use Chrome or Firefox as my browsers of choice, but I'd even consider using IE again if someone found a plugin that did this. Thanks, Max

    Read the article

  • Troubleshooting multiple GET variables in PHP

    - by V413HAV
    This may be a very simple question, but I don't know what I'm doing wrong here... To explain it clearly, I've set up a really simple example below:

        <ul>
            <li><a href="test.php?link1=true">Link 1</a></li>
            <li><a href="test.php?link2=true">Link 2</a></li>
            <li><a href="test.php?link3=true">Link 3</a></li>
        </ul>
        <?php
        if(isset($_GET['link1'])) {
            if(($_GET['link1']) == 'true') {
                echo 'This Is Link 1';
            }
        }
        if(isset($_GET['link2'])) {
            if(($_GET['link2']) == 'true') {
                echo 'This Is Link 2';
            }
        }
        if(isset($_GET['link3'])) {
            if(($_GET['link3']) == 'true') {
                echo 'This Is Link 3';
            }
        }
        ?>

    This is a test.php page. Here I've set three different arguments for $_GET and show content accordingly. Everything works perfectly; the only thing I don't understand is how to block this kind of URL. If the user clicks on link 1, the URL will be:

        http://localhost/test.php?link1=true

    and the output of this URL is "This Is Link 1". Now if I change the URL to:

        http://localhost/test.php?link3=true&link2=true&link1=true

    the output I get is "This Is Link 1This Is Link 2This Is Link 3". It's OK here, but it's very annoying if someone types this and sees the forms one below the other. Is there any way I can stop this tampering?
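
    One hedged way to make the sections mutually exclusive (a sketch, not the poster's real page): use a single query parameter and act on its value, so stacking several flags in the query string has no effect:

        <ul>
            <li><a href="test.php?link=1">Link 1</a></li>
            <li><a href="test.php?link=2">Link 2</a></li>
            <li><a href="test.php?link=3">Link 3</a></li>
        </ul>
        <?php
        // Only one section can ever be shown, because only one value of 'link' is read.
        $link = isset($_GET['link']) ? (int) $_GET['link'] : 0;
        if ($link >= 1 && $link <= 3) {
            echo 'This Is Link ' . $link;
        }
        ?>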

    Read the article

  • str_replace between two numerical strings

    - by user330840
    Another problem with str_replace. I would like to turn the following $title data into a URL slug by taking the string between the number at the beginning and the dash (-):

        Chicago's Public Schools - $10.3M
        New Jersey - $3M
        Michigan: Public Health - $1M

    The desired output is:

        chicago-public-school
        new-jersey
        michigan-public-health

    The PHP code I am using:

        $title = ucwords(strtolower(strip_tags(str_replace("1: ","",$title))));
        $x=1;
        while($x <= 10) {
            $title = ucwords(strtolower(strip_tags(str_replace("$x: ","",$title))));
            $x++;
        }
        $link = preg_replace('/[<>()!#?:.$%\^&=+~`*&#233;"\']/', '',$title);
        $money = str_replace(" ","-",$link);
        $link = explode(" - ",$link);
        $link = preg_replace(" (\(.*?\))", "", $link[0]);
        $amount = preg_replace(" (\(.*?\))", "", $link[1]);
        $code_entities_match = array( '&#39;s' ,'&quot;' ,'!' ,'@' ,'#' ,'$' ,'%' ,'^' ,'&' ,'*' ,'(' ,')' ,'+' ,'{' ,'}' ,'|' ,':' ,'"' ,'<' ,'>' ,'?' ,'[' ,']' ,'' ,';' ,"'" ,',' ,'.' ,'_' ,'/' ,'*' ,'+' ,'~' ,'`' ,'=' ,' ' ,'---' ,'--','--');
        $code_entities_replace = array('' ,'-' ,'-' ,'' ,'' ,'' ,'-' ,'-' ,'' ,'' ,'' ,'' ,'' ,'' ,'' ,'-' ,'' ,'' ,'' ,'' ,'' ,'' ,'' ,'' ,'' ,'-' ,'' ,'-' ,'-' ,'' ,'' ,'' ,'' ,'' ,'-' ,'-' ,'-','-');
        $link = str_replace($code_entities_match, $code_entities_replace, $link);
        $link = strtolower($link);

    Unfortunately the result I got is:

        -chicagoamp9s-public-school
        2-new-jersey
        3-michigan-public-health

    Anyone have a better solution for this? Thanks guys! (The &#39; changed into amp9 - I wonder why?)
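
    A hedged sketch of a simpler slug routine (the function name and regular expressions are illustrative, not from the original post): decode the HTML entities first, keep only the part before the " - ", then collapse everything that is not a letter or digit into single hyphens. It keeps the possessive "s" and the plural, so the first example comes out as "chicago-s-public-schools" rather than the exact string above; adjust to taste.

        function make_slug($title)
        {
            $parts = explode(' - ', html_entity_decode($title, ENT_QUOTES)); // &#39; becomes a real apostrophe
            $name  = preg_replace('/^\d+:\s*/', '', $parts[0]);              // drop a leading "1: " style prefix, if any
            $name  = preg_replace('/[^a-zA-Z0-9]+/', '-', $name);            // non-alphanumerics collapse to hyphens
            return strtolower(trim($name, '-'));
        }

        echo make_slug("Chicago's Public Schools - \$10.3M"); // prints "chicago-s-public-schools"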

    Read the article

  • Why would GNU ld link order cause Signal 11 (SEGV) on startup?

    - by Benoit
    We are building a large application in C++ that includes the use of many (static) libraries. We have a problem where the application crashes on startup with a Signal 11, before we even reach main. After much debugging, we have observed that if we explicitly reference an object file so its link order is early, the program crashes on startup. If the file is referenced later (or not referenced at all), the program does not crash. To be clear, there is NO code invoked directly from this object file. However, as it is C++, there might be static objects that do get constructed (it's a CORBA IDL generated file). We use the -Wl,--start-group ... --end-group arguments to multi-pass link the symbols since the libraries are interdependent. Here is a representation of the linker's object file order:

        Order 1       Order 2       Order 3
        foo.o         foo.o         foo.o
        ...           ...           ...
        main.o        main.o        main.o
        crasher.o     libA.o        libA.o
        libA.o        LibB.o        LibB.o
        LibB.o        LibC.o        LibC.o
        LibC.o                      crasher.o

        Results:
        NO CRASH      NO CRASH      CRASH

    Does anyone have an idea why the link order has an effect on the crash? It would be nice if we could force crasher.o to link later, but we're really after an explanation. Also, is there a way to force the linker to place crasher.o towards the end? Just to add to the fun, in actuality crasher.o is part of a library in the --start/--end-group.
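
    A plausible (hedged) explanation, with a minimal illustration rather than the poster's actual code: C++ gives no guarantee about the order in which non-trivial static initializers in different translation units run, and in practice they often run roughly in link order. If something crasher.o registers at static-initialization time touches a global object defined elsewhere, linking crasher.o early can make its initializer run before that global has been constructed:

        // registry.cpp -- defines a global with a non-trivial constructor
        #include <map>
        #include <string>
        std::map<std::string, int> g_registry;   // constructed at some point before main()

        // crasher.cpp -- a static object whose constructor uses g_registry
        #include <map>
        #include <string>
        extern std::map<std::string, int> g_registry;
        namespace {
            struct AutoRegister {
                AutoRegister() { g_registry["crasher"] = 1; }   // undefined behaviour if it runs first
            } g_auto_register;
        }

    The usual cure is to remove the ordering dependency altogether, e.g. by wrapping the global in a function with a local static ("construct on first use"), rather than trying to control link order.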

    Read the article

  • Ping "replies" from same computer with 'Destination host unreachable' (no route to other computer)

    - by Srekel
    I've got two computers in a LAN behind a wireless router. One has XP with ip 192.168.1.2 This one has W7 with ip 192.168.1.7 If I try to ping the other one from this computer, I get this: C:\Users\Srekel>ping 192.168.1.2 Pinging 192.168.1.2 with 32 bytes of data: Reply from 192.168.1.7: Destination host unreachable. Reply from 192.168.1.7: Destination host unreachable. Reply from 192.168.1.7: Destination host unreachable. Reply from 192.168.1.7: Destination host unreachable. Ping statistics for 192.168.1.2: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Tracert gives the same result: C:\Users\Srekel>tracert 192.168.1.2 Tracing route to 192.168.1.2 over a maximum of 30 hops 1 Kakburken4 [192.168.1.7] reports: Destination host unreachable. Trace complete. Although I can ping and tracert the router without any problems. I have disabled the firewalls on both computers. The router is set to use DHCP (if that matters). Here is the output from "route". C:\Users\Srekel>route print =========================================================================== Interface List 13...00 25 86 df c6 89 ......TP-LINK Wireless N Adapter 12...e0 cb 4e 26 b9 84 ......Realtek PCIe GBE Family Controller #2 11...e0 cb 4e 26 be 94 ......Realtek PCIe GBE Family Controller 1...........................Software Loopback Interface 1 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 14...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.7 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.7 276 192.168.1.7 255.255.255.255 On-link 192.168.1.7 276 192.168.1.255 255.255.255.255 On-link 192.168.1.7 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.7 276 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.7 276 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 14 58 ::/0 On-link 1 306 ::1/128 On-link 14 58 2001::/32 On-link 14 306 2001:0:5ef5:73ba:881:20c1:3f57:fef8/128 On-link 14 306 fe80::/64 On-link 14 306 fe80::881:20c1:3f57:fef8/128 On-link 1 306 ff00::/8 On-link 14 306 ff00::/8 On-link =========================================================================== Persistent Routes: None I've set up and debugged a few networks in my life but I'm not really an advanced network user, so I'm not sure what might be wrong. Any ideas? Oh, and pinging this computer from the other computer doesn't work either. EDIT: Adding arp output: C:\Users\Srekel>arp -a Interface: 192.168.1.7 --- 0xd Internet Address Physical Address Type 192.168.1.1 00-1f-33-ef-28-01 dynamic 192.168.1.255 ff-ff-ff-ff-ff-ff static 224.0.0.22 01-00-5e-00-00-16 static 224.0.0.252 01-00-5e-00-00-fc static 239.255.255.250 01-00-5e-7f-ff-fa static 255.255.255.255 ff-ff-ff-ff-ff-ff static Adding ipconfig... C:\Users\Srekel>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : Kakburken4 Primary Dns Suffix . . . . . . . : Node Type . . . . 
. . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : TP-LINK Wireless N Adapter Physical Address. . . . . . . . . : 00-25-86-DF-C6-89 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 192.168.1.7(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : 09 April 2010 23:09:45 Lease Expires . . . . . . . . . . : 10 April 2010 23:09:45 Default Gateway . . . . . . . . . : 192.168.1.1 DHCP Server . . . . . . . . . . . : 192.168.1.1 DNS Servers . . . . . . . . . . . : 192.168.1.1 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter Local Area Connection 2: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller #2 Physical Address. . . . . . . . . : E0-CB-4E-26-B9-84 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Ethernet adapter Local Area Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller Physical Address. . . . . . . . . : E0-CB-4E-26-BE-94 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Tunnel adapter isatap.{74D5C406-894E-4000-8DE7-6AAEBF7C8382}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2 Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Tunnel adapter Teredo Tunneling Pseudo-Interface: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv6 Address. . . . . . . . . . . : 2001:0:5ef5:73ba:881:20c1:3f57:fef8(Preferred) Link-local IPv6 Address . . . . . : fe80::881:20c1:3f57:fef8%14(Preferred) Default Gateway . . . . . . . . . : :: NetBIOS over Tcpip. . . . . . . . : Disabled

    Read the article

  • How to remove the link from the following JavaScript?

    - by murali
    hi i am unable to the remove the link from the keywords which are coming from database.... var googleurl="http://www.google.com/#hl=en&source=hp&q="; function displayResults(keyword, results_array) { // start building the HTML table containing the results var div = "<table>"; // if the searched for keyword is not in the cache then add it to the cache try { // if the array of results is empty display a message if(results_array.length == 0) { div += "<tr><td>No results found for <strong>" + keyword + "</strong></td></tr>"; // set the flag indicating that no results have been found // and reset the counter for results hasResults = false; suggestions = 0; } // display the results else { // resets the index of the currently selected suggestion position = -1; // resets the flag indicating whether the up or down key has been pressed isKeyUpDownPressed = false; /* sets the flag indicating that there are results for the searched for keyword */ hasResults = true; // loop through all the results and generate the HTML list of results for (var i=0; i<results_array.length-1; i++) { // retrieve the current function crtFunction = results_array[i]; // set the string link for the for the current function // to the name of the function crtFunctionLink = crtFunction; // replace the _ with - in the string link while(crtFunctionLink.indexOf("_") !=-1) crtFunctionLink = crtFunctionLink.replace("_","-"); // start building the HTML row that contains the link to the // help page of the current function div += "<tr id='tr" + i + "' onclick='location.href=document.getElementById(\"a" + i + "\").href;' onmouseover='handleOnMouseOver(this);' " + "onmouseout='handleOnMouseOut(this);'>" + "<td align='left'><a id='a" + i + "' href='" + googleurl + crtFunctionLink ; // check to see if the current function name length exceeds the maximum // number of characters that can be displayed for a function name if(crtFunction.length <= suggestionMaxLength) { div += "'>" + crtFunction.substring(0, httpRequestKeyword.length) + "" div += crtFunction.substring(httpRequestKeyword.length, crtFunction.length) + "</a></td></tr>"; } else { // check to see if the length of the current keyword exceeds // the maximum number of characters that can be displayed if(httpRequestKeyword.length < suggestionMaxLength) { div += "'>" + crtFunction.substring(0, httpRequestKeyword.length) + "" div += crtFunction.substring(httpRequestKeyword.length, suggestionMaxLength) + "</a></td></tr>"; } else { div += "'>" + crtFunction.substring(0,suggestionMaxLength) + "</td></tr>" } } } } // end building the HTML table div += "</table>"; var oSuggest = document.getElementById("suggest"); var oScroll = document.getElementById("scroll"); // scroll to the top of the list //oScroll.scrollTop = 1; -- murali commented // update the suggestions list and make it visible oSuggest.innerHTML =div; oScroll.style.visibility = "visible"; // if we had results we apply the type ahead for the current keyword if(results_array.length > 0) autocompleteKeyword(); } catch(e) { } } how to remove href tag from the following snippet... when i remove anything the drop down vanishes.... pls help me how to remove the href from the above snippet
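
    One hedged way to drop the link (a sketch against the row-building code above, not a tested patch): emit the suggestion text in a plain <span> instead of an <a href=...>, and drop the onclick that follows the anchor's href, so selecting a row no longer navigates to Google:

        // Sketch only: build the row without an anchor. The variables used here
        // (div, i, crtFunction, suggestionMaxLength, handleOnMouseOver/Out) are
        // the ones already defined in the snippet above.
        div += "<tr id='tr" + i + "'"
             + " onmouseover='handleOnMouseOver(this);'"
             + " onmouseout='handleOnMouseOut(this);'>"
             + "<td align='left'><span id='a" + i + "'>"
             + crtFunction.substring(0, Math.min(crtFunction.length, suggestionMaxLength))
             + "</span></td></tr>";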

    Read the article

  • Link Error : xxx is already defined in *****.LIB :: What exactly is wrong?

    - by claws
    Problem: I'm trying to use a library named DCMTK, which uses some other external libraries (zlib, libtiff, libpng, libxml2, libiconv). I've downloaded these external libraries (*.LIB and *.h files) from the same website. Now, when I compile the DCMTK library I'm getting link errors (793 errors) like this:

        Error 2 error LNK2005: __encode_pointer already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 3 error LNK2005: __decode_pointer already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 4 error LNK2005: __CrtSetCheckCount already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 5 error LNK2005: __invoke_watson already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 6 error LNK2005: __errno already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 7 error LNK2005: __configthreadlocale already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir
        Error 8 error LNK2005: _exit already defined in MSVCRTD.lib(MSVCR90D.dll) LIBCMTD.lib dcmmkdir

    Documentation: This seems to be a popular error for this library, so there is a FAQ entry addressing the issue (http://forum.dcmtk.org/viewtopic.php?t=35), which says:

        The problem is that the linker tries to combine different, incompatible versions of the Visual C++ runtime library into a single binary. This happens when not all parts of your project and the libraries you link against are generated with the same code generation options in Visual C++. Do not use the /NODEFAULTLIB workaround, because strange software crashes may follow. Fix the problem! DCMTK is by default compiled with the "Multithreaded" or "Multithreaded Debug" code generation option (the latter for Debug mode). Either change the project settings of all of your code to use these code generation options, or change the code generation for all DCMTK modules and re-compile. MFC users beware: DCMTK should be compiled with "Multithreaded DLL" or "Multithreaded DLL Debug" settings if you want to link the libraries into an MFC application.

    Solution to the same problem for others (http://stackoverflow.com/questions/2259697/vscopengl-huge-amount-of-linker-issues-with-release-build-only):

        It seems that your release build is trying to link to something that was built debug. You probably have a broken dependency in your build (or you missed rebuilding something to release by hand if your project is normally built in pieces). More technically, you seem to be linking projects built with different C Run Time library settings, one with "Multi-Threaded", another one with "Multi-Threaded Debug". Adjust the settings for all the projects to use the very same flavour of the library and the issue should go away.

    Questions:

    1. Until now I thought that name mangling was the only thing that could cause linking failures if it wasn't standardized. Now I know there are other things that can have the same effect.
    2. What is going on with the "Debug Mode" (Multi-Threaded Debug) and "Release Mode" (Multi-Threaded) settings? What exactly is happening under the hood? Why exactly does this cause a linking error? I wonder if there are also "Single-Threaded Debug" and "Single-Threaded" settings, which would presumably cause the same thing.
    3. The documentation talks about "Code Generation Options". What code generation options? What are they?
    4. The documentation specifically warns us not to use the /NODEFAULTLIB workaround (for example /NODEFAULTLIB:msvcrt). Why? What troubles would it cause? What exactly is it?
    5. Please explain the last point in the documentation for MFC users, because I'm going to use MFC later in this project. Why should we do it, and what troubles would it cause if I don't?
    6. Anything more you'd like to mention regarding similar errors? I'm very interested in the linker and its problems, so if there are any similar things you can mention them, or at least some keywords.
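
    For reference (not from the question itself), the Visual C++ "code generation options" the FAQ refers to are the compiler's runtime-library switches, set per project under C/C++ -> Code Generation -> Runtime Library in VS 2005; every object and static library linked into one binary should use the same one, which is exactly what the LNK2005 messages above (LIBCMTD.lib vs. MSVCRTD.lib) are complaining about:

        /MT    "Multi-threaded"            static CRT, release   (LIBCMT.lib)
        /MTd   "Multi-threaded Debug"      static CRT, debug     (LIBCMTD.lib)
        /MD    "Multi-threaded DLL"        DLL CRT, release      (MSVCRT.lib)
        /MDd   "Multi-threaded DLL Debug"  DLL CRT, debug        (MSVCRTD.lib)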

    Read the article

  • In Javascript, a function starts a new scope, but we have to be careful that the function must be in

    - by Jian Lin
    In JavaScript, I am sometimes so immersed in the idea that a function creates a new scope that I even think the following anonymous function will create a new scope when it is being defined and assigned to onclick:

        <a href="#" id="link1">ha link 1</a>
        <a href="#" id="link2">ha link 2</a>
        <a href="#" id="link3">ha link 3</a>
        <a href="#" id="link4">ha link 4</a>
        <a href="#" id="link5">ha link 5</a>
        <script type="text/javascript">
        for (i = 1; i <= 5; i++) {
            document.getElementById('link' + i).onclick = function() {
                var x = i;
                alert(x);
                return false;
            }
        }
        </script>

    But in fact the anonymous function creates a new scope only when it is invoked, is that so? So at the time of assignment the x inside the anonymous function is not created, and no new scope is created. When the function is later invoked there is a new scope all right, but the i lives in the outer scope, so x gets its value, which is 6 in every case.

    The following code actually invokes a function and creates a new scope, and that's why x is a new local variable in a brand new scope each time; the handler invoked when a link is clicked uses the x from its own scope:

        <a href="#" id="link1">ha link 1</a>
        <a href="#" id="link2">ha link 2</a>
        <a href="#" id="link3">ha link 3</a>
        <a href="#" id="link4">ha link 4</a>
        <a href="#" id="link5">ha link 5</a>
        <script type="text/javascript">
        for (var i = 1; i <= 5; i++) {
            (function() {
                var x = i;
                document.getElementById('link' + i).onclick = function() {
                    alert(x);
                    return false;
                }
            })(); // invoking it now!
        }
        </script>

    If we take away the var in front of x, then it is a global x, so no local variable x is created in the new scope, and clicking the links therefore alerts the same number: the value of the global x.
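
    For what it's worth, a common variant of the second pattern (a sketch, not from the original post) passes i into the immediately-invoked function as a parameter, which makes the captured value explicit:

        for (var i = 1; i <= 5; i++) {
            (function(x) {                 // x receives the current value of i
                document.getElementById('link' + i).onclick = function() {
                    alert(x);
                    return false;
                };
            })(i);
        }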

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
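
    Putting the mechanics described in the article together, a minimal hedged sketch of what a stub build and its consumers might look like (the library and file names here are invented for illustration; the real ON makefile rules are more involved):

        # Build the stub for libfoo from its mapfile alone; no input objects are
        # read. This mirrors the libc example shown earlier, with -z stub added
        # to an otherwise ordinary link line.
        ld -64 -z stub -o $(STUBROOT)/lib/libfoo.so.1 -G -hlibfoo.so.1 \
            -ztext -zdefs -Mmapfile-vers

        # Consumers then link with -L$(STUBROOT)/lib rather than -L$(ROOT)/lib,
        # so they can be built before the real libfoo.so.1 exists; at runtime
        # they bind to the real library delivered in the proto.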

    Read the article

  • Connection drops while transferring large files to one server on a network

    - by Charlotte
    My company has two sites, each with their own LAN, using site to site VPN tunnel to connect the two sites. When transferring files (especially larger files) from site1 to site2 server1, the file transfer fails. I don't think this can be a VPN issue because transferring the same files to site2 server2 which is on the same network as server1 works fine. Pings to server1 and server2 at site2 from site1 are about the same, mostly 19/20ms with the odd one up to 50ms. As server1 is DB server with a high load I thought the NIC maybe overloaded, but a transfer from site2 server1 to site2 server2 works fine, and that uses the same NIC on server1 as transfers from site1 to site2 server1. The servers are both Windows Server 2003 VMs with VMXNET 3 NICs. Site2 Server1 route print: IPv4 Route Table =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x10003 ...00 50 56 99 28 9b ...... vmxnet3 Ethernet Adapter #2 0x10004 ...00 50 56 99 18 97 ...... vmxnet3 Ethernet Adapter =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.20.10.1 172.20.10.18 10 10.10.10.0 255.255.255.0 10.10.10.70 10.10.10.70 10 10.10.10.70 255.255.255.255 127.0.0.1 127.0.0.1 10 10.255.255.255 255.255.255.255 10.10.10.70 10.10.10.70 10 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 172.20.10.0 255.255.255.0 172.20.10.18 172.20.10.18 10 172.20.10.18 255.255.255.255 127.0.0.1 127.0.0.1 10 172.20.255.255 255.255.255.255 172.20.10.18 172.20.10.18 10 224.0.0.0 240.0.0.0 10.10.10.70 10.10.10.70 10 224.0.0.0 240.0.0.0 172.20.10.18 172.20.10.18 10 255.255.255.255 255.255.255.255 10.10.10.70 10.10.10.70 1 255.255.255.255 255.255.255.255 172.20.10.18 172.20.10.18 1 Default Gateway: 172.20.10.1 =========================================================================== Persistent Routes: None Site2 Server2 route print IPv4 Route Table =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x10003 ...00 50 56 99 15 00 ...... 
vmxnet3 Ethernet Adapter =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.20.10.1 172.20.10.114 10 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 172.20.10.0 255.255.255.0 172.20.10.114 172.20.10.114 10 172.20.10.114 255.255.255.255 127.0.0.1 127.0.0.1 10 172.20.255.255 255.255.255.255 172.20.10.114 172.20.10.114 10 224.0.0.0 240.0.0.0 172.20.10.114 172.20.10.114 10 255.255.255.255 255.255.255.255 172.20.10.114 172.20.10.114 1 Default Gateway: 172.20.10.1 =========================================================================== Persistent Routes: None Site1 Server route print: =========================================================================== Interface List 14...00 50 56 93 00 0b ......vmxnet3 Ethernet Adapter #2 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.168.1 192.168.168.118 261 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.168.0 255.255.255.0 On-link 192.168.168.118 261 192.168.168.118 255.255.255.255 On-link 192.168.168.118 261 192.168.168.255 255.255.255.255 On-link 192.168.168.118 261 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.168.118 261 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.168.118 261 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 192.168.168.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 14 261 fe80::/64 On-link 14 261 fe80::3c6b:996f:ef36:ee76/128 On-link 1 306 ff00::/8 On-link 14 261 ff00::/8 On-link =========================================================================== Persistent Routes: None tracert from site1 to site2 server1: Tracing route to server1 [172.20.10.18] over a maximum of 30 hops: 1 19 ms 19 ms 19 ms server1 [172.20.10.18] Trace complete. tracert from site2 server1 to site1: When this was run it went to the external IP of site2, then to a couple of external ips of the isp, then times out. Can anyone suggest any troubleshooting steps? Thanks, Charlotte.

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice: you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise.

    We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

    The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the ARC case PSARC 2010/312, Link-editor guidance.

    The History Of Unfortunate Link-Editor Defaults

    The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true: in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line options were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

    Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, ELF (the Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added.

    In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.
    We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

    Change ld Defaults

    Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

    New Link-Editor

    One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

    An Option To Make ld Do The Right Things Automatically

    This line of reasoning is best summarized by a CR filed in 2005, entitled: 6239804 make it easier for ld(1) to do what's best. The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

    The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

    The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
    Guidance always offers specific advice, not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

    % ld -Dargs -z guidance=foo,nodefs
    debug:
    debug: Solaris Linkers: 5.11-1.2275
    debug:
    debug: arg[1]  option=-D:  option-argument: args
    debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
    debug: warning: unrecognized -z guidance item: foo

    The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

    -z fatal-warnings | nofatal-warnings
    --fatal-warnings | --no-fatal-warnings

    The -z fatal-warnings and the --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior.

    The -z guidance option is defined as follows:

    -z guidance[=item1,item2,...]

    Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

    It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris.

    The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

    Specify Required Dependencies
    Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

    Do Not Specify Non-Required Dependencies
    Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

    Lazy Loading
    Dependencies should be identified for lazy loading.
    Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

    Direct Bindings
    Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

    Pure Text Segment
    Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor the -z textoff option is encountered. This guidance can be disabled with -z guidance=notext.

    Mapfile Syntax
    All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

    Library Search Path
    Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

    In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

    Example

    The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

        Does not specify all its dependencies
        Specifies dependencies it does not use
        Does not use direct bindings
        Uses a version 1 mapfile
        Contains relocations to the read-only allocable text (not PIC)

    This scenario is sadly very common: many shared objects have one or more of these issues.

    % cat hello.c
    #include <stdio.h>
    #include <unistd.h>

    void hello(void)
    {
            printf("hello user %d\n", getpid());
    }

    % cat mapfile.v1
    # This version 1 mapfile will trigger a guidance message

    % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

    As you can see, the operation completes without error, resulting in a usable object.
    However, turning on guidance reveals a number of things that could be better:

    % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
    ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
    ld: guidance: -z lazyload option recommended before first dependency
    ld: guidance: -B direct or -z direct option recommended before first dependency
    Undefined                       first referenced
     symbol                             in file
    getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
    printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
    ld: warning: symbol referencing errors
    ld: guidance: -z defs option recommended for shared objects
    ld: guidance: removal of unused dependency recommended: libelf.so.1
    warning: Text relocation remains                referenced
        against symbol                  offset      in file
    .rodata1 (section)                  0xa         hello.o
    getpid                              0x4         hello.o
    printf                              0xf         hello.o
    ld: guidance: position independent (PIC) code recommended for shared objects
    ld: guidance: see ld(1) -z guidance for more information

    Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

    % cat mapfile.v2
    # This version 2 mapfile will not trigger a guidance message
    $mapfile_version 2

    % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

    There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

    % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
    ld: guidance: -B direct or -z direct option recommended before first dependency
    ld: guidance: see ld(1) -z guidance for more information

    It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate:

    % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

    Conclusions

    The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

    Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.
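    As a footnote to the example, the combination with -z fatal-warnings is worth spelling out. The two command lines below are hypothetical variants of the ones above (the file names are the same toy example, and only the option spellings come from this article): the first builds the fully corrected object with guidance enforced, while the second explicitly rejects just the direct-bindings advice and keeps every other guidance category fatal.

    % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance -zfatal-warnings
    % cc hello.c -o hello.so -Kpic -G -M mapfile.v2 -lc -zguidance=nodirect -zfatal-warnings

    With -z fatal-warnings added, any guidance message that does appear stops the build rather than scrolling past, which is the behavior described for the core Solaris OS builds.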

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

        Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
        These interdependencies are not obvious to the make utility, and can lead to races.
        They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

        It requires a long list of exceptions to silence our normal unused proto item error checking.
        In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
        Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight: it will be a long term, case by case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures: Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server. Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server. These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so, the second form of error was still possible. 
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure caused by makefile errors:

        Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
        Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
        Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
        Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

    -z assert-deflib=[libname]

        Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library.

        ld ... -z assert-deflib=libc.so ...

        -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better.

    Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, which were all fixed in the same putback. The errors we found in our Makefiles underscore how difficult they can be to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.
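    To make the mechanism a little more concrete, here is a rough sketch of the kind of link line involved. This is not taken from the article: the paths, environment variable names, and the libX11 example are invented for illustration, and only the symlink idea, the -L ordering, and the -z assert-deflib / -z fatal-warnings options come from the text above.

    # Non-OSnet dependency: give it a symlink in the stub proto so that it,
    # too, is found via a -L path rather than via the link-editor defaults.
    % ln -s /usr/lib/libX11.so $STUBPROTO/lib/libX11.so

    # OSnet objects come from the proto and stub proto; any -l library that
    # falls back to /lib or /usr/lib now draws a warning, made fatal here.
    % ld ... -L$PROTO/lib -L$STUBPROTO/lib \
          -z assert-deflib -z fatal-warnings -lc -lX11 ...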

    Read the article

  • Value of links on negative review pages

    - by Sam Healey
    A general assumption with SEO is that more links = higher rankings. What I would like to know is whether Google knows what those links are referring to. I.e. if somebody gives a product a good review on their personal blog and links the review to another company's website (which sells the product), would Google take the review/description link into consideration? Essentially, would Google know that this link refers to a product? So if somebody is looking to buy a product, Google would know to include this page because the previous link said it sells products rather than just having information on products. Then to take this further, does Google know if a link is positive or negative? For example, if somebody creates a post saying "do not visit example.com, example.com is bad because of blah blah blah", would Google know that the link is getting bad feedback, and therefore would it have a negative effect on rankings, or would Google just treat it as another link and give the page better rankings?

    Read the article

  • Guest blogging, reciprocal links & nofollow

    - by sam
    When writing a guest blog post for a site, I link back to myself within the post, which counts as an inbound link. If I then write something on my own blog like "have a look at this post I wrote for _" and make that a link, the links would be reciprocal, correct? Thus cancelling each other out. If I were to make the link back to my article a nofollow link, would I still get the link juice? And if I write a guest blog post and the site later wants to write a guest post on my blog, what's the best way to handle it, since won't these links both cancel each other out and have no effect?

    Read the article

  • What's the best/most efficient way to create a semi-intelligent AI for a tic tac toe game?

    - by Link
    Basically I am attempting to make an efficient/smallish C game of Tic-Tac-Toe. I have implemented everything other than the AI for the computer so far. My squares are basically structs in an array with an assigned value based on the square. For example, s[1].value = 1 means it's an x, and a value of 2 would be an o. My question is: what's the best way to create a semi-decent game-playing AI for my tic-tac-toe game? I don't really want to use minimax, since it's not what I need. So how do I avoid a lot of if statements and make it more efficient? Here is the rest of my code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    struct state{            // game state
        int state;           // 0 is tie, 1 is user loss, 2 is user win, 3 is ongoing game
        int moves;
    };

    struct square{           // one square of the board
        int value;           // 0 is empty, 1 is x, 2 is o (see playGame/checkWinner)
        char sign;           // ' ', 'x' or 'o'
    };

    struct square s[9];                  // the board
    struct state gamestate = {0,0};

    void setUpGame(){                    // set up the game
        int i;
        for(i = 0; i < 9; i++){
            s[i].value = 0;
            s[i].sign = ' ';
        }
        gamestate.moves = 0;
        printf("\nHi user! You're \"x\"! I'm \"o\"! Good Luck :)\n");
    }

    void displayBoard(){                 // display the game board
        printf("\n %c | %c | %c\n", s[6].sign, s[7].sign, s[8].sign);
        printf("-----------\n");
        printf(" %c | %c | %c\n", s[3].sign, s[4].sign, s[5].sign);
        printf("-----------\n");
        printf(" %c | %c | %c\n\n", s[0].sign, s[1].sign, s[2].sign);
    }

    void getHumanMove(){                 // get move from human
        int i;
        while(1){
            printf(">>:");
            char line[255];              // input the move to play
            fgets(line, sizeof(line), stdin);
            while(sscanf(line, "%d", &i) != 1){   // one match of the defined specifier on the input line
                printf("Sorry, that's not a valid move!\n");
                fgets(line, sizeof(line), stdin);
            }
            if(s[i-1].value != 0){printf("Sorry, That moves already been taken!\n\n");continue;}
            break;
        }
        s[i-1].value = 1;
        s[i-1].sign = 'x';
        gamestate.moves++;
    }

    int sum(int x, int y, int z){return(x*y*z);}   // actually the product: 1 for three x's, 8 for three o's

    void getCompMove(){                  // get the move from the computer (not implemented yet)
    }

    void checkWinner(){                  // check the winner
        int i;
        for(i = 6; i < 9; i++){          // check columns
            if((sum(s[i].value,s[i-3].value,s[i-6].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;}
            if((sum(s[i].value,s[i-3].value,s[i-6].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;}
        }
        for(i = 0; i < 7; i+=3){         // check rows
            if((sum(s[i].value,s[i+1].value,s[i+2].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;}
            if((sum(s[i].value,s[i+1].value,s[i+2].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;}
        }
        if((sum(s[0].value,s[4].value,s[8].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;}
        if((sum(s[0].value,s[4].value,s[8].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;}
        if((sum(s[2].value,s[4].value,s[6].value)) == 8){printf("The Winner is o!\n");gamestate.state=1;}
        if((sum(s[2].value,s[4].value,s[6].value)) == 1){printf("The Winner is x!\n");gamestate.state=2;}
    }

    void playGame(){                     // start playing the game
        gamestate.state = 3;             // set up the game state
        srand(time(NULL));
        int temp = (rand()%2) + 1;
        if(temp == 2){                   // if 2, the computer goes first
            temp = (rand()%2) + 1;
            if(temp == 2){
                s[4].value = 2;
                s[4].sign = 'o';
                gamestate.moves++;
            }else{
                s[2].value = 2;
                s[2].sign = 'o';
                gamestate.moves++;
            }
        }
        displayBoard();
        while(gamestate.state == 3){
            if(gamestate.moves < 10)     // stray ';' removed so the condition actually guards the call
                getHumanMove();
            if(gamestate.moves < 10)     // stray ';' removed here as well
                getCompMove();
            checkWinner();
            if(gamestate.state == 3 && gamestate.moves == 9){
                printf("The game is a tie :p\n");
                break;
            }
            displayBoard();
        }
    }

    int main(int argc, const char *argv[]){
        printf("Welcome to Tic Tac Toe\nby The Elite Noob\nEnter 1-9 To play a move, standard numpad\n1 is bottom-left, 9 is top-right\n");
        while(1){                        // while the game is being played
            printf("\nPress 1 to play a new game, or any other number to exit;\n>>:");
            char line[255];              // input whether or not to play the game
            fgets(line, sizeof(line), stdin);
            int choice;                  // user's choice about playing or not
            while(sscanf(line, "%d", &choice) != 1){   // one match of the defined specifier on the input line
                printf("Sorry, that's not a valid option!\n");
                fgets(line, sizeof(line), stdin);
            }
            if(choice == 1){
                setUpGame();             // set up the game
                playGame();              // play a game
            }else {break;}               // exit the application
        }
        printf("\nThank's For playing!\nHave a good Day!\n");
        return 0;
    }

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >