Search Results

Search found 21772 results on 871 pages for 'direct link'.

  • How to link processing power of old computers together?

    - by redIago
    Hey all, I'm sitting on 8 old computers of various sorts that are more or less useless at this point for any other purpose. Is there a way I could link their hardware or processing power together over wifi and use one of them as a central computer? It would be cool to distribute the processing of, say, a video game or an encryption program across the collective machines. Is there any way to do all this? Thanks.

  • How can I set my TP-Link TL-WR1043ND to extend my router/modem range?

    - by Pitto
    I'd like to extend my WiFi coverage, so I've bought the TP-Link TL-WR1043ND and updated its firmware to the latest (wr1043nv1_en_3_13_4_up(110429)), but I can't figure out how to use its WDS function. Reading further on Super User, I understand that both the modem-router (Pirelli Alice Gate) and the TL-WR1043ND need to support WDS. Are there any tricks to achieve the same result (extending my WiFi range), even if it means changing the firmware to DD-WRT or Tomato?

  • Is ASP.net MVC a direct copy of Ruby on Rails concepts?

    - by Greg
    Hi, I've been developing with Ruby on Rails previously. I'm now looking at an ASP.net web app, and I'm evaluating WebForms and MVC. As I look at MVC, it feels as if I'm looking at the result of something a Ruby on Rails developer implemented after being forced to work in MS land. So I'm wondering: was MVC more or less taken directly from Ruby on Rails and its concepts (either intentionally or unintentionally)?

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines, but the build failed on mine, with two problems:

    1. During a simple build, creation of the biggest static library (about 522Mb in debug mode) fails with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
    2. A full solution rebuild creates this library; however, when it comes to linking the library into the main .exe file, devenv.exe spawns link.exe, which consumes about 80Mb of physical memory and 250Mb of virtual memory and then spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where the build succeeds, there is only one link.exe process, which uses all the memory required for linking (about 500Mb physical).

    There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar (Core2Quad processors, 4Gb of RAM, Windows XP SP3), and we are using Visual Studio installed from the same source. I have tried using different RAM and a different CPU, using a dedicated graphics adapter to rule out video memory sharing influencing the build, putting the solution files in a different location, using different editions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except maybe "out of heap space", but I am not sure how I should fight this. The only idea I have left is reinstalling the OS, but I am not sure that it would help, or that the situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

  • databind a DropDownList control with a list of all sub directories that exist in a particular directory

    - by sushant
    I want to databind a DropDownList control with a list of all sub directories that exist in a particular directory on the server. The directory I want to search is in the root of the application. I am fairly new to programming and I'm not sure where to even start. I found this code on a website:

        Dim root As String = "C:\"
        Dim folders() As String = Directory.GetDirectories(root)
        Dim sb As New StringBuilder(2048)
        Dim f As String
        For Each f In folders
            Dim foldername As String = Path.GetFileName(f)
            sb.Append("<option>")
            sb.Append(foldername)
            sb.Append("</option>")
        Next
        Label3.Text = "<select runat=""server"" id=""folderlist"">" & sb.ToString() & "</select>"

    I guess this is VB, but my tool is in ASP, so is there something similar in VBScript that I can use?
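
    For what it's worth, ASP.NET can databind this server-side instead of emitting <option> markup by hand; a minimal code-behind sketch, assuming a DropDownList with ID="FolderList" on the page and Imports System.IO / System.Collections.Generic at the top of the file:

        Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
            If Not IsPostBack Then
                Dim root As String = Server.MapPath("~/")      ' root of the application on disk
                Dim names As New List(Of String)
                ' GetDirectories returns full paths; keep just the folder names
                For Each folder As String In Directory.GetDirectories(root)
                    names.Add(Path.GetFileName(folder))
                Next
                FolderList.DataSource = names
                FolderList.DataBind()
            End If
        End Sub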

  • Is it a good practice to perform direct database access in the code-behind of an ASP.NET page?

    - by patricks418
    Hi, I am an experienced developer but I am new to web application development. Now I am in charge of developing a new web application and I could really use some input from experienced web developers out there. I'd like to understand exactly what experienced web developers do in the code-behind pages. At first I thought it was best to have a rule that all the database access and business logic should be performed in classes external to the code-behind pages. My thought was that only logic necessary for the web form would be performed in the code-behind. I still think that all the business logic should be performed in other classes but I'm beginning to think it would be alright if the code-behind had access to the database to query it directly rather than having to call other classes to receive a dataset or collection back. Any input would be appreciated.
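
    For illustration, one common middle ground keeps the SQL behind a small data-access class so the code-behind stays thin; a minimal sketch (class, table and connection-string names are made up), using System.Data, System.Data.SqlClient and System.Configuration:

        // Hypothetical data-access class; the page never touches a connection.
        public class CustomerRepository
        {
            private readonly string _connectionString;

            public CustomerRepository(string connectionString)
            {
                _connectionString = connectionString;
            }

            public DataTable GetActiveCustomers()
            {
                using (var conn = new SqlConnection(_connectionString))
                using (var adapter = new SqlDataAdapter(
                    "SELECT Id, Name FROM Customers WHERE Active = 1", conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table);   // Fill opens and closes the connection itself
                    return table;
                }
            }
        }

        // Code-behind: UI wiring only, no SQL strings.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                var repo = new CustomerRepository(
                    ConfigurationManager.ConnectionStrings["Main"].ConnectionString);
                CustomerGrid.DataSource = repo.GetActiveCustomers();
                CustomerGrid.DataBind();
            }
        }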

  • Should we be giving the client's management team direct access to our git hub repository so that they can track progress?

    - by SharePoint Newbie
    Hi, we are presently working for a client who is new to working with distributed teams. We have teams spread across India and the UK. Although we have decent project tracking tools (Mingle), would it be a good idea to give the PM at the client access to our GitHub repo? Would this make it easier for them (seeing what the devs are working on, and getting an insight into what the team has been developing)? I agree that not all commit messages would make sense to them, but would this be a good way to boost their confidence in what we are doing? They can already check out our fortnightly releases on our QA and UA environments, but those still lag dev by 5-6 days. Also, is there any reporting for GitHub that makes it easier for PM types to make sense of it all? Thanks
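
    On the reporting question: I can't speak to GitHub's built-in graphs, but a couple of plain git commands already produce PM-friendly summaries; a small sketch, run inside a clone of the repo:

        # Commits per author over the last fortnight, most active first
        git shortlog -s -n --since="2 weeks ago"

        # One line per commit for the same period
        git log --oneline --since="2 weeks ago"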

  • Add Source file link to the default ASP.NET Server Error page?

    - by Max Schilling
    Has anyone ever thought of modifying the default ASP.NET server error page to provide a link BACK to the error source in Visual Studio? Consider the following standard error page in ASP.NET:

        Server Error in '/myproject' Application.
        Invalid object name 'usp_DoSomething'.
        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Invalid object name 'usp_DoSomething'.

        Source Error:
        Line 4323:    cmd.CommandText = "usp_DoSomething";
        Line 4324:
        Line 4325:    using (var dr = cmd.ExecuteReader())
        Line 4326:    {
        Line 4327:        if (dr != null)

        Source File: c:\development\myproject\myproject.components\providers\sql\sqldataprovider.cs    Line: 4325

    When an error like this is generated, the HTML names the source file the error occurs in and the line number. Has anyone ever written, or thought of writing, a mechanism to turn that text into a link back to the error in Visual Studio? I've never seen anything that does it, but it seems like it would be a helluva nice feature, and I think about it in the back of my mind every time an error occurs and I have to go find it in the source manually. It would just be nice to be able to click a link that takes me straight there. Has anyone written, or does anyone know of, any solutions for this? I use Chrome or Firefox as my browsers of choice, but I'd even consider using IE again if someone found a plugin that did this. Thanks, Max

  • Do double forward slashes direct IE to use specific css?

    - by kjh
    I have just found something very weird while developing a website. While trying to get a div element to display across the top of the screen, I noticed that I wasn't achieving the desired result in any browser except for old versions of IE. In order to test some different code, instead of deleting the faulty line I used '//' to comment it out (I'm not really even sure if that works in CSS), but what happened was that the compatible browsers used the uncommented code, while IE used the code marked by '//'. Here is the code:

        #ban-menu-div{
            position:fixed;top:0;
            //position:relative;  //<-- IE keeps the banner with rel pos while the other
            display:block;        //    browsers used fixed
            margin:auto;
            padding:0px;
            width:100%;
            text-align:center;
            background:black;
        }

    So basically, it seems as though // can be used to instruct newer browsers to ignore specific lines of code, and to instruct older versions of IE to use them? If this is common practice, someone please let me know. It sure makes developing for older browsers a hell of a lot easier.

  • Troubleshooting multiple GET variables in PHP

    - by V413HAV
    This may be a very simple question, but I don't know what I'm doing wrong here... To explain it clearly, I've set up a really simple example below:

        <ul>
          <li><a href="test.php?link1=true">Link 1</a></li>
          <li><a href="test.php?link2=true">Link 2</a></li>
          <li><a href="test.php?link3=true">Link 3</a></li>
        </ul>
        <?php
        if(isset($_GET['link1'])) {
            if(($_GET['link1']) == 'true') {
                echo 'This Is Link 1';
            }
        }
        if(isset($_GET['link2'])) {
            if(($_GET['link2']) == 'true') {
                echo 'This Is Link 2';
            }
        }
        if(isset($_GET['link3'])) {
            if(($_GET['link3']) == 'true') {
                echo 'This Is Link 3';
            }
        }
        ?>

    This is the test.php page. I've set 3 different arguments for $_GET and show content accordingly. Everything works perfectly; the only thing I don't understand is how to block this kind of URL. If the user clicks on link 1, the URL will be:

        http://localhost/test.php?link1=true

    and the output of this URL is "This Is Link 1". Now if I change the URL to:

        http://localhost/test.php?link3=true&link2=true&link1=true

    the output I get is "This Is Link 1This Is Link 2This Is Link 3". That is ok here, but it's very annoying if someone types this and sees the forms one below the other. Is there any way I can stop this kind of tampering?
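
    One hedged way to stop it is to stop treating the links as three independent flags and use a single whitelisted parameter instead, so exactly one branch can ever run; a minimal sketch, assuming the links can be changed to test.php?page=link1 and so on:

        <?php
        // Whitelist of values this script will accept
        $allowed = array('link1', 'link2', 'link3');

        $page = isset($_GET['page']) ? $_GET['page'] : 'link1';
        if (!in_array($page, $allowed, true)) {
            $page = 'link1'; // unknown or tampered value: fall back to the default
        }

        // Only one branch runs, no matter what else is in the query string
        switch ($page) {
            case 'link1': echo 'This Is Link 1'; break;
            case 'link2': echo 'This Is Link 2'; break;
            case 'link3': echo 'This Is Link 3'; break;
        }
        ?>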

  • How do I load a DirectX .x 3D model in the iPhone SDK?

    - by Alex
    I have been searching the internet for the last few days trying to figure this out. My goal is to draw a textured and animated .x file exported from a 3D program. I found a tutorial on how to load and draw a .obj file, which I understand, but the tutorial doesn't say how to texture it, and .obj doesn't support animation. The .x file structure is human-readable just like .obj, but I have no clue how to texture it; I might be able to figure out how to animate it, but I would prefer to be instructed on that too. Any help would be GREATLY appreciated.

  • str_replace between two numerical strings

    - by user330840
    Another problem with str_replace: I would like to turn the following $title data into URLs by taking the string between the number at the beginning and the dash (-):

        Chicago's Public Schools - $10.3M
        New Jersey - $3M
        Michigan: Public Health - $1M

    The desired output is:

        chicago-public-school
        new-jersey
        michigan-public-health

    This is the PHP code I am using:

        $title = ucwords(strtolower(strip_tags(str_replace("1: ","",$title))));
        $x=1;
        while($x <= 10) {
            $title = ucwords(strtolower(strip_tags(str_replace("$x: ","",$title))));
            $x++;
        }
        $link = preg_replace('/[<>()!#?:.$%\^&=+~`*&#233;"\']/', '',$title);
        $money = str_replace(" ","-",$link);
        $link = explode(" - ",$link);
        $link = preg_replace(" (\(.*?\))", "", $link[0]);
        $amount = preg_replace(" (\(.*?\))", "", $link[1]);
        $code_entities_match = array('&#39;s','&quot;','!','@','#','$','%','^','&','*','(',')','+','{','}','|',':','"','<','>','?','[',']','',';',"'",',','.','_','/','*','+','~','`','=',' ','---','--','--');
        $code_entities_replace = array('','-','-','','','','-','-','','','','','','','','-','','','','','','','','','','-','','-','-','','','','','','-','-','-','-');
        $link = str_replace($code_entities_match, $code_entities_replace, $link);
        $link = strtolower($link);

    Unfortunately the result I get is:

        -chicagoamp9s-public-school
        2-new-jersey
        3-michigan-public-health

    Does anyone have a better solution for this? Thanks guys! (And the &#39; changed into amp9; I wonder why?)
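
    On the amp9 mystery: the first preg_replace character class contains &#233; written out literally, so the class includes the digits 2 and 3 as well as the characters &, # and ;. A double-encoded &amp;#39;s therefore loses its &, #, 3 and ; and leaves amp9s behind; that is most likely what happened. A simpler approach that sidesteps all of this is to decode entities first and collapse every non-alphanumeric run into a single dash. A minimal sketch, assuming titles always carry the amount after " - ":

        <?php
        function make_slug($title) {
            $title = html_entity_decode($title, ENT_QUOTES); // decode &#39; before stripping
            list($name) = explode(' - ', $title);            // drop the " - $10.3M" tail
            $slug = strtolower($name);
            $slug = preg_replace('/[^a-z0-9]+/', '-', $slug); // collapse the rest to dashes
            return trim($slug, '-');
        }

        echo make_slug("Chicago's Public Schools - \$10.3M"); // chicago-s-public-schools
        echo make_slug("Michigan: Public Health - \$1M");     // michigan-public-health
        ?>

    Note this yields chicago-s-public-schools rather than chicago-public-school; adjust the replacement to taste.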

  • Why would GNU ld link order cause Signal 11 (SEGV) on startup?

    - by Benoit
    We are building a large application in C++ that includes the use of many (static) libraries. We have a problem where the application crashes on startup with a Signal 11, before we even reach main. After much debugging, we have observed that if we explicitly reference an object file so that its link order is early, the program crashes on startup. If the file is referenced later (or not referenced at all), the program does not crash. To be clear, there is NO code invoked directly from this object file. However, as it is C++, there might be static objects that do get constructed (it's a CORBA IDL generated file). We use the -Wl,--start-group ... --end-group arguments to multi-pass link the symbols, since the libraries are interdependent. Here is a representation of what I mean; this is the linker's object file order:

        Order 1      Order 2      Order 3
        foo.o        foo.o        foo.o
        ...          ...          ...
        main.o       main.o       main.o
        crasher.o    libA.o       libA.o
        libA.o       LibB.o       LibB.o
        LibB.o       LibC.o       LibC.o
        LibC.o                    crasher.o

        Results:
        NO CRASH     NO CRASH     CRASH

    Does anyone have an idea why the linkage order has an effect on the crash? It would be nice if we could force crasher.o to link later, but we're really after an explanation. Also, is there a way to force the linker to place crasher.o towards the end? Just to add to the fun, in actuality crasher.o is part of a library in the --start/--end-group.
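
    For what it's worth, these symptoms match the classic C++ static initialization order problem: a constructor that runs before main (for example, one registering CORBA types in the IDL-generated file) touches a static object in another translation unit, and whether that object has been constructed yet typically follows link order. A made-up two-file illustration, not taken from your code:

        // registry.cpp
        #include <vector>
        std::vector<int> g_table;           // constructed during static initialization

        // crasher.cpp
        #include <vector>
        extern std::vector<int> g_table;    // defined in registry.cpp
        struct AutoRegister {
            AutoRegister() { g_table.push_back(1); }  // runs before main()
        };
        static AutoRegister reg;            // if crasher.o is initialized first, this
                                            // touches a not-yet-constructed vector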

  • Mod_rewrite trouble: Want to direct from ?= to a flat link, nothing seems to work.

    - by Davezatch
    I have a site that currently serves results as example.com/index.php?show=foo, and I'd like it to read example.com/show/foo. My understanding is this would make them visible to search engine robots, and it seems a much simpler way to do this than to create a couple hundred html files... I've tried the following .htaccess code:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule ^show/(.*)$ index.php?show=$1 [NC,L]

    No dice. Also tried this, which I found on another Stack Overflow question:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([0-9A-Za-z]+)/?$ /index.php?show=$1 [L]
        </IfModule>

    Any ideas on what I'm missing here?
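
    Two things worth checking, hedged: the second block's pattern ^([0-9A-Za-z]+)/?$ can never match show/foo because the character class contains no slash, and none of this works unless mod_rewrite is loaded and AllowOverride permits .htaccess rewrites for the directory. A combined version to try under those assumptions:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        # leave real files and directories alone
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # /show/foo -> /index.php?show=foo
        RewriteRule ^show/([^/]+)/?$ index.php?show=$1 [NC,L]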

  • In Maven 2, is it possible to specify a mirror for everything, but allow for failover to direct repository access?

    - by Justin Searls
    I understand that part of the appeal of setting up a Maven mirror, such as the following:

        <mirror>
          <id>nexus</id>
          <name>Maven Repository</name>
          <mirrorOf>*</mirrorOf>
          <url>http://server:8081/nexus/content/groups/public</url>
        </mirror>

    ...is that the documentation states, "You can force Maven to use a single repository by having it mirror all repository requests." However, is this also an indication that by having a * mirror set up, each workstation must be forced to go through the mirror? I ask because I would like each workstation to fail over and connect directly to whatever public repositories it knows about in the event that Nexus can't resolve a dependency or plugin. (In a perfect world, each developer has the access necessary to add additional proxy repositories as needed. However, sometimes that access isn't available; sometimes the Nexus server goes down; sometimes it suffers a Java heap error.) Is this "mirror but go ahead and connect directly to public repos" failover configuration possible in Maven 2? Will it be in Maven 3?
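
    As far as I know, Maven 2 has no automatic failover from a matching mirror to the real repository; if the mirror matches and is down, the request simply fails. A hedged workaround is to narrow what the mirror matches, so only some requests go through Nexus and the rest go direct; since Maven 2.0.9, mirrorOf also accepts exclusions such as *,!some-repo. A sketch:

        <mirror>
          <id>nexus</id>
          <name>Maven Repository</name>
          <!-- mirror only central; requests for other repositories go direct -->
          <mirrorOf>central</mirrorOf>
          <url>http://server:8081/nexus/content/groups/public</url>
        </mirror>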

  • Ping "replies" from same computer with 'Destination host unreachable' (no route to other computer)

    - by Srekel
    I've got two computers in a LAN behind a wireless router. One has XP with IP 192.168.1.2; this one has W7 with IP 192.168.1.7. If I try to ping the other one from this computer, I get this:

        C:\Users\Srekel>ping 192.168.1.2

        Pinging 192.168.1.2 with 32 bytes of data:
        Reply from 192.168.1.7: Destination host unreachable.
        Reply from 192.168.1.7: Destination host unreachable.
        Reply from 192.168.1.7: Destination host unreachable.
        Reply from 192.168.1.7: Destination host unreachable.

        Ping statistics for 192.168.1.2:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    Tracert gives the same result:

        C:\Users\Srekel>tracert 192.168.1.2

        Tracing route to 192.168.1.2 over a maximum of 30 hops
          1  Kakburken4 [192.168.1.7]  reports: Destination host unreachable.

        Trace complete.

    Although I can ping and tracert the router without any problems. I have disabled the firewalls on both computers. The router is set to use DHCP (if that matters). Here is the output from "route":

        C:\Users\Srekel>route print
        ===========================================================================
        Interface List
        13...00 25 86 df c6 89 ......TP-LINK Wireless N Adapter
        12...e0 cb 4e 26 b9 84 ......Realtek PCIe GBE Family Controller #2
        11...e0 cb 4e 26 be 94 ......Realtek PCIe GBE Family Controller
         1...........................Software Loopback Interface 1
        16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
        14...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      192.168.1.1      192.168.1.7     20
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
              192.168.1.0    255.255.255.0         On-link       192.168.1.7    276
              192.168.1.7  255.255.255.255         On-link       192.168.1.7    276
            192.168.1.255  255.255.255.255         On-link       192.168.1.7    276
                224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link       192.168.1.7    276
          255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255         On-link       192.168.1.7    276
        ===========================================================================
        Persistent Routes:
          None

        IPv6 Route Table
        ===========================================================================
        Active Routes:
         If Metric Network Destination      Gateway
         14     58 ::/0                     On-link
          1    306 ::1/128                  On-link
         14     58 2001::/32                On-link
         14    306 2001:0:5ef5:73ba:881:20c1:3f57:fef8/128
                                            On-link
         14    306 fe80::/64                On-link
         14    306 fe80::881:20c1:3f57:fef8/128
                                            On-link
          1    306 ff00::/8                 On-link
         14    306 ff00::/8                 On-link
        ===========================================================================
        Persistent Routes:
          None

    I've set up and debugged a few networks in my life, but I'm not really an advanced network user, so I'm not sure what might be wrong. Any ideas? Oh, and pinging this computer from the other computer doesn't work either.

    EDIT: Adding arp output:

        C:\Users\Srekel>arp -a

        Interface: 192.168.1.7 --- 0xd
          Internet Address      Physical Address      Type
          192.168.1.1           00-1f-33-ef-28-01     dynamic
          192.168.1.255         ff-ff-ff-ff-ff-ff     static
          224.0.0.22            01-00-5e-00-00-16     static
          224.0.0.252           01-00-5e-00-00-fc     static
          239.255.255.250       01-00-5e-7f-ff-fa     static
          255.255.255.255       ff-ff-ff-ff-ff-ff     static

    Adding ipconfig...

        C:\Users\Srekel>ipconfig /all

        Windows IP Configuration
           Host Name . . . . . . . . . . . . : Kakburken4
           Primary Dns Suffix  . . . . . . . :
           Node Type . . . . . . . . . . . . : Hybrid
           IP Routing Enabled. . . . . . . . : No
           WINS Proxy Enabled. . . . . . . . : No

        Wireless LAN adapter Wireless Network Connection:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : TP-LINK Wireless N Adapter
           Physical Address. . . . . . . . . : 00-25-86-DF-C6-89
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 192.168.1.7(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Lease Obtained. . . . . . . . . . : 09 April 2010 23:09:45
           Lease Expires . . . . . . . . . . : 10 April 2010 23:09:45
           Default Gateway . . . . . . . . . : 192.168.1.1
           DHCP Server . . . . . . . . . . . : 192.168.1.1
           DNS Servers . . . . . . . . . . . : 192.168.1.1
           NetBIOS over Tcpip. . . . . . . . : Enabled

        Ethernet adapter Local Area Connection 2:
           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller #2
           Physical Address. . . . . . . . . : E0-CB-4E-26-B9-84
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes

        Ethernet adapter Local Area Connection:
           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller
           Physical Address. . . . . . . . . : E0-CB-4E-26-BE-94
           DHCP Enabled. . . . . . . . . . . : Yes
           Autoconfiguration Enabled . . . . : Yes

        Tunnel adapter isatap.{74D5C406-894E-4000-8DE7-6AAEBF7C8382}:
           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Microsoft ISATAP Adapter #2
           Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes

        Tunnel adapter Teredo Tunneling Pseudo-Interface:
           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface
           Physical Address. . . . . . . . . : 00-00-00-00-00-00-00-E0
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv6 Address. . . . . . . . . . . : 2001:0:5ef5:73ba:881:20c1:3f57:fef8(Preferred)
           Link-local IPv6 Address . . . . . : fe80::881:20c1:3f57:fef8%14(Preferred)
           Default Gateway . . . . . . . . . : ::
           NetBIOS over Tcpip. . . . . . . . : Disabled

  • How to remove the link from the following JavaScript?

    - by murali
    Hi, I am unable to remove the link from the keywords, which come from the database. Here is the code:

        var googleurl="http://www.google.com/#hl=en&source=hp&q=";

        function displayResults(keyword, results_array) {
            // start building the HTML table containing the results
            var div = "<table>";
            // if the searched for keyword is not in the cache then add it to the cache
            try {
                // if the array of results is empty display a message
                if(results_array.length == 0) {
                    div += "<tr><td>No results found for <strong>" + keyword + "</strong></td></tr>";
                    // set the flag indicating that no results have been found
                    // and reset the counter for results
                    hasResults = false;
                    suggestions = 0;
                }
                // display the results
                else {
                    // resets the index of the currently selected suggestion
                    position = -1;
                    // resets the flag indicating whether the up or down key has been pressed
                    isKeyUpDownPressed = false;
                    /* sets the flag indicating that there are results for the
                       searched for keyword */
                    hasResults = true;
                    // loop through all the results and generate the HTML list of results
                    for (var i=0; i<results_array.length-1; i++) {
                        // retrieve the current function
                        crtFunction = results_array[i];
                        // set the string link for the current function
                        // to the name of the function
                        crtFunctionLink = crtFunction;
                        // replace the _ with - in the string link
                        while(crtFunctionLink.indexOf("_") != -1)
                            crtFunctionLink = crtFunctionLink.replace("_","-");
                        // start building the HTML row that contains the link to the
                        // help page of the current function
                        div += "<tr id='tr" + i + "' onclick='location.href=document.getElementById(\"a" + i + "\").href;' onmouseover='handleOnMouseOver(this);' "
                             + "onmouseout='handleOnMouseOut(this);'>"
                             + "<td align='left'><a id='a" + i + "' href='" + googleurl + crtFunctionLink;
                        // check to see if the current function name length exceeds the maximum
                        // number of characters that can be displayed for a function name
                        if(crtFunction.length <= suggestionMaxLength) {
                            div += "'>" + crtFunction.substring(0, httpRequestKeyword.length) + "";
                            div += crtFunction.substring(httpRequestKeyword.length, crtFunction.length) + "</a></td></tr>";
                        }
                        else {
                            // check to see if the length of the current keyword exceeds
                            // the maximum number of characters that can be displayed
                            if(httpRequestKeyword.length < suggestionMaxLength) {
                                div += "'>" + crtFunction.substring(0, httpRequestKeyword.length) + "";
                                div += crtFunction.substring(httpRequestKeyword.length, suggestionMaxLength) + "</a></td></tr>";
                            }
                            else {
                                div += "'>" + crtFunction.substring(0, suggestionMaxLength) + "</td></tr>";
                            }
                        }
                    }
                }
                // end building the HTML table
                div += "</table>";
                var oSuggest = document.getElementById("suggest");
                var oScroll = document.getElementById("scroll");
                // scroll to the top of the list
                //oScroll.scrollTop = 1; -- murali commented
                // update the suggestions list and make it visible
                oSuggest.innerHTML = div;
                oScroll.style.visibility = "visible";
                // if we had results we apply the type ahead for the current keyword
                if(results_array.length > 0)
                    autocompleteKeyword();
            }
            catch(e) {
            }
        }

    How can I remove the href tag from this snippet? When I remove anything, the drop-down vanishes. Please help me remove the href from the above snippet.

  • Link Error: xxx is already defined in *****.LIB :: What exactly is wrong?

    - by claws
    Problem: I'm trying to use a library named DCMTK, which uses some other external libraries (zlib, libtiff, libpng, libxml2, libiconv). I've downloaded these external libraries (*.LIB and *.h files) from the same website. Now, when I compile the DCMTK library, I get link errors (793 of them) like this:

        Error 2  error LNK2005: __encode_pointer already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir
        Error 3  error LNK2005: __decode_pointer already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir
        Error 4  error LNK2005: __CrtSetCheckCount already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir
        Error 5  error LNK2005: __invoke_watson already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir
        Error 6  error LNK2005: __errno already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir
        Error 7  error LNK2005: __configthreadlocale already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir
        Error 8  error LNK2005: _exit already defined in MSVCRTD.lib(MSVCR90D.dll)  LIBCMTD.lib  dcmmkdir

    Documentation: this seems to be a popular error for this library, so there is a FAQ entry addressing the issue (http://forum.dcmtk.org/viewtopic.php?t=35), which says:

        The problem is that the linker tries to combine different, incompatible versions of the Visual C++ runtime library into a single binary. This happens when not all parts of your project and the libraries you link against are generated with the same code generation options in Visual C++. Do not use the /NODEFAULTLIB workaround, because strange software crashes may follow. Fix the problem! DCMTK is by default compiled with the "Multithreaded" or "Multithreaded Debug" code generation option (the latter for Debug mode). Either change the project settings of all of your code to use these code generation options, or change the code generation for all DCMTK modules and re-compile. MFC users beware: DCMTK should be compiled with "Multithreaded DLL" or "Multithreaded DLL Debug" settings if you want to link the libraries into an MFC application.

    Solution to the same problem for others: http://stackoverflow.com/questions/2259697/vscopengl-huge-amount-of-linker-issues-with-release-build-only says:

        It seems that your release build is trying to link to something that was built debug. You probably have a broken dependency in your build (or you missed rebuilding something to release by hand, if your project is normally built in pieces). More technically, you seem to be linking projects built with different C Run Time library settings, one with "Multi-Threaded", another one with "Multi-Threaded Debug". Adjust the settings for all the projects to use the very same flavour of the library and the issue should go away.

    Questions: Till now I used to think that name mangling is the only problem that may cause linking failures if it isn't standardized. Just now I learned there are other things that can cause the same effect.

    1. What's up with the "Debug Mode" (Multi-Threaded Debug) and "Release Mode" (Multi-Threaded)? What exactly is happening under the hood? Why exactly does this cause a linking error?
    2. I wonder if there is also something called "Single-Threaded Debug" and "Single-Threaded" which causes the same thing.
    3. The documentation talks about "Code Generation Options". What code generation options? What are they?
    4. The documentation specifically warns us not to use the /NODEFAULTLIB workaround (for example, /NODEFAULTLIB:msvcrt). Why? What trouble would it cause? What exactly is it?
    5. Please explain the last point in the documentation for MFC users, because I'm going to use MFC later in this project. Why should we do it, and what trouble would it cause if I don't?
    6. Anything more you'd like to mention regarding similar errors? I'm very interested in the linker and its problems, so if there are similar things, please mention them, or at least some keywords.
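
    For reference on questions 1-3: the code generation option decides which C runtime library each object file records as its default library, and mixing objects built with different options is exactly what produces these duplicate-symbol errors. The mapping, as I recall it (worth verifying against MSDN), is:

        cl /MT  foo.cpp   ->  LIBCMT.lib    (Multi-threaded, static CRT, release)
        cl /MTd foo.cpp   ->  LIBCMTD.lib   (Multi-threaded Debug, static CRT)
        cl /MD  foo.cpp   ->  MSVCRT.lib    (Multi-threaded DLL CRT, release)
        cl /MDd foo.cpp   ->  MSVCRTD.lib   (Multi-threaded Debug DLL CRT)

    The error list above shows LIBCMTD.lib colliding with MSVCRTD.lib, i.e. static-debug CRT objects being linked together with DLL-debug CRT objects. (The single-threaded variants, /ML and /MLd, existed in older compilers but were dropped in Visual Studio 2005.)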

  • In Javascript, a function starts a new scope, but we have to be careful that the function must be invoked for the new scope to exist

    - by Jian Lin
    In Javascript, I am sometimes so immersed in the idea that a function creates a new scope that I sometimes even think the following anonymous function will create a new scope when it is being defined and assigned to onclick:

        <a href="#" id="link1">ha link 1</a>
        <a href="#" id="link2">ha link 2</a>
        <a href="#" id="link3">ha link 3</a>
        <a href="#" id="link4">ha link 4</a>
        <a href="#" id="link5">ha link 5</a>

        <script type="text/javascript">
        for (i = 1; i <= 5; i++) {
            document.getElementById('link' + i).onclick = function() {
                var x = i;
                alert(x);
                return false;
            }
        }
        </script>

    But in fact, the anonymous function creates a new scope, that's right, but ONLY when it is being invoked. So at definition time the x inside the anonymous function is not created, and no new scope is created. When the function is later invoked, there is a new scope alright, but the i is in the outside scope, the x gets its value from it, and it is all 6 anyway.

    The following code, by contrast, actually invokes a function and creates a new scope each time, and that's why the x is a new local variable in a brand new scope on every iteration, so the invocation of the handler when a link is clicked uses the different x in the different scopes:

        <a href="#" id="link1">ha link 1</a>
        <a href="#" id="link2">ha link 2</a>
        <a href="#" id="link3">ha link 3</a>
        <a href="#" id="link4">ha link 4</a>
        <a href="#" id="link5">ha link 5</a>

        <script type="text/javascript">
        for (var i = 1; i <= 5; i++) {
            (function() {
                var x = i;
                document.getElementById('link' + i).onclick = function() {
                    alert(x);
                    return false;
                }
            })(); // invoking it now!
        }
        </script>

    If we take away the var in front of x, then it is a global x, and so no local variable x is created in the new scope; therefore clicking on the links gets all the same number, which is the value of the global x.

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object while containing no code or data. Stub objects cannot be executed; the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy. It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.

    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other.

    From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    - To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach: a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.

    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    - Present the same set of global symbols, with the same ELF versioning, as the real object.
    - Functions are simple: it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.

    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive one, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stubs in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that
        will be applicable to ON, and which will require all sharable object
        mapfiles to use the new syntax.

    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects.

    By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter: in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss; sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high-level view of how Solaris is built:

    - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top-level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target called stub is added to library Makefiles. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target called stubinstall is added to the Makefiles, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free: no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult.

    At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.

    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
