Search Results

Search found 9765 results on 391 pages for 'skill building'.

Page 56/391 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • How to avoid this PDO exception: Cannot execute queries while other unbuffered queries are active

    - by Vittorio Vittori
    Hi, I'd like to print a simple table on my page with 3 columns: building name, tags, and architectural style. If I try to retrieve the list of building names and arch. styles, there is no problem:

        SELECT buildings.name, arch_styles.style_name
        FROM buildings
        INNER JOIN buildings_arch_styles
            ON buildings.id = buildings_arch_styles.building_id
        INNER JOIN arch_styles
            ON arch_styles.id = buildings_arch_styles.arch_style_id
        LIMIT 0, 10

    My problem starts with retrieving the first 5 tags for every building of the query I've just written:

        SELECT DISTINCT name
        FROM tags
        INNER JOIN buildings_tags
            ON buildings_tags.tag_id = tags.id
            AND buildings_tags.building_id = 123
        LIMIT 0, 5

    The query itself works perfectly, but not where I thought to use it:

        <?php
        // pdo connection already active, I'm using mysql
        $pdo_conn->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, true);

        $sql = "SELECT buildings.name, buildings.id, arch_styles.style_name
                FROM buildings
                INNER JOIN buildings_arch_styles
                    ON buildings.id = buildings_arch_styles.building_id
                INNER JOIN arch_styles
                    ON arch_styles.id = buildings_arch_styles.arch_style_id
                LIMIT 0, 10";
        $buildings_stmt = $pdo_conn->prepare($sql);
        $buildings_stmt->execute();
        $buildings = $buildings_stmt->fetchAll(PDO::FETCH_ASSOC);

        $sql = "SELECT DISTINCT name
                FROM tags
                INNER JOIN buildings_tags
                    ON buildings_tags.tag_id = tags.id
                    AND buildings_tags.building_id = :building_id
                LIMIT 0, 5";
        $tags_stmt = $pdo_conn->prepare($sql);

        $html = "<table>"; // I'll use it to print my table
        foreach ($buildings as $building) {
            $name  = $building["name"];
            $style = $building["style_name"];
            $id    = $building["id"];

            $tags_stmt->bindParam(":building_id", $id, PDO::PARAM_INT);
            $tags_stmt->execute(); // the problem is HERE
            $tags = $tags_stmt->fetchAll(PDO::FETCH_ASSOC);

            $html .= "... $name ... $style";
            foreach ($tags as $current_tag) {
                $tag = $current_tag["name"];
                $html .= "... $tag ..."; // suppose this is the area of the table where I print the first 5 tags per building
            }
        }
        $html .= "...</table>";
        print $html;

    I'm not experienced with queries, so I thought of something like this, but it throws the error:

        PHP Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active. Consider using PDOStatement::fetchAll(). Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.

    What can I do to avoid this? Should I change everything and look for a different way to run this kind of query?
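    One pattern that usually avoids error 2014 (a minimal sketch built from the statements above, not from the original thread) is to make sure the first result set is fully fetched and its cursor closed before the next query runs:

        <?php
        $buildings_stmt->execute();
        $buildings = $buildings_stmt->fetchAll(PDO::FETCH_ASSOC);
        $buildings_stmt->closeCursor(); // frees the connection for the next query

        foreach ($buildings as $building) {
            $tags_stmt->execute([":building_id" => $building["id"]]);
            $tags = $tags_stmt->fetchAll(PDO::FETCH_ASSOC);
            $tags_stmt->closeCursor(); // same idea inside the loop
            // ... render the table row here ...
        }

    PDOStatement::closeCursor() exists for exactly this situation; alternatively, setting PDO::MYSQL_ATTR_USE_BUFFERED_QUERY on the connection before preparing the statements has the same effect for MySQL.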

    Read the article

  • Book recommendation for developing ecommerce website in Java

    - by Mirage
    I have seen that there are many books titled:

      - Build Ecommerce Website in PHP
      - Build Shopping Carts in PHP or ASP.NET

    Is there any book which explains, from scratch, how to start building a website in Java using any framework, or with servlets or JSP? Desired topics:

      - Basic forms with logins and registration
      - Building a catalogue system
      - Building a shopping cart
      - Building a newsletters system

    Read the article

  • SQL Server Query

    - by Scott Jackson
    Hi, I'm trying to do some work with my SQL table. I have 2 buildings, with room numbers 1 - 100 in building 1 and 101 - 199 in building 2. I have a location field (which I've just created) and want to run a query to populate it with either 'Building 1' or 'Building 2', depending on which room number is in the 'Room' field. Many thanks for your help. Regards, Scott
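    A minimal sketch of such an update (assuming the table is named Rooms and Room is an integer column; adjust the names to the real schema):

        UPDATE Rooms
        SET Location = CASE
                           WHEN Room BETWEEN 1 AND 100   THEN 'Building 1'
                           WHEN Room BETWEEN 101 AND 199 THEN 'Building 2'
                       END
        WHERE Room BETWEEN 1 AND 199;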

    Read the article

  • How can I create this complicated query?

    - by mTuran
    Hi, I have 3 tables: projects, skills, and project_skills. In the projects table I hold a project's general data. The second table, skills, holds skill id and skill name. I also have a project_skills table which holds the project-skill relationships. Here is the schema of the tables:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ;

    I want to select projects with their related skill names. I think I have to use JOIN but I don't know how. Thanks
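    A sketch of one way to do it with the schema above (GROUP_CONCAT is MySQL-specific; it folds each project's skill names into a single comma-separated column):

        SELECT p.id, p.project_title,
               GROUP_CONCAT(s.name SEPARATOR ', ') AS skill_names
        FROM projects p
        LEFT JOIN project_skills ps ON ps.project_id = p.id
        LEFT JOIN skills s ON s.id = ps.skill_id
        GROUP BY p.id, p.project_title;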

    Read the article

  • Need Help With ASP.NET Custom Route

    - by Jason
    I need to create a custom route to list all the rooms in a given building. So, I want the URL to look something like this:

        /Building/1000/Room

    which would list all the rooms in Building 1000. Is this the correct mapping for the route (to call the IndexByBuilding method in RoomController)?

        routes.MapRoute(
            "RoomsByBuilding",
            "Building/{id}/Room",
            new { controller = "Room", action = "IndexByBuilding", id = "" }
        );
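    For reference, a minimal sketch of the action that route would hit (the repository call is a hypothetical placeholder, not from the question):

        public class RoomController : Controller
        {
            // GET /Building/1000/Room  ->  id = 1000
            public ActionResult IndexByBuilding(int id)
            {
                var rooms = roomRepository.GetByBuilding(id); // hypothetical data access
                return View(rooms);
            }
        }

    The route itself looks right, as long as it is registered before the default "{controller}/{action}/{id}" route, since routes are matched in order.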

    Read the article

  • Django loading mysql data into template correctly

    - by user805981
    I'm new to Django and I'm trying to display a list of buildings, sort them alphabetically, and load it into an HTML document. Is there something that I am not doing correctly? Below is models.py:

        class Class(models.Model):
            building = models.CharField(max_length=20)

            class Meta:
                db_table = u'class'

            def __unicode__(self):
                return self.building

    Below is views.py:

        def index(request):
            buildinglist = Class.objects.all().order_by('building')
            c = {'buildinglist': buildinglist}
            t = loader.get_template('index.html')
            return HttpResponse(t.render(c))

    Below is index.html:

        {% block content %}
        <h3>Buildings:</h3>
        <ul>
        {% for building in buildinglist %}
            <li>
                <a href='www.{% building %}.com'> # ex. www.searstower.com
            </li>
        {% endfor %}
        </ul>
        {% endblock %}

    Can you guys point me in the right direction? Thank you in advance guys! I appreciate your help very much.
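    The template tag is the likely culprit: Django emits variables with {{ ... }}, while {% ... %} is reserved for tags, so {% building %} raises an invalid-block-tag error. A sketch of the corrected template (the URL scheme is kept from the question; {{ building }} renders via the model's __unicode__):

        {% block content %}
        <h3>Buildings:</h3>
        <ul>
        {% for building in buildinglist %}
            <li><a href="http://www.{{ building }}.com">{{ building }}</a></li>  {# e.g. www.searstower.com #}
        {% endfor %}
        </ul>
        {% endblock %}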

    Read the article

  • Good coding style to do case-select in XSLT

    - by Scud
    I want a page to display A, B, C, or D depending on the return value from the XML (1, 2, 3, 4). My approaches are JavaScript or xsl:choose. I want to know which way is better, and why. Can I do this case-select in .cs code (good or bad)? Should I put JavaScript code in XSLT? Can the community please advise? Thanks. Below is the code.

    JavaScript way (this one works):

        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            xmlns:js="urn:custom-javascript">
          <xsl:template match="page">
            <msxsl:script language="JavaScript" implements-prefix="js">
              <![CDATA[
              function translateSkillLevel(level) {
                switch (level) {
                  case 0: return "Level 1";
                  case 1: return "Level 2";
                  case 2: return "Level 3";
                }
                return "unknown";
              }
              ]]>
            </msxsl:script>
            <div id="skill">
              <table border="0" cellpadding="1" cellspacing="1">
                <tr>
                  <th>Level</th>
                </tr>
                <xsl:for-each select="/page/Skill">
                  <tr>
                    <td>
                      <!-- difference here -->
                      <script type="text/javascript">
                        document.write(translateSkillLevel(<xsl:value-of select="@level"/>));
                      </script>
                    </td>
                  </tr>
                </xsl:for-each>
              </table>
            </div>
          </xsl:template>
        </xsl:stylesheet>

    JavaScript way (this one doesn't work; I get an undefined js tag). It is the same stylesheet as above, except for the table cell:

                    <td>
                      <!-- difference here -->
                      <xsl:value-of select="js:translateSkillLevel(string(@level))"/>
                    </td>

    XSLT way:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="page">
            <div id="skill">
              <table border="0" cellpadding="1" cellspacing="1">
                <tr>
                  <th>Level</th>
                </tr>
                <xsl:for-each select="/page/Skill">
                  <tr>
                    <td>
                      <xsl:choose>
                        <xsl:when test="@level = 0">Level 1</xsl:when>
                        <xsl:when test="@level = 1">Level 2</xsl:when>
                        <xsl:when test="@level = 2">Level 3</xsl:when>
                        <xsl:otherwise>unknown</xsl:otherwise>
                      </xsl:choose>
                    </td>
                  </tr>
                </xsl:for-each>
              </table>
            </div>
          </xsl:template>
        </xsl:stylesheet>

    EDIT: I also have some inline JavaScript functions for form submit:

        <input type="submit" onclick="javascript:document.forms[0].submit();return false;"/>

    Read the article

  • Building an HTTP packet in libnet (TCP packet), please help us as soon as possible, we are stuck!

    - by Hila
    We are building a NAT program. We change each packet that comes from our internal subnet, rewriting its source IP address with libnet functions (we catch the packet with libpcap, put it into sniff structures, and build the new packet with libnet). Over TCP, the SYN/ACK packets are good after the change, but when an HTTP GET request comes in, we can see in Wireshark that there is an error on the checksum field. All the other fields are exactly the same as in the original packet. Does anyone know what can cause this problem? The new checksum in other packets is calculated as it should be, but in the HTTP packet it isn't.
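    One thing worth checking (a sketch, assuming libnet 1.1 and values pulled from the packet captured by libpcap; not from the original post): if the checksum argument is left at 0, libnet computes it, but it only covers the HTTP data if the payload and its length are actually passed to libnet_build_tcp():

        #include <libnet.h>

        /* l is an initialized libnet_t*; src_port, dst_port, seq, ack, win,
         * payload and payload_len are assumed to come from the captured packet. */
        libnet_ptag_t tcp = libnet_build_tcp(
            src_port, dst_port,          /* ports from the captured header   */
            seq, ack,                    /* sequence/acknowledgement numbers */
            TH_PUSH | TH_ACK,            /* flags                            */
            win,                         /* window size                      */
            0,                           /* checksum: 0 = let libnet fill it */
            0,                           /* urgent pointer                   */
            LIBNET_TCP_H + payload_len,  /* total TCP packet length          */
            payload, payload_len,        /* the HTTP data must go here       */
            l, 0);

    Also note that a checksum copied verbatim from the original packet will always show as wrong once the source IP changes, because the TCP checksum covers a pseudo-header that includes the source and destination addresses.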

    Read the article

  • Getting the text from a drop-down box

    - by Teifion
    This gets the value of whatever is selected in my dropdown menu:

        document.getElementById('newSkill').value

    I cannot, however, find out what property to go after for the text that's currently displayed by the drop-down menu. I tried "text", then looked at W3Schools, but that didn't have the answer; does anybody here know? For those not sure, here's the HTML for the drop-down box:

        <select name="newSkill" id="newSkill">
            <option value="1">A skill</option>
            <option value="2">Another skill</option>
            <option value="3">Yet another skill</option>
        </select>

    Thanks
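    For reference, a minimal sketch: the displayed text lives on the selected option element, not on the select itself:

        var sel = document.getElementById('newSkill');
        var text = sel.options[sel.selectedIndex].text; // e.g. "A skill"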

    Read the article

  • Spotlight on an office – Utrecht

    - by Maria Sandu
    This time in our monthly topic, we have our spotlight on the brand new Oracle office in Utrecht, the Netherlands. About 35km south-east of Schiphol Airport and centrally located in the Netherlands, Oracle moved into the Facet building in March 2011. Facet is much more than an office building; it creates a work environment that relates to the 'No Limits' philosophy Oracle has in the Netherlands. "No Limits" means the building belongs to everyone. You choose the best place to work, based on the activities of that moment. To point this out, we currently have 1050 people working for Oracle Netherlands, and 623 workplaces. There is virtually no limit to where you can sit in our shiny new offices; we no longer have 'zoning', where departments own specific areas in the building. Even the Managing Director of Oracle Netherlands does not have an office, and he chooses a different working place every day. So make sure you are prepared when he is sitting next to you one day!

    If nobody has a fixed workplace, then you would think that finding a colleague could be tricky. Oracle uses CU ('SeeYou'), which makes all of us easier to locate. Upon entering the building you receive a text stating where the greatest concentration of your buddies is sitting. Our internal messaging service also proves to be very valuable for finding your colleagues.

    The heart of our building is the great RestOrant, with a very busy coffee bar. It offers an informal place for people to meet and is busy all day, not just at lunch time! The O-Bar in the atrium on the ground floor is also a very popular place to meet and drink tea or coffee, and gives a breathtaking introduction to the office for any of our first-time visitors. For a few minutes of relaxation during the working day, there are table tennis facilities and a Wii room on every floor!

    So if you are interested in joining Oracle in the Netherlands or anywhere else in EMEA, please have a look at http://campus.oracle.com for all of our latest vacancies and internships.

    Read the article

  • reinitializing javascript object's properties

    - by Pino
    In my JavaScript drag-and-drop building app, a variety of buildings can be built. The specific characteristics of these are all saved in one object, like:

        var buildings = {
            house:    ['#07DA21', 12, 12, 0, 20],
            bank:     ['#E7DFF2', 16, 16, 0, 3],
            stadium:  ['#000000', 12, 12, 0, 1],
            townhall: ['#2082A8', 20, 8, 0, 1],
            etcetera
        }

    So every building has a number of characteristics, like color, size, and look, which can be accessed as buildings['townhall'][0] (referring to the color). The object changes as the user changes things. When clicking 'reset', however, the whole object should be reset to its initial settings again to start over, but I have no idea how to do that. For normal objects it is something like:

        function building() {}
        var building = new building();
        delete building;
        var building2 = new building();

    You can easily delete and remake it, so the properties are reset. But my object is automatically initialized. Is there a way to turn my object into something that can be deleted and newly created, without making it very complicated, or a better way to store this information?
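    One common answer, sketched with the data from the question (the factory wrapper is the addition): keep the defaults in a function and rebuild the object whenever a reset is needed:

        // Factory returning a fresh copy of the initial settings
        function makeBuildings() {
            return {
                house:    ['#07DA21', 12, 12, 0, 20],
                bank:     ['#E7DFF2', 16, 16, 0, 3],
                stadium:  ['#000000', 12, 12, 0, 1],
                townhall: ['#2082A8', 20, 8, 0, 1]
            };
        }

        var buildings = makeBuildings();  // initial state, edited by the user
        // on reset: throw the edited object away and start over
        buildings = makeBuildings();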

    Read the article

  • JSON to javaScript array

    - by saturn_research
    I'm having a problem handling JSON data within JavaScript, specifically in regards to using the data as an array and accessing and iterating through individual values. The JSON file is structured as follows:

        {
          "head": { "vars": [ "place", "lat", "long", "page" ] },
          "results": {
            "bindings": [
              {
                "place": { "type": "literal", "value": "Building A" },
                "lat":   { "datatype": "http://www.w3.org/2001/XMLSchema#float", "type": "typed-literal", "value": "10.3456" },
                "long":  { "datatype": "http://www.w3.org/2001/XMLSchema#float", "type": "typed-literal", "value": "-1.2345" },
                "page":  { "type": "uri", "value": "http://www.example.com/a.html" }
              },
              {
                "place": { "type": "literal", "value": "Building B" },
                "lat":   { "datatype": "http://www.w3.org/2001/XMLSchema#float", "type": "typed-literal", "value": "11.3456" },
                "long":  { "datatype": "http://www.w3.org/2001/XMLSchema#float", "type": "typed-literal", "value": "-2.2345" },
                "page":  { "type": "uri", "value": "http://www.example.com/b.html" }
              },
              {
                "place": { "type": "literal", "value": "Building C" },
                "lat":   { "datatype": "http://www.w3.org/2001/XMLSchema#float", "type": "typed-literal", "value": "12.3456" },
                "long":  { "datatype": "http://www.w3.org/2001/XMLSchema#float", "type": "typed-literal", "value": "-3.2345" },
                "page":  { "type": "uri", "value": "http://www.example.com/c.html" }
              }
            ]
          }
        }

    I want to be able to convert this into a JavaScript array as follows, in order that I can iterate through it and pull out the values for each location in order:

        var locations = [
            ['Building A', 10.3456, -1.2345, 'http://www.example.com/a.html'],
            ['Building B', 11.3456, -2.2345, 'http://www.example.com/b.html'],
            ['Building C', 12.3456, -3.2345, 'http://www.example.com/c.html']
        ];

    Does anyone have any advice on how to achieve this? I have tried the following, but it picks up the "type" within the JSON, rather than just the value:

        $.each(JSONObject.results.bindings, function(i, object) {
            $.each(object, function(property, object) {
                $.each(object, function(property, value) {
                    value;
                });
            });
        });

    Any help, suggestions, advice or corrections would be greatly appreciated.
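    A minimal sketch of one way to build that array from the structure above (note lat/long arrive as strings, hence the parseFloat):

        var locations = [];
        $.each(JSONObject.results.bindings, function (i, binding) {
            locations.push([
                binding.place.value,
                parseFloat(binding.lat.value),
                parseFloat(binding.long.value),
                binding.page.value
            ]);
        });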

    Read the article

  • My Message to the Software Craftsmanship Group

    - by Liam McLennan
    This is a message I posted to the software craftsmanship group, looking for a week-long pairing / skill-sharing opportunity in the USA.

        I am a journeyman software craftsman, currently living and working in Brisbane, Australia. In April I am going to travel to the US to attend Alt.Net Seattle and Seattle codecamp. In between the two conferences I have five days in which I would like to undertake a craftsmanship mini-apprenticeship, pairing and skill sharing with your company. I do not require any compensation other than the opportunity to assist you and learn from you. Although my conferences are in Seattle, I am happy to travel anywhere in the USA and Canada (excluding Hawaii :) ).

        Things I am good at: .NET web development, javascript, creating software that solves problems

        Things I am learning: Ruby, Rails, javascript

        If you are interested in having me as a visiting craftsman from the 12th to the 16th of April, please reply on this mailing list or contact me directly.

        Liam McLennan

    Now I wait…

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

      - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
      - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
      - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel, because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

      - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
      - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
      - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

      - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
      - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
      - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?
      - It ought to be very fast to build stub objects, as there are no input objects to process.
      - Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
      - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out.

    Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

      - Present the same set of global symbols, with the same ELF versioning, as the real object.
      - Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
      - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
      - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
      - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

      - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
      - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
      - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

      - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
      - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

      - The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern.
      - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.
      - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
      - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens.

    And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

      - An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
      - A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
      - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
      - Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

      - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
      - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
      - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.
      - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
      - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld.

    With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build!

    The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do.

    I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form.

    My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

      - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
      - It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical.
      - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Is using MultiMaps code smell? If so, what alternative data structures fit my needs?

    - by Pureferret
    I'm trying to model nWoD characters for a roleplaying game in a character builder program. The crux is I want to support saving to and loading from YAML documents. One aspect of the characters is their set of skills. Skills are split between exactly three 'types': Mental, Physical, and Social. Each type has a list of skills under it. My YAML looks like this:

        PHYSICAL:
          Athletics: 0
          Brawl: 3
        MENTAL:
          Academics: 2
          Computers

    My initial thought was to use a Multimap of some sort, with the skill type as an Enum and the key to my map, and each Skill an element in the collection that backs the multimap. However, I've been struggling to get the YAML to work. On explaining this to a colleague outside of work, they said this was probably a sign of code smell, and they've never seen it used 'well'. Are multimaps really code smell? If so, what alternative data structures would suit my goals?
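    One alternative that maps cleanly onto that YAML, sketched in Java (the class shape and names are assumptions, not from the question): a plain nested map keyed by an enum, which most YAML mappers can read and write directly:

        import java.util.EnumMap;
        import java.util.LinkedHashMap;
        import java.util.Map;

        public class SkillSheet {
            public enum SkillType { MENTAL, PHYSICAL, SOCIAL }

            // type -> (skill name -> dots); mirrors the YAML nesting exactly
            private final Map<SkillType, Map<String, Integer>> skills =
                    new EnumMap<>(SkillType.class);

            public void setSkill(SkillType type, String name, int dots) {
                skills.computeIfAbsent(type, t -> new LinkedHashMap<>())
                      .put(name, dots);
            }

            public int getSkill(SkillType type, String name) {
                Map<String, Integer> byName = skills.get(type);
                Integer dots = (byName == null) ? null : byName.get(name);
                return (dots == null) ? 0 : dots;
            }
        }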

    Read the article

  • genetic algorithm for leveling/build test

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-to-one) game, where there is leveling, skill points, special attacks, and all the common stuff. Since I never did anything like that, I'm still thinking about the maths behind the level/skill/special balances. So I thought a good way of testing the best/combo builds would be to implement a Genetic Algorithm. It'd be like this:

      1. Generate a big pool of random characters.
      2. Make them fight, and level them up according to their victories (more XP) / losses (less XP).
      3. Mate the winners, crossing their builds, to try to make even better characters.
      4. Add some more random chars, emulating new players.
      5. Repeat the process for some time, or until I find some chars who can beat everyone's butts.

    So I could play with the math and try to find the balance where the top x% of chars would be a mix of various build types. So, is it a good idea, or is there some easier method to do the balancing? PS: I like this also because it sounds fun
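    A minimal sketch of that loop in Python (random_build, fitness, and crossover are hypothetical stand-ins for the game's own character generator, fight simulator, and build-mixing rules):

        import random

        def evolve(pop_size=200, generations=100):
            population = [random_build() for _ in range(pop_size)]
            for _ in range(generations):
                # rank by simulated fight results
                ranked = sorted(population, key=fitness, reverse=True)
                winners = ranked[:pop_size // 2]
                # breed winners, then top up with fresh random "new players"
                children = [crossover(random.choice(winners), random.choice(winners))
                            for _ in range(pop_size // 3)]
                newcomers = [random_build()
                             for _ in range(pop_size - len(winners) - len(children))]
                population = winners + children + newcomers
            return max(population, key=fitness)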

    Read the article

  • Tell me a Story

    - by Geoff N. Hiten
    I recently had a friend ask me to review his resume. He is a very experienced DBA with excellent skills. If I had an opening I would have hired him myself. But not because of the resume. I know his skill set and skill levels, but there is no way his standard resume can convey that. A bare bones list of job titles and skills does not set you apart from your competition, nor does it convey whether you have junior or senior level skills and experience. The solution is to not use the standard format.

    Tell me a story. I want to know what you were responsible for. Describe a tough project and how you saved time/money/personnel on that project. Link your work activity to business value. Drop some technical bits in there since we do work in a technical field, but show me what you can do to add value to my business well above what I would pay you. That will get my attention.

    The resume exists for one primary and one secondary reason. The primary reason is to get the interview. A resume won't get you a job, so don't expect it to. The secondary reason is to give you and the interviewer a starting point for conversations. If I can say "Tell me more about when…" and reference an item from your resume, then that is great for both of us. Of course, you'd better be able to tell me more, both from the technical and the business side, at least if I am hiring a senior or higher level position. As for the junior DBAs, go ahead and tell your story too. Don't worry about how simple or basic your projects or solutions seem. It is how you solved the problem and what you learned that I am looking for. If you learn rapidly and think like a DBA, I can work with that, regardless of your current skill level.

    Read the article

  • What should I do next in my life as a programmer? [closed]

    - by user1769787
    I started working in ASP.NET (MVC) two years ago, in my early days of programming, and I have done work on some web apps. I am not comfortable with C#, but I have working skills in jQuery and front-end development; for the past year I have been doing UI work. Now, can someone suggest what I should do next? Should I learn ASP.NET MVC properly, or should I go for PHP so I can do some WordPress development? The problem is that I have only ever seen small shops use PHP rather than ASP.NET. (I am not currently employed.) Can someone help me decide what to do? I have front-end skills (not in programming), so what is best for me to do?

    Read the article

  • Is programming as a profession in a race to the bottom?

    - by q303
    It seems to me that the programming industry is in a race to the bottom. Consider the practices of:

      - not taking time to implement best practices
      - using other people's code as much as possible (custom code as a liability)
      - using increasingly higher level languages to improve productivity
      - GUI-based development "tools" that greatly simplify "programming" and do not require people to understand the plumbing behind the code

    These things imply to me that we are in a race to becoming like any other office worker. It is in the employer's interest for things to not require skill (easier to replace), and for things to be prebuilt (less project time). My point here is: (a) is there a misalignment between skill and the economic interests of the employer? and (b) if there is, how do you mitigate it to enforce professional standards?

    Read the article

  • [Dear Recruiter] I'm an engineer trapped in a kitten's body.

    - by refuctored
    Aditya -- I am very interested in pursuing the opportunity you've presented to me. Let me assure you, there are very few individuals in Indianapolis with the skill set which I have so passionately trained to acquire. Accompanying my skill set, I do have a few quirks that you'll need to be okay with prior to placing me at a company. Bluntly, I feel like I'm a software engineer trapped in a cute little kitten's body. I find that I am most comfortable going to work with a few stripes and whiskers painted on my face. Coworkers will need to be okay with me grooming myself and making kitten noises whilst I do so. I do occasionally let out a purr now and then, but not loud enough to disrupt anyone. I always throw my arm-hair-balls in the appropriate trash receptacle. Will your company provide a scratching post, or will I need to bring my own? I can bring my own litter box.

    Meow-muh, George

    Read the article

  • How to make the transition from project manager to product manager? [on hold]

    - by E. Topp
    I'm working as project manager / head of software for a small software company, having worked on my own before taking this position. I now want to make the transition from project manager to product manager. My questions:

      - What are the differences between the two positions?
      - What are the pitfalls of carrying project management processes and decision-making habits into a product manager role?
      - What skill set is required for the product manager job?
      - Is the transition easier for a project manager?

    Read the article

  • How to Install Linux on my PC

    - by Holic
    Hi, I need some help installing the drivers for my PC on Ubuntu 10.10, which I just installed. I am a newbie on Ubuntu, but I understand a bit of Windows. I want to try Ubuntu and then maybe change to UBUNTU!!! My hardware:

      - QuadCore Intel Core i7-870, 3266 MHz (24 x 136)
      - Asus P7P55D-E (2 PCI, 3 PCI-E x1, 2 PCI-E x16, 4 DDR3 DIMM, Audio, Gigabit LAN, IEEE-1394)
      - NVIDIA GeForce GTX 480 (1536 MB)
      - nVIDIA HDMI @ nVIDIA GF100 - High Definition Audio Controller
      - VIA VT1828S @ Intel Ibex Peak PCH - High Definition Audio Controller [B-3]
      - DIMM1: G Skill F3-12800CL9-2GBRL 2 GB DDR3-1333 DDR3 SDRAM (8-8-8-22 @ 609 MHz) (7-7-7-20 @ 533 MHz) (6-6-6-17 @ 457 MHz)
      - DIMM3: G Skill F3-12800CL9-2GBRL 2 GB DDR3-1333 DDR3 SDRAM (8-8-8-22 @ 609 MHz) (7-7-7-20 @ 533 MHz) (6-6-6-17 @ 457 MHz)

    My PC is not connected to the internet with a wire (RJ45) but with a wireless LAN Asus WL-167G-V3 (which I also want to install if possible). Anything would help me :) Cheers & thank you!

    Read the article

  • Is it viable to become a contract programmer straight out of college?

    - by M G
    I have a Bachelor of Science in Computer Science and four months of research experience designing and implementing a research project. I realize this is highly dependent on my skill set, which includes C, C++, Java, Python, and SQL. I feel I have an advantage in two ways:

      1. I am young and am not afraid to work overtime. I am willing to take lower pay to gather a client base/experience, and work nights/weekends to get a few projects under my belt.
      2. This may be cliche, but I feel that I can learn new technologies quicker than most. At the very least, I am not a slow study.

    With this being said, is it viable for me to become a contract programmer? Or do I need the 10+ year skill set that most contractors bring to the table?

    Read the article

  • How much does college (e.g. a compsci major) factor into a programmer's resume? [closed]

    - by Brandon
    I was having an argument with a friend who claims that, given roughly equal skill, someone with a college degree from a name school is going to start at a significantly better job (e.g. a higher-end company) for his first job; and because of this, he's also going to be significantly ahead for his second job. Here are my questions:

      - Given equal skill, how much does college factor into a programmer's overall career?
      - If someone has the technical skills to work competently as a programmer, is it worth it for him to go to college first?
      - If the degree is significant, does it matter whether the degree is from an average college or a higher-tier college (e.g. Stanford, MIT)?

    Read the article
