Search Results

Search found 9413 results on 377 pages for 'built in'.


  • how to fix "BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash) Enter 'help' for a list of built-in commands?"

    - by Joseph
    So I was using Ubuntu when suddenly the whole thing froze up and I had to reboot. From that moment on, the system prompts this little selection menu while starting up:

        GNU GRUB version 1.99~rc1-13ubuntu3
        Ubuntu, with Linux 2.6.38-10-generic
        Ubuntu, with Linux 2.6.38-10-generic (recovery mode)
        Previous Linux versions
        Memory test (memtest86+)
        Memory test (memtest86+, serial console 115200)

    I have chosen all of the available choices, but all I get is another command-line system that reads:

        BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs)

    And honestly I can't do anything with it. Does anyone have any idea of what is going on and how I can get Ubuntu to work again?
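
    A common first recovery step at the (initramfs) prompt is to check the root filesystem. A hedged sketch, assuming the root partition is /dev/sda1 (list partitions with cat /proc/partitions first) and that fsck is available in this initramfs:

        # check and repair the root filesystem, answering yes to fixes
        fsck -y /dev/sda1
        # leave the initramfs shell; boot continues if root is now mountable
        exit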

    Read the article

  • How Easy is it to Code In-Built Videos?

    - by Alan Parker
    First time poster, so please don't bite my head off. Basically, I'm having a site built for me; I don't really know anything about coding, and I'm not too sure I trust my web developer. I recently asked him about adding a feature where I could display built-in videos like the following page - http://www.ejot.co.uk/buildingfasteners.odl - and he quoted me quite a high amount for it. I just wanted to double-check with you guys whether this is a difficult feature to add, and whether it justifies a reasonable amount of money on top of what I'm already paying him. Thanks in advance for your help, Alan

    Read the article

  • Why can we delete some built-in properties of global object?

    - by demix
    I'm reading ES5 these days and find that the [[Configurable]] attribute of some built-in properties of the global object is set to true, which means we can delete these properties. For example, the join method of the Array.prototype object has the attributes {[[Writable]]: true, [[Enumerable]]: false, [[Configurable]]: true}. So we can easily delete the join method for Array like this:

        delete Array.prototype.join;
        alert([1,2,3].join);

    The alert displays undefined in my Chromium 17, Firefox 9, IE 10, even IE 6. In Chrome 15 & Safari 5.1.1 the [[Configurable]] attribute is set to true and the delete result is also true, but the final result is still function(){[native code]}. Seems like this was a bug and Chromium fixed it. I hadn't noticed that before. In my opinion, deleting built-in functions in user code is dangerous, and will bring out many bugs when working with others. So why did ECMAScript make this decision?
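
    A quick way to reproduce this from a terminal, as a sketch assuming Node.js is installed (the same two statements can also be pasted into any browser console):

        # join is deletable because its [[Configurable]] attribute is true
        node -e "delete Array.prototype.join; console.log([1,2,3].join)"
        # prints: undefined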

    Read the article

  • Is it possible to generate Events and Hooks in Lua for any game without built-in support?

    - by pr0tocol
    Does a game have to have built-in functions to accept and run Lua scripts, or can I design events and hooks using Lua for any game I please, akin to the days when C code could be used to hook into the WinAPI using DLLs? The reason I ask is that I am trying to create a background application that will perform events and hooks on a particular game that does not currently support Lua in-game. Brief examples:

    Events: an action executed by the PLAYER is detected. For instance, hitting the Q key will normally make my character use an ability, but with my Lua script running in the background it will also cause a sound to play on my computer (or something).

    Hooks: an action within the GAME is detected. For instance, the game spawns an enemy every minute. When an enemy spawns, the script will detect this and perform an action, for instance playing a sound locally on the computer.

    I would like to do both, but I know that for games like Garry's Mod, the game already has built-in support for running Lua scripts. Is there a way to do either events or hooks using Lua, similarly to how C/C++ can connect to a game using WinAPI DLLs?

    Read the article

  • How to redistribute binary programs built on modern Ubuntu so that they can be executed on any older Linux system?

    - by psihodelia
    I found that if I build any binary on Ubuntu 10.10, it doesn't execute on some older Linuxes. This is because Ubuntu uses a very new C library, called EGLIBC, while most desktop Linux systems use GLIBC. I would like to know whether there is any standard method to redistribute binary programs built on a modern Ubuntu so that they can be executed on any older Linux system. How do I find all the required dependencies (glibc version, dynamic libraries)?
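
    There is no single standard method, but a usual first step is to audit exactly what a binary requires. A sketch, assuming the binary is named ./myprog:

        # list the shared libraries the dynamic linker will try to load
        ldd ./myprog
        # list the versioned glibc symbols required; the binary will only run on
        # systems whose glibc provides the highest GLIBC_x.y version shown
        objdump -T ./myprog | grep -o 'GLIBC_[0-9.]*' | sort -u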

    Read the article

  • "CLR Enabled" is not required to use CLR built-ins

    - by AaronBertrand
    Books Online articles referencing built-in CLR functions (such as FORMAT()) have a remark similar to the following: "FORMAT relies on the presence of the .NET Framework Common Language Runtime (CLR)." A lot of people seem to interpret this as meaning: "You must enable the sp_configure option 'CLR enabled' in order to use FORMAT()." Some then go on and suggest you run code similar to the following before you play with these functions:

        EXEC sp_configure 'show advanced options', 1;
        GO
        RECONFIGURE...(read more)
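
    A quick way to verify the claim, as a hedged sketch assuming sqlcmd and SQL Server 2012 or later (where FORMAT() was introduced):

        # FORMAT() works even while 'clr enabled' reports 0
        sqlcmd -Q "EXEC sp_configure 'clr enabled';"
        sqlcmd -Q "SELECT FORMAT(GETDATE(), 'yyyy-MM-dd');"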

    Read the article

  • How to install PyQt on Mac OS X 10.6.

    - by Jebagnanadas
    Hello all, I'm quite new to Mac OS X. When I tried to install PyQt on Mac OS X after installing Python 3.1, Qt 4.6.2 and SIP 4.10.1, I encounter the following error when I execute the $ python3 configure.py command:

        Determining the layout of your Qt installation...
        This is the GPL version of PyQt 4.7 (licensed under the GNU General Public License) for Python 3.1 on darwin.
        Type '2' to view the GPL v2 license.
        Type '3' to view the GPL v3 license.
        Type 'yes' to accept the terms of the license.
        Type 'no' to decline the terms of the license.
        Do you accept the terms of the license? yes
        Checking to see if the QtGui module should be built...
        Checking to see if the QtHelp module should be built...
        Checking to see if the QtMultimedia module should be built...
        Checking to see if the QtNetwork module should be built...
        Checking to see if the QtOpenGL module should be built...
        Checking to see if the QtScript module should be built...
        Checking to see if the QtScriptTools module should be built...
        Checking to see if the QtSql module should be built...
        Checking to see if the QtSvg module should be built...
        Checking to see if the QtTest module should be built...
        Checking to see if the QtWebKit module should be built...
        Checking to see if the QtXml module should be built...
        Checking to see if the QtXmlPatterns module should be built...
        Checking to see if the phonon module should be built...
        Checking to see if the QtAssistant module should be built...
        Checking to see if the QtDesigner module should be built...
        Qt v4.6.2 free edition is being used.
        Qt is built as a framework.
        SIP 4.10.1 is being used.
        The Qt header files are in /usr/include.
        The shared Qt libraries are in /Library/Frameworks.
        The Qt binaries are in /Developer/Tools/Qt.
        The Qt mkspecs directory is in /usr/local/Qt4.6.
        These PyQt modules will be built: QtCore.
        The PyQt Python package will be installed in /Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages.
        PyQt is being built with generated docstrings.
        PyQt is being built with 'protected' redefined as 'public'.
        The Designer plugin will be installed in /Developer/Applications/Qt/plugins/designer.
        The PyQt .sip files will be installed in /Library/Frameworks/Python.framework/Versions/3.1/share/sip/PyQt4.
        pyuic4, pyrcc4 and pylupdate4 will be installed in /Library/Frameworks/Python.framework/Versions/3.1/bin.
        Generating the C++ source for the QtCore module...
        sip: Usage: sip [-h] [-V] [-a file] [-b file] [-c dir] [-d file] [-e] [-g] [-I dir] [-j #] [-k] [-m file] [-o] [-p module] [-r] [-s suffix] [-t tag] [-w] [-x feature] [-z file] [file]
        Error: Unable to create the C++ code.

    Has anybody here installed PyQt on Mac OS X 10.6.2 successfully? Any help would be much appreciated. Thanks in advance.

    Read the article

  • How to specify VNC port number with Mac OS X built-in VNC client?

    - by Eonil
    Is it possible to specify the VNC port number in the built-in VNC client of Mac OS X? I'm trying to connect to a Xen VPS machine with Finder's built-in VNC client. I used an address like this:

        vnc://server:port

    But it fails because the server uses a non-default port, and Finder's built-in VNC client cannot handle a port number. As far as I know, it treats the number after the colon as a display number, not a port number. Is there a way to specify the port number in the VNC client? Or any workaround for this? (Port forwarding??? I have no idea about it...)
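
    One workaround that sidesteps the display-number parsing entirely is an SSH tunnel. A hedged sketch, assuming you can SSH into the VPS and its VNC server listens on port 5901:

        # map the server's VNC port onto the local default VNC port (5900)
        ssh -N -L 5900:localhost:5901 user@server
        # then, in another terminal or via Finder's Cmd+K:
        open vnc://localhost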

    Read the article

  • How to build a Hackintosh with an originally purchased Mac Lion OS? [closed]

    - by Nabayan
    Has anyone here ever built a Hackintosh themselves? I'm new to this technology and trying to get hold of a Mac system where I can start coding in Xcode. I've tried VMware Player and was able to run the Mac environment, but unfortunately was not able to run Xcode. I have an original Mac OS downloaded a few days ago and want to build a Hackintosh. I haven't come across any suitable information that gives a step-by-step, foolproof procedure to build a Hackintosh yourself. It would be really great if anyone can guide me here. Thanks. Naba

    Read the article

  • How can I disable my laptop's built-in keyboard?

    - by sam
    I have an HP Compaq Presario C700 laptop with Windows 7 installed on it. My laptop's keyboard is not working properly; some keys never work and some keys keep pressing themselves. I've reinstalled the OS, but it didn't solve my problem. I bought an external USB keyboard and it works well, but as some keys on the built-in keyboard activate themselves, I still couldn't work effectively. After searching Google, I tried the following steps to disable the built-in keyboard:

    Disabled the keyboard drivers: this didn't work, because when the system reboots, the driver gets installed again automatically.
    Installed an irrelevant driver for the keyboard: this failed - I couldn't install the driver, and after rebooting it installed the correct driver automatically.

    Can anyone explain how I can temporarily disable my built-in keyboard? I don't want to disconnect it manually (removing the hardware cable).

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    To analyze an object, you have to build it first. This is a classic chicken-and-egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.

    If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    It ought to be very fast to build stub objects, as there are no input objects to process.

    Unlike the real object, stub objects would not actually require any dependencies, and so all of the stubs for the entire system could be built in parallel.

    When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    Present the same set of global symbols, with the same ELF versioning, as the real object.

    Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    A Perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    Another Perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.

    The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.

    A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had it in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

        6916788 ld version 2 mapfile syntax
        PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

        6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

        We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built:

    An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.

    A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.

    The setup rule runs stubinstall over the entire lib subtree as part of its initialization.

    All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.

    It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.

    Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

        6993877 ld should produce stub objects
        PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

        7009826 OSnet should use stub objects
        4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • How to use IBM T42 laptop's built-in Bluetooth?

    - by B. Roland
    Hello! I have an IBM ThinkPad T42 laptop, and I have some trouble with the built-in Bluetooth: under Hardware Drivers there are no drivers for it, and Bluetooth settings show that it has no BT devices. If I plug in a USB Bluetooth adapter, I can easily use Wammu for my mobile backup. I have no setting in the BIOS to enable or disable it ("disable wireless" refers to Wi-Fi, and that is enabled). Some outputs the community asked from me in IRC:

        $ sudo hcitool dev
        Devices:
        $
        $ cat /proc/acpi/ibm/bluetooth
        No file or dir
        $
        $ sudo modprobe bluetooth
        $
        $ rfkill list
        0: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        $

    But they couldn't solve my problem. What can I do?
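
    On ThinkPads the Bluetooth radio is usually toggled through the thinkpad_acpi module, so a hedged first thing to try (the procfs path above only appears once that module is loaded):

        sudo modprobe thinkpad_acpi
        cat /proc/acpi/ibm/bluetooth                     # should now exist
        echo enable | sudo tee /proc/acpi/ibm/bluetooth
        sudo hcitool dev                                 # check whether hci0 appears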

    Read the article

  • Is there any reason lazy initialization couldn't be built into Java?

    - by Renesis
    Since I'm working on a server with absolutely no non-persisted state for users, every User-related object we have is rolled out on every request. Consequently I often find myself doing lazy initialization of properties of objects that may go unused.

        protected EventDispatcher dispatcher = new EventDispatcher();

    Becomes...

        protected EventDispatcher<EventMessage> dispatcher;

        public EventDispatcher<EventMessage> getEventDispatcher() {
            if (dispatcher == null) {
                dispatcher = new EventDispatcher<EventMessage>();
            }
            return dispatcher;
        }

    Is there any reason this couldn't be built into Java?

        protected lazy EventDispatcher dispatcher = new EventDispatcher();

    Read the article

  • How do I EFI boot Zenbook Prime UX31A from built-in card reader?

    - by Richard Ayotte
    How do I EFI-boot a Zenbook Prime UX31A from the built-in card reader? Ubuntu 12.10 64-bit has been copied onto an SD card which I'm trying to boot from. When I power on my Zenbook and press ESC, the SD card isn't an option. There is an "Add boot option" in the BIOS, but it requires a few things that I'm not quite sure of:

    Add boot option - I guess any identifier works here. I've been entering ubuntu.
    Select Filesystem - The only option available is PCI(1F|2)\DevicePath(Type 3, SubType 12)HD(Part1,Sig787e6287-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
    Path for boot option - I've been entering ubuntu:\EFI\BOOT\BOOTx64.EFI

    Then I select Create and the new boot option appears in the boot menu, but it doesn't work.

    Read the article

  • Can I execute an application built with Quickly (python - pygtk) on MS Windows?

    - by lesco
    I am working with Quickly, using Python and PyGTK. I know there is an option for packaging, so I can create a .deb file. This is for Ubuntu. I was reading about PyGTK and it seems that PyGTK runs on MS Windows too. So, can I execute an app built with Quickly on MS Windows? If so, how? Which is the file (.py) that I have to execute on MS Windows? (A Quickly project has many .py files.) Thanks. Ariel

    Read the article

  • Problem: How to display a WordPress RSS feed in a browser that doesn't have a built-in RSS reader?

    - by StephenMeehan
    If I can, I'd rather not use a service like FeedBurner. My setup: I've set up an RSS feed link on a self-hosted WordPress website. Clicking the RSS link in Safari shows the feed, because Safari has a built-in RSS reader. Great. Unfortunately, clicking the same RSS link in Chrome displays the raw XML feed. I know why this happens - Chrome doesn't have a built-in RSS reader. I also assume this will be the same in older versions of Internet Explorer.

    Possible solution? I've noticed http://www.bbc.co.uk/news has a nice solution: Click the RSS feed link (top right of the page) in an RSS-enabled browser (Safari) and it uses the built-in RSS reader to display the feed. Click the same RSS feed link in Chrome (which has no built-in RSS reader) and it displays the feed using what looks like a custom page.

    Is there a way to check if a browser has a built-in RSS reader? How would I provide alternative content (like the BBC site) to a browser that doesn't have an RSS reader installed? Any help on this would be brilliant; thanks for taking the time to read this. Stephen

    Read the article

  • Built-in trackpad and keyboard stopped working on my MacBook when I paired with Bluetooth keyboard and mouse

    - by Daveyjoe
    I paired the Magic Mouse and keyboard from my iMac with my MacBook, and now I can't use the built-in trackpad and keyboard anymore. I've tried deleting them from the Bluetooth preference pane, but that doesn't fix the issue. I've used other Bluetooth mice and keyboards with this MacBook before and not had this problem. My only thought is that possibly it's related to the fact that the peripherals came with the iMac, and my MacBook now thinks that it's an iMac and doesn't have a built-in trackpad/keyboard.

    Read the article

  • Should I scrap my own leaderboard and go for the built-in Facebook one?

    - by Magnus Johansson
    Currently I'm rolling my own score and leaderboard functionality in my FB canvas game. In my game, users can see what score they have; in addition, I have a public leaderboard where everybody can see all scores from all other users. (I also give each user the possibility to set themselves as anonymous in the leaderboard, if desired.) But now I started thinking: why do I have my own leaderboard system? Facebook has this Scores API, and I started playing around with it. It, of course, integrates well with Facebook, with scores and achievements showing up in the ticker and what not. But it seems that I can't let each user see a public leaderboard in much the same way I currently have it, though it does let users see their friends' scores. Let's face it, this is what FB is all about, right? Friends. So the question is: should I scrap my own leaderboard and go for the built-in Facebook one (and skip the public part of it)? My gut feeling says yes, but I wanted to hear what others think.
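
    For a sense of the integration work involved: publishing a score through the Graph API is a single HTTP call. A hedged sketch, with USER_ID and APP_ACCESS_TOKEN as placeholders:

        # post (or raise) a user's score for your app via the Scores API
        curl -X POST "https://graph.facebook.com/USER_ID/scores" \
             -d "score=42" \
             -d "access_token=APP_ACCESS_TOKEN"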

    Read the article

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from Windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system, and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots, with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claim to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible)? From what I've read, it looks like I'll have to write an xorg.conf file, since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigured settings". So, does this mean I can get away with only including the nVidia-specific configuration, and all else will get auto-configured? Or will providing a "Device" section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf, but I'm hesitant to reboot with it in place, suspecting I'll have a screwed-up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
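
    On the last question, there is such a tool: the X server itself can write out a skeleton config covering the hardware it detects. A hedged sketch (it must run while no X session is active):

        # switch to a text console (Ctrl+Alt+F1), stop the display manager, then:
        sudo service gdm stop
        sudo Xorg :1 -configure        # writes xorg.conf.new into root's home
        # merge the nvidia-xconfig output into that file by hand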

    Read the article

  • How to get my laptop to detect SD cards inserted into its built-in card reader?

    - by Candelight
    My laptop has a built-in SD card reader but it cannot detect a card when one is inserted. Here is the result when I type lspci from the terminal:

        00:00.0 Host bridge: ATI Technologies Inc RS480 Host Bridge (rev 10)
        00:01.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
        00:06.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
        00:07.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
        00:12.0 IDE interface: ATI Technologies Inc IXP SB400 Serial ATA Controller (rev 80)
        00:13.0 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller (rev 80)
        00:13.1 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller (rev 80)
        00:13.2 USB Controller: ATI Technologies Inc IXP SB400 USB2 Host Controller (rev 80)
        00:14.0 SMBus: ATI Technologies Inc IXP SB400 SMBus Controller (rev 83)
        00:14.1 IDE interface: ATI Technologies Inc IXP SB400 IDE Controller (rev 80)
        00:14.2 Audio device: ATI Technologies Inc IXP SB4x0 High Definition Audio Controller (rev 01)
        00:14.3 ISA bridge: ATI Technologies Inc IXP SB400 PCI-ISA Bridge (rev 80)
        00:14.4 PCI bridge: ATI Technologies Inc IXP SB400 PCI-PCI Bridge (rev 80)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        01:05.0 VGA compatible controller: ATI Technologies Inc RS482 [Radeon Xpress 200M]
        04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
        05:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
        05:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01)
        05:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
        05:09.0 Network controller: Ralink corp. RT2561/RT61 rev B 802.11g
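
    The lspci output shows the reader is detected at the PCI level (the O2 Micro Integrated MMC/SD controller at 05:04.2), so a hedged first check is whether the SDHCI driver has bound to it:

        lsmod | grep sdhci             # look for sdhci and sdhci_pci
        sudo modprobe sdhci_pci        # load the PCI SD-host driver if missing
        dmesg | tail -20               # watch for mmc0 lines when inserting a card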

    Read the article

  • Built-in card-reader doesn't work. HP Compaq nx6325 notebook

    - by user10940
    I have an HP Compaq nx6325 notebook with a built-in card reader (SD, MS/Pro, MMC, SM, xD) and Ubuntu (10.10) doesn't see it. I've tried to install the driver manually, with these steps (and with this tifmxx driver), but it doesn't work. The compile log:

        $ echo /home/tvera/downloads/cr_install
        /home/tvera/downloads/cr_install
        $ make -C /lib/modules/2.6.35-25-generic/build M=/home/tvera/downloads/cr_install
        make[1]: Entering directory `/usr/src/linux-headers-2.6.35-25-generic'
          CC [M]  /home/tvera/downloads/cr_install/tifm_core.o
        In file included from /home/tvera/downloads/cr_install/tifm_core.c:12:
        /home/tvera/downloads/cr_install/linux/tifm.h:128: error: field ‘cdev’ has incomplete type
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_uevent’:
        /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 1 of ‘add_uevent_var’ from incompatible pointer type
        include/linux/kobject.h:244: note: expected ‘struct kobj_uevent_env *’ but argument is of type ‘char **’
        /home/tvera/downloads/cr_install/tifm_core.c:69: warning: passing argument 2 of ‘add_uevent_var’ makes pointer from integer without a cast
        include/linux/kobject.h:244: note: expected ‘const char *’ but argument is of type ‘int’
        /home/tvera/downloads/cr_install/tifm_core.c: At top level:
        /home/tvera/downloads/cr_install/tifm_core.c:161: warning: initialization from incompatible pointer type
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free’:
        /home/tvera/downloads/cr_install/tifm_core.c:170: warning: type defaults to ‘int’ in declaration of ‘__mptr’
        /home/tvera/downloads/cr_install/tifm_core.c:170: warning: initialization from incompatible pointer type
        /home/tvera/downloads/cr_install/tifm_core.c: At top level:
        /home/tvera/downloads/cr_install/tifm_core.c:177: error: unknown field ‘release’ specified in initializer
        /home/tvera/downloads/cr_install/tifm_core.c:178: warning: initialization from incompatible pointer type
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:190: error: implicit declaration of function ‘class_device_initialize’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_add_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:211: error: ‘BUS_ID_SIZE’ undeclared (first use in this function)
        /home/tvera/downloads/cr_install/tifm_core.c:211: error: (Each undeclared identifier is reported only once
        /home/tvera/downloads/cr_install/tifm_core.c:211: error: for each function it appears in.)
        /home/tvera/downloads/cr_install/tifm_core.c:212: error: implicit declaration of function ‘class_device_add’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_remove_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:237: error: implicit declaration of function ‘class_device_del’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_free_adapter’:
        /home/tvera/downloads/cr_install/tifm_core.c:243: error: implicit declaration of function ‘class_device_put’
        /home/tvera/downloads/cr_install/tifm_core.c: In function ‘tifm_alloc_device’:
        /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘struct device’ has no member named ‘bus_id’
        /home/tvera/downloads/cr_install/tifm_core.c:275: error: ‘BUS_ID_SIZE’ undeclared (first use in this function)
        make[2]: *** [/home/tvera/downloads/cr_install/tifm_core.o] Error 1
        make[1]: *** [_module_/home/tvera/downloads/cr_install] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.35-25-generic'
        make: *** [all] Error 2

    The output of lsusb:

        Bus 001 Device 005: ID 05e3:0702 Genesys Logic, Inc. USB 2.0 IDE Adapter
        Bus 003 Device 003: ID 0458:003a KYE Systems Corp. (Mouse Systems) NetScroll+ Mini Traveler
        Bus 003 Device 002: ID 08ff:2580 AuthenTec, Inc. AES2501 Fingerprint Sensor
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
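
    Those class_device_* and bus_id errors mean the downloaded driver snapshot predates kernel API changes made long before 2.6.35. The tifm driver has been in the mainline kernel for years, so a hedged alternative to compiling it by hand, assuming the reader really is a TI FlashMedia part:

        sudo modprobe tifm_7xx1        # TI FlashMedia host controller
        sudo modprobe tifm_sd          # SD/MMC support on top of it
        dmesg | grep -i tifm           # see whether the adapter is claimed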

    Read the article
