Search Results

Search found 18592 results on 744 pages for 'basic unix commands'.


  • Can't boot into Windows 7 after installing Ubuntu 12.04 (LTS) on an HP 6735s laptop

    - by Dimeji
    I installed Ubuntu 12.04 (LTS) from a USB drive, and now I can't load "Windows 7 (loader) (on /dev/sda1)" from GRUB. When I select that entry, the screen goes black and then returns to GRUB, though I can still boot into 12.04. I installed Boot-Repair, checked "Reinstall GRUB" in the main options tab and "Place boot flag on sda1 (Windows 7)" in the other options tab, and rebooted, but it didn't work. This was my boot-info summary at that point: http://paste.ubuntu.com/1206875/. I also ran the following in the terminal:
      sudo grub-install "(hd0)"
      sudo update-grub
    update-grub reported "Found Windows 7 (loader) on /dev/sda1", but after rebooting I still couldn't boot into Windows 7. This is my boot-info summary after running those two commands: http://pastebin.ubuntu.com/1206955/. Please help me; I'm at my wits' end. Thanks, Dimeji
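    One generic follow-up worth trying (a hedged sketch, not from the original post; os-prober is the helper that update-grub uses to find other operating systems): confirm the boot flag really ended up on the Windows partition, then re-run the probe:

        sudo parted /dev/sda print      # the "boot" flag should be listed on sda1 (Windows)
        sudo apt-get install os-prober
        sudo update-grub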


  • Please Help with ATI Radeon 4250 and Xinerama

    - by Luis Enrique
    I am using a fresh install of Ubuntu 11.10 and I am having problems making the second screen work. I remember I was able to do it before by extending the desktop to a larger virtual size, like 3840 x 1080, but I have completely forgotten how. I now have a Philips 230E monitor (full 1920 x 1080) and a Toshiba TV connected over HDMI, and I want to activate the second monitor to be able to use Xinerama, but I don't know how to go about it. I want to keep the Philips as the primary monitor, since it's the smaller one, and use the TV from time to time to play videos, movies, and presentations. I installed all the additional drivers and don't know what else to do. I would prefer a list of exact commands to copy and paste into the terminal, since that is all I know how to use, LOL. Thank you so much for your help. If you need more details about my motherboard, video card, or CPU, just ask. Thank you. Luis
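    Since the poster asks for exact copy-and-paste commands, here is a minimal sketch using xrandr; the output names DVI-0 and HDMI-0 are assumptions, so substitute whatever xrandr -q actually reports for the Philips and the TV:

        xrandr -q                                         # list connected outputs and their modes
        xrandr --output DVI-0 --mode 1920x1080 --primary  # keep the Philips as the primary display
        xrandr --output HDMI-0 --auto --right-of DVI-0    # extend the desktop onto the TV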


  • Why do [flush-8:16] and [jbd2/sdb2-8] occasionally use 99.99% disk IO?

    - by ændrük
    Approximately twice a week, the entire graphical interface will lock up for about 10-20 seconds without warning while I am doing simple tasks such as browsing the web or writing a paper. When this happens, GUI elements do not respond to mouse or keyboard input, and the System Monitor applet displays 100% IOWait processor usage. Today, I finally happened to have GNOME Terminal already open when the problem started. Despite other applications such as Google Chrome, Firefox, GNOME Do, and GNOME Panel being unresponsive, the terminal was usable. I ran iotop and observed that commands named [flush-8:16] and [jbd2/sdb2-8] were alternately using 99.99% IO. What are these, and how can I prevent them from causing GUI unresponsiveness? Here is dumpe2fs /dev/sdb2, if it's relevant.


  • Six Rubens’ Tubes Combined Into a Fire-Based Music Visualizer [Video]

    - by Jason Fitzpatrick
    The last Rubens’ Tube setup we shared with you was but a simple single tube. This impressive setup is six independent tubes that register distinct frequencies of sound in a musical composition as standing flames. Check out the video to see it in action. Curious about the Rubens’ Tubes? Read more about the phenomenon here. [via Design Boom]


  • Is Oracle Solaris 11 Really Better Than Oracle Solaris 10?

    - by rickramsey
    If you want to be well armed for that debate, study this comparison of the commands and capabilities of each OS before the spittle starts flying: "How Solaris 11 Compares to Solaris 10". For instance, did you know that the command to configure your wireless network in Solaris 11 is not wificonfig, but dladm and ipadm for manual configuration, and netcfg for automatic configuration? Personally, I think the change was made to correct the grievous offense of spelling out "config" in the wificonfig command, instead of sticking to the widely accepted "cfg" convention, but loath as I am to admit it, there may have been additional reasons for the change. This doc was written by the Solaris Documentation Team, and it not only compares the major features and command sequences in Solaris 11 to those in Solaris 10, but it links you to the sections of the documentation that explain them in detail. - Rick
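    For the impatient, the manual flow on Solaris 11 looks roughly like this sketch (the link name net0 and the ESSID are illustrative; check dladm show-link on your own system first):

        dladm scan-wifi net0                  # scan for access points
        dladm connect-wifi -e MyAP net0       # associate with the ESSID "MyAP"
        ipadm create-ip net0                  # create the IP interface
        ipadm create-addr -T dhcp net0/v4     # obtain an address via DHCP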


  • How do I install pepper-flash on Chromium?

    - by user209900
    I have the Chromium web browser on my Lubuntu 13.04 (pre-installed), and I use LXTerminal (also pre-installed) to run commands. I am trying to install Flash Player for Chromium using instructions from this site:
      sudo add-apt-repository ppa:skunk/pepper-flash
    After typing in my password, this worked. Next:
      sudo apt-get update
    I didn't need to type my password again, since I continued in the same terminal, but I got W: and E: fetch file errors.
      sudo apt-get install pepflashplugin-installer
    I continued in the same terminal despite the fetch file errors, and it said pepflashplugin-installer could not be found. Is the last error caused by the fetch file errors, or do I need to download pepflashplugin-installer from somewhere? Or is it something else? I cannot download the Chrome browser, and I'm not looking to use Flash Player in my Firefox web browser (installed using the Lubuntu Software Centre).
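    The errors are almost certainly connected: if apt-get update cannot fetch the PPA's package index, apt never learns that pepflashplugin-installer exists, so the install step reports it as not found. Standard apt commands can confirm this (nothing here is specific to that PPA):

        sudo apt-get update                        # watch for failures mentioning skunk/pepper-flash
        apt-cache policy pepflashplugin-installer  # shows whether apt can see the package at all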


  • Useful utilities - PFE, Notepad++ and XML Notepad 2007

    - by TATWORTH
    All three editors are free to download and use!
    PFE: http://www.lancs.ac.uk/staff/steveb/cpaap/pfe
    XML Notepad 2007: http://www.microsoft.com/downloads/details.aspx?familyid=72d6aa49-787d-4118-ba5f-4f30fe913628&displaylang=en
    Notepad++: http://notepad-plus.sourceforge.net/uk/download.php
    PFE development has stopped and I have included it mainly for reference. It does, however, have the facility to run DOS commands and capture the output, and to run simple macros. XML Notepad 2007 is excellent if you need to extract data values from attributes or elements. Notepad++ has various add-ons available for it; when you download it, be sure to run its internal update function. Needless to say, all three are better than Notepad!


  • How do I run Sim City 4?

    - by Jacob Larson
    I know that it is possible to run Sim City 4 Deluxe, as this page clearly proves: http://appdb.winehq.org/objectManager.php?sClass=version&iId=10515
    I have tried two different commands, and the paths are all right, but when I enter them, I simply get a blank terminal line:
      WINEDEBUG=-all wine C:\\Program\ Files\ \(x86\)\\Maxis\\SimCity\ 4\ Deluxe\\Apps\\SimCity\ 4.exe -d:software -intro:off -CPUCount:1
      env WINEPREFIX="/home/jacob/.wine" wine "C:\Program Files (x86)\Maxis\SimCity 4 Deluxe\Apps\SimCity 4.exe" -intro:off -CPUCount:1
    So, what could be the problem? I'm fairly new, so I'm not really sure what info I need to provide. Thanks.
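    A hedged suggestion (general Wine practice, not something stated on the AppDB page): some games only start when launched from their own directory, so it may help to cd into the install folder inside the Wine prefix first. The path below assumes the default prefix shown in the question:

        cd "$HOME/.wine/drive_c/Program Files (x86)/Maxis/SimCity 4 Deluxe/Apps"
        wine "SimCity 4.exe" -intro:off -CPUCount:1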


  • Monitoring Database disk space

    - by Michael Freidgeim
    The article "Data files: To Autogrow Or Not To Autogrow?" recommends NOT relying on auto-grow, because it causes delays at unplanned times. We should monitor database files (both data and log) and, if they are close to maximum capacity, manually increase their size. However, the article doesn't give references on how to monitor the free space inside databases. I've tried to find out how to do it. It can be done manually by executing sp_spaceused for the database in question, or with sp_SOS (which can be downloaded from http://searchsqlserver.techtarget.com/tip/Find-size-of-SQL-Server-tables-and-other-objects-with-stored-procedure). Alternatively, you can run SQL commands as suggested by Michael Valentine Jones in http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=82359:
      select [FREE_SPACE_MB] = convert(decimal(12,2), round((a.size - fileproperty(a.name,'SpaceUsed'))/128.000, 2))
      from dbo.sysfiles a
    The more useful article "Monitor database file sizes with SQL Server Jobs" describes how to set up such monitoring. Finally, I found the excellent article "Managing Database Data Usage With Custom Space Alerts", which can be followed even by support personnel without much DBA experience.


  • Week in Geek: 50 Million Viruses and More on the Way Edition

    - by Asian Angel
    This week we learned how to backup and copy data between iOS devices, use Linux commands in Windows with Cygwin, boost email writing productivity with Microsoft Word Mail Merge, be more productive in Ubuntu using keyboard shortcuts, restore the FTP service in XBMC, rename downloaded TV shows, access the Android Market in emulation, and more.


  • Bash prompt doesn't print until I interact with console again

    - by durron597
    I don't even know where to begin to diagnose this one. Usually, when a command finishes, the prompt prints itself for the next command. However, that is not happening. It's hard to explain with words, so I'll just use an example:
      User@Machine:~$ cp /mnt/mountname/directory/textfile.txt .
    After waiting several seconds (far too long for this operation on a small file) I press Enter, and see:
      User@Machine:~$ cp /mnt/mountname/directory/textfile.txt .
      User@Machine:~$
      User@Machine:~$
    So clearly the operation had finished, but the prompt didn't display... until I pressed Enter, and then BOTH prompts instantly displayed. This error does not happen with commands like cd.


  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

      4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:
    - We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.
    - Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.
    - To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: it's really hard to get right; it's really easy to get it wrong and never know it, because things build anyway; and even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break, which can be a long time after the change that triggered the breakage, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:
    - To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.
    - The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.
    - By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.
    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon-to-be-inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:
    - The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.
    - If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.
    - In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object?
    - It ought to be very fast to build stub objects, as there are no input objects to process.
    - Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.
    - When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization.
    We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:
    - Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.
    - Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose.
    - For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.
    - If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.
    We imagined the stub library feature working as follows:
    - A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.
    - The extra information needed (function or data, size, and bss details) would be added to the mapfile.
    - When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.
    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

      # DATA(i386) __iob 0x3c0
      # DATA(amd64,sparcv9) __iob 0xa00
      # DATA(sparc) __iob 0x140
      iob;

    A further problem then became clear: if we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:
    - A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.
    - Another perl script, used after both objects have been built, compares the real and stub objects, using data from elfdump, and validates that they present the same linking interface.
    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:
    - The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.
    - The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so will raise barriers to converting existing code.
    - A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.
    - A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.
    At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long-term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not, have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had stub objects in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

      6916788 ld version 2 mapfile syntax
      PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general-purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

      6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human-readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

      % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
          -ztext -zdefs -Bdirect ...
      real        0.019708910
      user        0.010101680
      sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... and so, I backed away, put it down for a few months and did other work... until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high-level view of how Solaris is built:
    - An initially empty directory known as the proto, referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.
    - A top-level setup rule creates the proto area and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.
    - Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.
    - Subsequent passes run lint, and do packaging.
    Given this structure, the additions to use stub objects are:
    - A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    - A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.
    - A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule.
    - The setup rule runs stubinstall over the entire lib subtree as part of its initialization.
    - All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.
    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub-object-related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall-clock build times for a stock ON workspace were essentially identical to the times for my stub-library-enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:
    - Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.
    - It was suggested that we might be I/O bound, and so the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.
    - Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.
    Eventually, a more plausible and obvious reason emerged: we build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so wall-clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

      6993877 ld should produce stub objects
      PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

      7009826 OSnet should use stub objects
      4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.
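    As a footnote to the -z stub description above, here is a hedged sketch of the "same command line, plus one option" idea; the library name, mapfile, and input object are illustrative, and real invocations carry more options (see the libc ptime example earlier):

      # build the real object from its input objects and mapfile
      ld -G -h libfoo.so.1 -M mapfile-vers -o libfoo.so.1 foo.o
      # build the stub: same command line plus -z stub; input objects are ignored
      ld -G -h libfoo.so.1 -M mapfile-vers -z stub -o stubs/libfoo.so.1 foo.o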


  • Exam 71-516: Accessing Data with Microsoft .NET Framework 4

    - by Ricardo Peres
    I had the chance to take the beta version of exam 71-516 today. Here are my thoughts on it: first, I was rather annoyed to discover that I will only know whether I passed about 8 weeks after the beta period expires (July 02), which probably means September. It was a difficult exam, especially since I don't have any practice with some of the new Entity Framework options. The topics covered, from most to least covered, were:
    - Entity Framework (50-50 for POCO/non-POCO)
    - LINQ to SQL
    - WCF Data Services
    - Classic ADO.NET (DataSets, DataTables, DataAdapters, TableAdapters, Connections, and Commands)
    - LINQ to XML
    - Sync Framework (surprise!)
    All added up, I think it was a difficult exam. My advice is that you practice a lot! I will post the result as soon as I know it.


  • Online Multiplayer Game Architecture

    - by Eric
    I am just starting to research online multiplayer game development, and I have a high-level architectural question about how online multiplayer games function. I have server-side and client-side programming experience, and I understand how AJAX-esque transfer protocols operate. What I don't understand yet is how online multiplayer fits into all of that. For example, consider an online multiplayer Tetris game. Would both players have the entire Tetris game built out on their client side and then get pushed "moves" from the other player via some AJAX-esque mechanism, in which case each client would have to be constantly listening via JavaScript for inbound "moves" and update the client appropriately? Or would each client build out the aesthetics and connect to a virtual server running per game, pulling and pushing commands in real time via something like WebSockets? I apologize if this question is too high-level and general, but I couldn't find anything online that offered this high-level of a perspective on the topic.


  • Winetricks fails to find program files directory

    - by EgyptLovesUbuntu
    I installed a fresh copy of Ubuntu 12 desktop, then:
    - Installed WINE from the Ubuntu Software Center.
    - Installed Winetricks from the Ubuntu Software Center.
    When I type the following command in the terminal:
      sudo winetricks dotnet40
    I get this error message:
      wine cmd.exe /c echo '%ProgramFiles%' returned empty string
    If I try the command without sudo:
      winetricks dotnet40
    the output is as follows:
      Executing w_do_call dotnet40
      Executing load_dotnet40
      ------------------------------------------------------
      dotnet40 does not yet fully work or install on wine. Caveat emptor.
      ------------------------------------------------------
      Executing mkdir -p /home/vectoruser/.cache/winetricks/dotnet40
      mkdir: cannot create directory `/home/vectoruser/.cache/winetricks/dotnet40': Permission denied
      ------------------------------------------------------
      Note: command 'mkdir -p /home/vectoruser/.cache/winetricks/dotnet40' returned status 1. Aborting.
      ------------------------------------------------------
    My current user is vectoruser, which I use to log on to Ubuntu. The output of:
      ls -ld /home/vectoruser /home/vectoruser/.cache /home/vectoruser/.cache/winetricks
    gives:
      drwxr-xr-x 32 vectoruser vectoruser 4096 Aug 2 19:26 /home/vectoruser
      drwx------ 19 vectoruser vectoruser 4096 Aug 2 19:25 /home/vectoruser/.cache
      drwxr-xr-x  2 root       root       4096 Aug 2 18:09 /home/vectoruser/.cache/winetricks
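    The ls output points at the likely culprit: /home/vectoruser/.cache/winetricks is owned by root, almost certainly because the earlier sudo run created it, so a normal-user run cannot write there. A hedged fix is to give the directory back to the user and then run winetricks without sudo:

        sudo chown -R vectoruser:vectoruser /home/vectoruser/.cache/winetricks
        winetricks dotnet40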


  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." So the important thing here is "knowledge". Nobody ever said "every piece of code". I'll try to give some examples of misusing the DRY principle.

    Code Repetitions by Coincidence

    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think of some lines of code about which the developer thinks, "I already wrote something similar." Then he takes the original code, puts it into a public method, or even worse into a base class where none existed before, adds some weird arguments and some if or switch statements to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer can't even give a meaningful name, because its contents aren't anything specific; it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. Fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say what'll happen, because you can't understand what the code is actually doing. So better don't touch it anymore. Maintainability just died. Of course, this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing the single common place will have effects anywhere in the system, a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have a system with a weak design. Now you get a system which just can't be maintained anymore.

    So what can you do against it?
    - When making code reusable, always identify the generally reusable parts of it. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there.
    - Explain it to a colleague; if you can't explain it, or the explanation is too complicated, it's probably not worth reusing.
    - If you identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design.
    - Methods always need to do something specific. If you can't give a method a simple and explanatory name, you probably did something weird.
    - If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused.
    If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.

    Requirements Repetitions by Coincidence

    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its own. (Assuming that this similarity is actually by coincidence and not by political membership. There might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or, more likely, what happens if users of one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to find requirements that are repeated by coincidence. Then there is not much you can do against the repetition of the code. What you really should consider is making the coding of the rules as simple as possible. So this independent knowledge, "tax rules in Timbuktu" or wherever, should be as pure as possible, without much overhead and stuff that does not belong to it. That way you can write every independent requirement short and clean.

    DRYing try-catch and using Blocks

    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine a complex exception handling, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it.

    try
    {
        // trivial code here
        using (Thingy thing = new Thingy())
        {
            // trivial, but always different line of code
        }
    }
    catch (FooException foo)
    {
        // trivial foo handling
    }
    catch (BarException bar)
    {
        // trivial bar handling
    }
    catch
    {
        // trivial common handling
    }
    finally
    {
        // trivial finally block
    }

    The key here is that every block is trivial, so there is nothing to just move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of Evil DRYing in this situation:
    - Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: the tight coupling of the formerly independent implementations is the strongest; the readability of the code and the use of a parameter to control the logic also suffer.
    - Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable. The same maintainability problems apply as for any "Code Repetition by Coincidence" situation.
    - Enforce a base class on all the classes where this pattern appears and use the template method pattern. It's the same readability and maintainability problem as above, but additionally complex and tightly coupled because of the base class.
    I would call this "Inheritance by Coincidence", which will not lead to great software design. What can you do against it?
    - Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If that were the case, it wouldn't be a problem at all; I assume it is not such a trivial case.
    - Consider refactoring the error concept to make error handling easier.
    - The last but not worst option is to keep the repetitions. Some patterns of code must be maintained consistently; there is nothing we can do against it, and no reason to make the code unreadable.

    Conclusion

    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons. It requires quite a bit of flexibility and creativity to make code simple and maintainable. It's not a problem of the principle; it is the problem of blindly applying a principle without understanding the problem it should solve. The result is mostly much worse than ignoring the principle.


  • CodePlex Daily Summary for Monday, October 01, 2012

    Popular Releases:
    - D3 Loot Tracker 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.
    - CRM 2011 Visual Ribbon Editor (1.2.1001.1): What's new: ability to hide the currently selected tab; password in the connection file is now being encrypted; Esc and Enter keyboard keys are now supported in all dialogs; extended width of the Prefix and Id fields in the New Button and New Group dialogs; minor UI enhancements for the Hide Button button and for the View XML button and dialog; file/assembly versioning for the executable file. Notes: existing plain-text password will be automatically encrypt...
    - Untangler: First version of the Untangler application available now.
    - SubExtractor Release 1029: Feature: added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options). Feature: allow switching to the Word Spacing dialog directly from the Spell Check dialog. Fix: added more default word spacings for accented characters. Fix: changed the Word Spacing dialog to show all OCR'd characters in the current sub. Fix: removed application focus grab during OCR. Fix: tightened HD subs fuzzy logic to reduce false matches in small characters. Fix: improved Arrow k...
    - MCEBuddy 2.x (MCEBuddy 2.2.18): Recommended download. Changelog for 2.2.18 (32-bit and 64-bit): 1. Added support for checking if ShowAnalyzer has hung and cancelling it. 2. New version of comskip, 0.81.48. 3. Speeding up comskip. 4. Fixed a build bug in 64-bit 2.2.17. 5. Added a new comskip.ini with better commercial detection for international channels and less aggressive behavior; the old one has been retained as comskip_old.ini. 6. Added support for Audio Offset on the Conversion Task page in the GUI (this overrides the profile's AudioDelay when specified).
    - Mugen Injection 3.0: Added a generic version of the fluent syntax. Big changes in the fluent syntax. Added support to resolve parameters: Factory: AnyCustomFunc{T}, AnyCustomFunc{IEnumerable<IInjectionParameters>,T}, AnyCustomFunc{IDictionary<string, object>,T}, AnyCustomFunc{IEnumerable<IInjectionParameters>,IDictionary<string, object>,T}; Lazy: AnyCustomLazy<T>; Binding metadata: ISetting. Changed binding builders, now you can configure them through the string settings. Increased performance. F...
    - Aggravation Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Expression Blend 4 installed.
    - Tube++ 4.0.0.43: Fix parser for downloading and add new default slate gray style.
    - Readable Passphrase Generator (KeePass Plugin 0.7.1): See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes: built against KeePass 2.20.
    - Windows 8 Toolkit - Charts and More (Beta 1.0): The first compiled version of my library.
    - CatchThatException Release 1.0: This is the first release of the CatchThatException library, on 29-9-2012.
    - CardPlay: a Solitaire Framework for .Net (0.3.0): Added 5 games: Blockade, Chessboard, Colorado, Sly Fox, Twenty.
    - UltraFluid Modeling Suite (Beta 1): This release is experimental but contains a lot of features like xml serialization, multiple selection, and clipboard copy/cut/paste on the context menu. There are debug and release (recommended default) versions.
    - PDF.NET (PDF.NET.Ver4.5-OpenSourceCode): PDF.NET Ver4.5 Open Source Code; includes a sample Web application project.
    - Visual Studio Icon Patcher Version 1.5.2: This version contains no new images from v1.5.1. Contains the following improvements: better support for detecting the installed languages; the extract & inject commands won't run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails the Win32 error message is now displayed; error messages use red text; status messages use green text.
    - ZXing.Net 0.9.0.0: On the way to a release 1.0 the API should be stable now with this version. Sync with rev. 2393 of the java version; improved api; better Unity support; Windows RT binaries; Windows CE binaries; new Windows Service demo; new WPF demo. WindowsCE hotfix: fixes an error with ISO8859-1 encoding and scanning of QR-Codes. The hotfix is only needed for the WindowsCE platform.
    - C.B.R. : Comic Book Reader (CBR 0.7): Synthesis since 0.6: ePUB: complete refactoring; added a new dedicated feed viewer for opds streams; PDF conversion: improved with image merge; made all backstage panels scrollable; integrated the new AvalonDock 2 library; support multi-document; library explorer and table of contents are now toolboxes; designer for dynamic books is now mvvm and much better; new BrowserForControl; customized xps viewer to suppress toolbars and bind it to cbr commands; added quick start manual and button ...
    - sosoft (sosoft source): sosoft source, including an alarm clock.
    - Rawr 5.0.0: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): we now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...
    - Coevery - Free CRM (Coevery 1.0.0.26): The zh-CN issue has been solved. We also added a project management module.

    New Projects:
    - AmazonGlacierGUI - GUI Client for Amazon Glacier | WPF: a GUI client for Amazon Glacier, written in .NET 4, C#, WPF, MVVM & RavenDB for storage
    - Athene: UML editor.
    - Basic Helpdesk Application: Develop a simple application to be used in a network environment to aid support staff while working remotely with users to resolve problems on their local PC.
    - CAPTCHA Solver: CSolver is a CAPTCHA solver for simple CAPTCHAs.
    - ComicExt: ComicExt works with CBZ files: decompress them into a folder of the same name and convert them into zip files, and vice versa, from a single folder.
    - Copy Your Table Storage: Tool to permit copying table storage
    - dotCMS: a content management system based on BlogEngine.NET and Ext.NET
    - ihongma??: [description garbled in the source]
    - Liam F: Basic mathematical functions implemented using AVX/AVX2
    - MVC Solution: MVC Solution, including many MVC best practices
    - opengltest: opengl test
    - sample project: Testing...
    - Sériethèque: A small tvshow organizer with viewed option, link to addict7ed.com, rename file, etc. ...
    - SHOIC (SUPER HIGH ORBIT ION CANION): SHOIC (Super High Orbit Ion Canion) is an application derived from LOIC (Low Orbit Ion Canion)
    - SignalRServerTry: signal r try
    - Silverlight Datagrid Plus: Silverlight Datagrid Plus adds basic features to the Silverlight toolkit datagrid such as grouping, filtering, new record, and formatting of the datagrid.
    - Sitecore Modules: This is a home for all the Sitecore Shared Source modules distributed by 5 Limes. Over time we expect this solution to contain several projects.
    - Spiritual Chanting Utility: A quick media loop utility
    - SQL Web Studio Pad: A simple Web-based SQL management IDE for managing database objects
    - Stockmarket Simulation: Stockmarket simulation
    - test_proj: ok test
    - TISSEAN.NET: This project is based on TISEAN 3.0.1, Nonlinear Time Series Analysis, by Rainer Hegger, Holger Kantz, and Thomas Schreiber, with major contributions by Eckehard Olbrich
    - Untangler: Class hierarchy viewer for .Net assembly
    - Valarmathi Hospital: Patient management.
    - Windows 8 App For SharePoint Online: Windows 8 app for SharePoint 2013 Online
    - WPF Anti-DataBinding Anti-DataTemplate Extensions: Offers a pure C# code-behind (no XAML) approach to working with WPF controls, starting with the WPF ListBox.
    - Yandex Positions Parser: [description garbled in the source]
    - YunCMS: A simple CMS for building portal sites.


  • What is wrong with this easy script?

    - by alex
    What is wrong with this easy script? I just want to write a script that changes my directory.
    A. I put the commands below in a file named pathABC in the /home/alex directory:
      #!/bin/sh
      cd /home/alex/Documents/A/B/C
      echo HelloWorld
    B. I also did chmod +x pathABC. In the terminal, while in the /home/alex directory, I run ./pathABC, but the output is just HelloWorld and the current directory remains unchanged. I mean, my directory stays at /home/alex instead of going to /home/alex/Documents/A/B/C. So where is it wrong?
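    For context, a general shell fact that explains the behavior: the script runs in a child process, and a child can never change the working directory of its parent shell, so the cd only affects the script itself. Running the script in the current shell instead gives the expected result:

        . ./pathABC        # or equivalently: source ./pathABC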


  • How can I prevent [flush-8:16] and [jbd2/sdb2-8] from causing GUI unresponsiveness?

    - by ændrük
    Approximately twice a week, the entire graphical interface will lock up for about 10-20 seconds without warning while I am doing simple tasks such as browsing the web or writing a paper. When this happens, GUI elements do not respond to mouse or keyboard input, and the System Monitor applet displays 100% IOWait processor usage. Today, I finally happened to have GNOME Terminal already open when the problem started. Despite other applications such as Google Chrome, Firefox, GNOME Do, and GNOME Panel being unresponsive, the terminal was usable. I ran iotop and observed that commands named [flush-8:16] and [jbd2/sdb2-8] were alternately using 99.99% IO. What are these, and how can I prevent them from causing GUI unresponsiveness? Details:
      $ mount | grep ^/dev
      /dev/sda1 on / type ext4 (rw,noatime,discard,errors=remount-ro,commit=0)
      /dev/sdb2 on /home type ext4 (rw,commit=0)
    /dev/sda is an OCZ-VERTEX2 and /dev/sdb is a WD10EARS. Here is dumpe2fs /dev/sdb2, if it's relevant.
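    Some background that may help frame answers (general Linux facts, not details from the question): flush-8:16 is the kernel writeback thread for the block device with major:minor numbers 8:16, i.e. /dev/sdb, and jbd2/sdb2-8 is the ext4 journaling thread for /dev/sdb2, the /home filesystem here. One hedged mitigation is to make the kernel start background writeback earlier, so less dirty data piles up before a flush; the values below are illustrative, not tuned:

        ls -l /dev/sdb                             # prints "8, 16", matching flush-8:16
        sudo sysctl -w vm.dirty_background_ratio=5
        sudo sysctl -w vm.dirty_ratio=10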


  • Help installing mrcwa and solving problems with f2py in Ubuntu 14.04 LTS

    - by user288160
    I am sorry if this is the wrong section, but I am starting to get desperate; please, someone help me. I need to install the program mrcwa-20080820 (sourceforge.net/projects/mrcwa/) because of a summer project I am involved in. I need to use it together with Anaconda (store.continuum.io/cshop/anaconda/). I have already installed Anaconda and apparently it is working: when I type conda --version I get the expected answer, conda 3.5.2. If I try to import numpy or scipy from python, or simply type f2py, there are no errors. So far so good. But when I try to install the program with sudo python setup.py install I get these errors:
        running install
        running build
        sh: 1: f2py: not found
        cp: cannot stat ‘mrcwaf.so’: No such file or directory
        running build_py
        running install_lib
        running install_egg_info
        Removing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info
        Writing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info
    Obs: I am trying to use Intel Fortran 64-bit and Ubuntu 14.04 LTS. So I checked f2py and tried to compile the "hello world" program (f2py -c -m hello hello.f) from cens.ioc.ee/projects/f2py2e/index.html#usage, and I had some problems there too:
        running build
        running config_cc
        unifing config_cc, config, build_clib, build_ext, build commands --compiler options
        running config_fc
        unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options
        running build_src
        build_src
        building extension "hello" sources
        f2py options: []
        f2py:> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c
        creating /tmp/tmpf8P4Y3/src.linux-x86_64-2.7
        Reading fortran codes...
            Reading file 'hello.f' (format:fix,strict)
        Post-processing...
            Block: hello
            Block: foo
        Post-processing (stage 2)...
        Building modules...
            Building module "hello"...
            Constructing wrapper function "foo"...
              foo(a)
            Wrote C/API module "hello" to file "/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c"
        adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c' to sources.
        adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7' to include_dirs.
        copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7
        copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7
        build_src: building npy-pkg config files
        running build_ext
        customize UnixCCompiler
        customize UnixCCompiler using build_ext
        customize Gnu95FCompiler
        Could not locate executable gfortran
        Could not locate executable f95
        customize IntelFCompiler
        Found executable /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort
        customize LaheyFCompiler
        Could not locate executable lf95
        customize PGroupFCompiler
        Could not locate executable pgfortran
        customize AbsoftFCompiler
        Could not locate executable f90
        Could not locate executable f77
        customize NAGFCompiler
        customize VastFCompiler
        customize CompaqFCompiler
        Could not locate executable fort
        customize IntelItaniumFCompiler
        customize IntelEM64TFCompiler
        customize IntelEM64TFCompiler
        customize IntelEM64TFCompiler using build_ext
        building 'hello' extension
        compiling C sources
        C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC
        creating /tmp/tmpf8P4Y3/tmp
        creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3
        creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7
        compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local/lib/python2.7/site-packages/numpy/core/include -I/home/felipe/anaconda/include/python2.7 -c'
        gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c
        In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0,
                         from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17,
                         from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                         from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13,
                         from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c:17:
        /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
         #warning "Using deprecated NumPy API, disable it by " \
          ^
        gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c
        In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0,
                         from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17,
                         from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                         from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13,
                         from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c:2:
        /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
         #warning "Using deprecated NumPy API, disable it by " \
          ^
        compiling Fortran sources
        Fortran f77 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict
        Fortran f90 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FR -fPIC -xhost -openmp -fp-model strict
        Fortran fix compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict
        compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local/lib/python2.7/site-packages/numpy/core/include -I/home/felipe/anaconda/include/python2.7 -c'
        ifort:f77: hello.f
        /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -shared -shared -nofor_main /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.o /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.o /tmp/tmpf8P4Y3/hello.o -L/home/felipe/anaconda/lib -lpython2.7 -o ./hello.so
        Removing build directory /tmp/tmpf8P4Y3
    Please help me; I am new to Ubuntu and Python. I really need this program, and my advisor is waiting for an answer. Thank you very much, Felipe Oliveira.
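
    Reading the log, f2py is found in a normal shell but not under sudo. A minimal sketch of one way to test that idea (my assumption; the question does not confirm it): sudo normally resets PATH, which would hide Anaconda's f2py, so preserve the caller's PATH for the install command.

        # Sketch: confirm where f2py lives, then keep the current PATH under sudo
        which f2py
        sudo env "PATH=$PATH" python setup.py install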

    Read the article

  • Print to File in Black and White (not in greyscale)

    - by user2413
    Hi all, I have these PDF files of C++ code, and they are colored, which would be cool except that the network printer here is black-and-white: the printed code comes out in various shades of pale grey, which makes it essentially unreadable (especially the comments). I would like everything (text, code, commands, ...) to be printed in the same black color. I've tried fiddling with the printer's properties, but the closest thing I see is the 'level of grey' tab, where I can choose between 'enhanced' and 'normal' (and it makes no difference in my case). I've tried 'print to file', but I don't see any options there to print in black and white. I've tried installing the 'generic CUPS printer', but again no options to print in black and white. Any ideas? (I'm on 10.10.)
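
    A minimal sketch of one possible workaround (my assumption, and the file names are placeholders): pre-convert the colored PDF to grayscale with Ghostscript before printing, which gives the printer uniform grey text to render; it does not by itself force pure black.

        # Sketch: convert a colored PDF to grayscale with Ghostscript
        gs -o code-gray.pdf -sDEVICE=pdfwrite \
           -sColorConversionStrategy=Gray -dProcessColorModel=/DeviceGray \
           code.pdf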

    Read the article

  • How DNS Works [Video]

    - by Jason Fitzpatrick
    Want an easy and visual way to explain DNS to a curious friend or cubemate? This clean and simple short video does a great job highlighting exactly what goes on during a typical DNS request. Last month we explained what DNS is and showed you why you might want to use alternate DNS servers; this short video serves as an excellent visual companion for our article. How DNS Works [YouTube]

    Read the article

  • 10.04 drops to '(initramfs)' prompt on boot

    - by David Yenor
    I'm not sure how to solve this problem; I received the following error upon boot:
        mount: mounting /dev/disk/by-uuid/f60e3ce2-0237-45bb-bf07-581d0090cbc7 on /root failed: Invalid argument
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs) _
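
    A minimal diagnostic sketch from a live CD or USB session (the device name /dev/sda1 is a placeholder; the question does not identify the root partition): confirm that the UUID the bootloader is looking for actually exists, then check the filesystem while it is unmounted.

        # Sketch: compare available partition UUIDs with the one in the error message
        sudo blkid
        # Then check the suspect root partition (placeholder device name):
        sudo fsck -f /dev/sda1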

    Read the article

  • #1045 Cannot log in to the MySQL server

    - by user1198291
    I am totally new to Linux/Ubuntu. I am trying to set up LAMP on my OS; I've installed Apache, PHP, and MySQL with the following commands:
        sudo apt-get install apache2
        sudo apt-get install php5
        sudo apt-get install libapache2-mod-php5
        sudo apt-get install mysql-server libapache2-mod-auth-mysql php5-mysql
        sudo apt-get install phpmyadmin
    Everything works fine except that I cannot log in to MySQL at all (which leads to a phpMyAdmin login failure), getting the errors:
        #1045 Cannot log in to the MySQL server
        Access denied for user 'root'@'localhost' (using password: YES)
    I googled the problem and also tried to reinstall all the installed components, but the same result came up! On Windows I usually modified the MySQL configuration file, but in Ubuntu nothing is the same as on Windows. :) Can anybody help me with this? I really need to set up LAMP. Thanks in advance.
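
    A minimal sketch of one commonly suggested recovery (the package name is an assumption; it depends on the MySQL version installed): re-run the package's configuration step, which prompts for a fresh root password, then test the login from the terminal.

        # Sketch: reset the MySQL root password via the package configuration
        dpkg -l | grep mysql-server          # find the exact server package name
        sudo dpkg-reconfigure mysql-server-5.5
        # Verify the new password before retrying phpMyAdmin:
        mysql -u root -p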

    Read the article

  • ifconfig : Command 'ifconfig' is available in '/sbin/ifconfig'

    - by Sahil Grover
    My question is related to another open question. echo $PATH gives me output like:
        /home/sahil/.rvm/gems/ruby-1.9.3-p125/bin:/home/sahil/.rvm/gems/ruby-1.9.3-p125@global/bin:/home/sahil/.rvm/rubies/ruby-1.9.3-p125/bin:/home/sahil/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/usr/games:/home/sahil/.rvm/bin{}:/home/android-sdks/{}:/home/android-sdks/platform-tools/{}:/home/android-sdks/tools/{}:/home/sahil/android-sdks/tools{}:/home/sahil/android-sdks/tools:/home/sahil/android-sdks/platform-tools/
    But running ifconfig gives me:
        Command 'ifconfig' is available in '/sbin/ifconfig'
        The command could not be located because '/sbin' is not included in the PATH environment variable.
        This is most likely caused by the lack of administrative privileges associated with your user account.
        ifconfig: command not found
    After running the command given in the other question,
        export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
    ifconfig works, but other commands (ruby, rails, rvm) stop working. I'm seeking help on how to resolve this, and also on why it happens.
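
    A minimal sketch of the usual repair (my assumption: the stray '{}' fragments above come from a malformed export line in ~/.bashrc): append new directories to the existing PATH instead of replacing it, so the system defaults (which on Ubuntu include /sbin via /etc/environment) stay visible alongside rvm and the Android SDK tools.

        # Sketch for ~/.bashrc: extend PATH rather than overwrite it
        export PATH="$PATH:/home/sahil/android-sdks/tools:/home/sahil/android-sdks/platform-tools"
        # Reload the file and confirm both worlds still work:
        source ~/.bashrc
        ifconfig && rvm --version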

    Read the article

< Previous Page | 246 247 248 249 250 251 252 253 254 255 256 257  | Next Page >