Search Results

Search found 1751 results on 71 pages for 'builds'.

  • Built local glibc, broke system, how do I ssh without parsing the .bashrc?

    - by Mikhail
    The cluster I am on had really old build tools and I needed to use CUDA 5. I'm a pretty clever dude and I planned on building the necessary tools, so I built a local copy of gcc, binutils, and glibc: everything CUDA 5 could want. All builds finished without error, and I tested gcc and binutils. Everything was wonderful and I built and ran a few of the programs. I set up the LD_LIBRARY_PATH in the .bashrc and logged back in, expecting a productive night ahead. To my horror I realized that everything is dynamically linked. Now I can't run simple commands like ls:

        [ex@uid377 ~]$ ls
        ls: error while loading shared libraries: __vdso_time: invalid mode for dlopen(): Invalid argument

    and I can't run the commands I'd need to fix the problem, like rm or vim! Is there a way for me to ssh in but skip parsing the .bashrc file? Any suggestions are much appreciated. This machine is obviously under-maintained and I don't know when I could get administrator support.
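
    One workaround sketch, hedged: it assumes LD_LIBRARY_PATH only gets set inside ~/.bashrc, so a remote shell that skips the startup files comes up with a clean loader path. The user and host names below are placeholders.

        # start a remote shell without sourcing ~/.bashrc or the profile files
        ssh -t user@cluster /bin/bash --noprofile --norc
        # from that shell, comment out the offending lines
        # (assuming they start with "export LD_LIBRARY_PATH")
        sed -i 's/^export LD_LIBRARY_PATH/#&/' ~/.bashrc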

  • Google Chrome sync: limit for bookmarks & extensions?

    - by Lyubomyr Shaydariv
    Actually, Chrome is my favorite web browser, and one of its most powerful features is synchronizing the actual data into a Google account. Over the last few years I have gained a lot of bookmarks, and from time to time I browse the extensions gallery to find new valuable ones. Really, synchronizing between my work and home PCs freed me from manual sync. But for the recent months I have been experiencing strange glitches. I guess it may be caused by a lot of stored bookmarks (potentially about 3K in estimate, but please don't ask why :)) and extensions (about 130 installed but only 10-15 daily used). I can mention the following strange things:

    - Recently added bookmarks sometimes are not synchronized (e.g. I put a bookmark at work, but it's not guaranteed I can see it that evening), despite about:sync indicating a good sync process.
    - Sometimes recently modified bookmarks appear in either (let's call them) last-at-home or last-at-work bookmark folders.
    - Sometimes bookmarks are not synced at all. (Moreover, Chromium versions may even crash.)
    - Extensions are not synced now at all.
    - Perhaps there's another reason, but Google Mail Checker and Google Reader Notifier do not show indicators of incoming e-mails and news.

    I'm not sure, but it looks like I might exceed Chrome's internal sync limits... Is that right? Are there any workarounds, or should I make a massive bookmarks/extensions cleanup (I really don't want to :()? I mostly use Google Chrome Canary builds, and my current one is 12.0.732.0. Thanks in advance.

    Update #1 (2011-04-19): I removed about 50 extensions that I'm not interested in (or that I consider trash), and got some decent results:

    - The extensions count is below 100 (exactly 97);
    - The chrome://extensions page does not get slow (or even frozen) any more on enabling/disabling/uninstalling extensions;
    - The extensions seem to be synchronized again.

  • Merge changes in Microsoft Word documents

    - by Álvaro G. Vicario
    I'm using Microsoft Word 2002 to maintain some documentation. The documents are stored in a version control repository (Subversion) together with the source code they document. My Subversion client (TortoiseSVN) comes with a little VBA script that makes it possible to leverage the built-in revisions feature when merging different branches. In other words, when I want to copy changes from one document to another, Word compares both documents (source and target) and builds a third document that has the contents of the source doc tagged as revisions, so I can then review the differences one by one and confirm or discard changes. While this is handy, it also means that making a single change to the source document forces me to review all the differences between both documents and discard all of them except the only actual change. My question is: do you know of an application or plug-in that's able to find the differences between two Word documents and apply those differences to a third document? (I know 2002 is very old, but that's what my company gives me; I'm open to solutions that use newer versions though.)

  • Windows 7 taskbar groups to use vertical menus only, not thumbnails?

    - by Justin Grant
    I frequently have 10 or more windows of the same application (e.g. Outlook, Word, IE, etc.) open at one time. Windows 7's new taskbar grouping thumbnail feature shows a preview of open windows (aka thumbnails) when you single-click on the taskbar group for that application. But when I have over 10 windows open (more thumbnails than will fit horizontally on my screen), Windows reverts to a vertical menu. This is disconcerting since I need to train myself to deal with two separate behaviors and can never anticipate what I'm going to see when I click on a taskbar group. Furthermore, I find the thumbnails more difficult to visually scan through (vs. the vertical menu) because only 1-2 words of each window's title are shown. Typically that's not enough text to disambiguate and help me find the right window. I'd like to force Windows 7's taskbar grouping to always show a vertical menu (like in XP) instead of sometimes showing thumbnail previews and sometimes showing a vertical menu. Anyone know how (whether?) this can be done? UPDATE: BTW, I'm running the RTM version (build 7600) of Windows 7. There are apparently other solutions out there which work on earlier builds, but which don't work on the RTM build.

  • Error while installing boost_1_54

    - by Farhat
    On trying to install Boost I get this error during configuration checks. Googling did not give any pointers.

        [root@heracles boost_1_54_0]# ./b2 install
        Performing configuration checks

            - 32-bit                   : no  (cached)
            - 64-bit                   : yes (cached)
            - arm                      : no  (cached)
            - mips1                    : no  (cached)
            - power                    : no  (cached)
            - sparc                    : no  (cached)
            - x86                      : yes (cached)
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - has_icu builds           : no  (cached)
        warning: Graph library does not contain MPI-based parallel components.
        note: to enable them, add "using mpi ;" to your user-config.jam
            - zlib                     : yes (cached)
            - iconv (libc)             : yes (cached)
            - icu                      : no  (cached)
            - icu (lib64)              : no  (cached)
            - compiler-supports-ssse3  : yes (cached)
            - compiler-supports-avx2   : no  (cached)
            - gcc visibility           : yes (cached)
            - long double support      : yes (cached)
        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - zlib                     : yes (cached)

    How can the alternative for allocator sources be located? Thanks.
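
    A hedged workaround, if the Coroutine library isn't actually needed by this build: b2 accepts a standard --without-<library> switch, so the failing target can simply be left out. Whether this resolves the underlying property mismatch is an assumption; it only sidesteps it.

        # skip building Boost.Coroutine (the library the unmatched alternatives belong to)
        ./b2 --without-coroutine install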

  • JSP Content Issue in Tomcat

    - by gautam vegeta
    There is one application where I work that still uses manual builds, i.e. manually moving the servlet classes and JSP files from Dev to QA and finally to Prod. This is the method used in this application and it can't be changed, for some weird reasons. BTW, this is not the problem. We did a manual build recently where we transferred JSP files from QA to Prod, and we noticed that the JSP content served did not correspond to the updated JSPs but matched the JSP files that were present on the server prior to the deployment. We did not restart Tomcat, since JSP files are normally picked up automatically when their contents change. The problem persisted even 6 hours after deployment, which should rule out any delay from time-zone differences. To fix it we had to go into every JSP file individually, type something, save it, delete that change, and save it again; then it worked perfectly. The JSP content before and after was never changed; we did this only to change the modification date. If we think of it as a timestamp problem, how is this possible, given that the old JSP files on the server were at least a month old and the ones being deployed were definitely newer than that? Why did this happen? It did not happen when we did the same type of deployments earlier. How can we prevent it from happening in the future?
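
    A hedged sketch of the same fix in bulk: Jasper recompiles a JSP when the file's modification time is newer than its compiled copy, so bumping the timestamps after copying should have the same effect as editing each file by hand. The webapp path here is a placeholder.

        # touch every deployed JSP so Tomcat notices them and recompiles
        find /opt/tomcat/webapps/myapp -name '*.jsp' -exec touch {} +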

  • Using virtualization infrastructure for J2EE application distribution- viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:

    - latest app server patch not applied
    - conflicting third-party libraries inside the app server root
    - server runtime and tuning parameters not configured (for example, the number of connections in the database pool)

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as with using virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure: a distributed VM should be less affected by the hosting environment. So far, the downside I see is in application server licenses; clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is the complexity of maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here:

        http://article.gmane.org/gmane.comp.search.xapian.general/8855
        http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/

    I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error:

        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php

    There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:

        dpkg-source: error: aborting due to unexpected upstream changes

    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas?

    Edit, following James' answer: it builds fine if I follow the instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error:

        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         libapache2-mod-php5

    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.
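
    For reference, a minimal sketch of the install sequence the edit above describes (the package filename is whatever the build produced):

        sudo service apache2 stop
        sudo dpkg -i php5-xapian_*.deb
        sudo service apache2 start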

  • Anyone else experiencing high rates of linux server crashes today?

    - by Bron Gondwana
    Just today, Sat June 30th, starting soon after the start of the day GMT, we've had a handful of blades in different datacentres as managed by different teams all go dark: not responding to pings, screen blank. They're all running Debian Squeeze, with everything from the stock kernel to custom 3.2.21 builds. Most are Dell M610 blades, but I've also just lost a Dell R510, and other departments have lost machines from other vendors too. There was also an older IBM x3550 which crashed and which I thought might be unrelated, but now I'm wondering. The one crash which I did get a screen dump from said:

        [3161000.864001] BUG: spinlock lockup on CPU#1, ntpd/3358
        [3161000.864001] lock: ffff88083fc0d740, .magic: dead4ead, .owner: imapd/24737, .owner_cpu: 0

    Unfortunately the blades all supposedly had kdump configured, but they died so hard that kdump didn't trigger, and they had console blanking turned on. I've disabled console blanking now, so fingers crossed I'll have more information after the next crash. I just want to know if it's a common thread or "just us". It's really odd that they're different units in different datacentres bought at different times and run by different admins (I run the FastMail.FM ones)... and now even different vendor hardware. Most of the machines which crashed had been up for weeks/months and were running 3.1 or 3.2 series kernels. The most recent crash was a machine which had only been up about 6 hours running 3.2.21.

  • How do I clear out the ssh-agent entries (on Mac OS X)?

    - by cwd
    I'm running Mac OS X, and it appears that after SSHing to several machines using identity files, my ssh-agent builds up a lot of identities/keys and then sometimes offers too many to a remote machine, causing it to kick me off before connecting:

        Received disconnect from 10.12.10.16: 2: Too many authentication failures for cwd

    It's pretty obvious what's happening, and this page talks about it in more detail:

        SSH servers only allow you to attempt to authenticate a certain number of times. Each failed password attempt, each failed pubkey/identity that is offered, etc, take up one of these attempts. If you have a lot of SSH keys in your agent, you may find that an SSH server may kick you out before allowing you to attempt password authentication at all. If this is the case, there are a few different workarounds.

    Rebooting clears the agent and then everything works OK again. I can also add this line to my .ssh/config file to force it to use password authentication:

        PreferredAuthentications keyboard-interactive,password

    Anyhow, I saw the note on the page I referenced talking about deleting keys from the agent, but I'm not sure if that applies on a Mac since they appear to be cleared after reboot anyhow. Is there a simple way to clear out all keys in the ssh-agent (the same thing that happens at reboot)?
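
    A hedged sketch using the standard OpenSSH ssh-add flags, which should behave the same on OS X:

        ssh-add -l                  # list the identities currently loaded in the agent
        ssh-add -D                  # delete all identities from the agent (roughly what a reboot does)
        ssh-add -d ~/.ssh/id_rsa    # delete just one identity (the key path is a placeholder)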

  • CLOSE_WAIT sockets burst - perhaps because of iptables settings?

    - by Fabrizio Giudici
    I have an Ubuntu 12.04 server virtual box where basically the installed software and configuration are the defaults, plus the installation of a Jetty 6 server which serves a few websites. To keep things simple I didn't install Apache httpd and used iptables to expose Jetty (which runs on port 8080) on port 80. These are the results of /sbin/iptables -t nat -L:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                     tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal  tcp dpt:http redir ports 8080

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  localhost                     tcp dpt:http redir ports 8080
        REDIRECT   tcp  --  anywhere  Ubuntu-1104-natty-64-minimal  tcp dpt:http redir ports 8080

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination

    I must confess I have a shallow comprehension of how iptables works, in particular of the different kinds of chains. This setup works, but sometimes I have an explosion of sockets that stay permanently in the CLOSE_WAIT state. I know what this state means, but since I didn't write the code that manages the servlets (they are handled by Jetty) I can't fix the problem by patching my code. Eventually the number of CLOSE_WAIT sockets builds up and makes the server unresponsive, so I have to restart Jetty. I've looked around for similar problems with CLOSE_WAIT, and only found cases related to the programmer's own code, or problems with Tomcat, not Jetty. I was wondering whether they could be related to a partially broken iptables configuration (the alternative is a bug in Jetty 6, but I first want to exclude other possible causes). Thanks.
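
    A small diagnostic sketch, using standard net-tools/iproute2 commands, to watch how the leak grows and which addresses the stuck sockets belong to:

        # count sockets currently stuck in CLOSE_WAIT
        netstat -ant | grep -c CLOSE_WAIT
        # or, with iproute2, list them together with their local/peer addresses
        ss -tan state close-wait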

  • Reduce "Metafile" memory usage?

    - by Jay Conrod
    My work computer (Windows 7 64-bit) spends a lot of time swapping memory when I switch between programs. This surprises me since I have 4 GB of RAM, and the programs I use aren't particularly RAM hungry (Outlook, Emacs, p4win, Firefox, various build tools). I downloaded RAMMap, and it shows over a gigabyte of memory used by "Metafile". From the Sysinternals blog:

        Metafile is part of the system cache and consists of NTFS metadata. NTFS metadata includes the MFT as well as the other various NTFS metadata files. ... In the MFT each file attribute record takes 1k and each file has at least one attribute record. Add to this the other NTFS metadata files and you can see why the Metafile category can grow quite large on servers with lots of files.

    So I understand what the "Metafile" data is... I work on large builds comprising hundreds of thousands of files (none are that big, but they add up to several gigabytes). My question is how can I reduce the amount of memory used by "Metafile"? I'm not actively using all those files at once, so why does Windows need to keep info in RAM? Restarting my machine every time I sync a new build is really annoying.

  • tmpreaper, --protect and a non-root user

    - by nsg
    Hi, I'm a little confused. I have a download directory from which I want to remove all files older than 30 days with tmpreaper. Just one problem: the directory in question is a separate partition with a lost+found directory, which of course I need to keep, so I added --protect 'lost+found'. The problem is that tmpreaper outputs:

        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied (PID 30604)
        Back from recursing down `lost+found'.
        Entry matching `--protect' pattern skipped. `lost+found'

    I have tried with other patterns like lost* and so on... I'm running tmpreaper as a non-root user because there is no reason for superuser privileges: I own all the files (except lost+found). Am I forced to run tmpreaper as root? Or are my shell skills not as good as I thought? I guess the problem is:

        tmpreaper will chdir(2) into each of the directories you've specified for cleanup, and check for files matching the <shell_pattern> there. It then builds a list of them, and uses that to protect them from removal.

    Any thoughts and/or advice?

    Edit: The command I'm trying to run is something like

        $ /usr/sbin/tmpreaper -t --protect 'lost+found' 30d /mydir 1> /dev/null
        error: chdir() to directory `lost+found' (inode 11) failed: Permission denied

    Edit 2: I read the source code for tmpreaper-1.6.13 and found this:

        if (safe_chdir (dirname))
            exit(1);

    and

        if (chdir (dirname)) {
            message (LOG_ERROR, "chdir() to directory `%s' (inode %lu) failed: %s\n",
                     dirname, (u_long) sb1.st_ino, strerror (errno));
            return 1;
        }

    So it seems tmpreaper needs to be able to chdir into all directories, ignored or not. I see a few options left:

    - Run tmpreaper as root
    - Move the download directory
    - Find an alternative tool (tmpwatch?)

    I will give it some more research before I make a choice.
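
    A hedged sketch of the "alternative tool" route using plain find, which only needs to stat lost+found rather than descend into it. The path and the 30-day cutoff are taken from the command above; run the -print version first to check what would match.

        # dry run: list regular files older than 30 days, skipping lost+found entirely
        find /mydir -path /mydir/lost+found -prune -o -type f -mtime +30 -print
        # once the list looks right, remove them
        find /mydir -path /mydir/lost+found -prune -o -type f -mtime +30 -exec rm -f {} +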

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is: what do we do to redirect the old pages to the new friendly URLs? I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below.

    1 - Is there a limit to the number of redirects in IIS?
    2 - Would having even a few thousand redirects affect IIS performance?
    3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question; I can ask on more SEO forums if nobody here is sure.)
    4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?

    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with things that sound problematic and bad (such as having 6000 redirects).

  • Managing rolling deployments in the cloud

    - by Josh Nankin
    Recently I've been experimenting with various cloud management tools like RightScale, Scalr, and custom scripts for managing a variety of servers, each hosting several roles (app, db, load balancer, job queues, etc). The one thing I find lacking in most solutions is a way to do rolling deployments, i.e. running deployments sequentially across a number of servers with the same role. For instance, I don't want to rebuild all of my webservers at the same time, as that will almost definitely result in some downtime or 500s for my customers. I'd rather have one or two servers build at a time, while other servers are still available to handle requests. The other alternative is obviously to launch new servers that automatically update themselves on boot, but this isn't as cost effective, and most likely requires more time for the build to complete (it's faster to build on an existing server than to launch a new server and kill old ones). We've all heard of the big companies having the famous "push to build" button (companies like Twilio, Etsy, etc.), but it seems that they all have custom implementations of this. I'm not talking about a simple ssh-loop, clusterssh, or even an mcollective; I'd prefer something with a nice simple interface that allows me to specify something like a RightScript or a Scalr script to run on a set of servers with a specific role, and that builds them sequentially. Does anyone know of easy ways to get this done, or is this a candidate for a new open source project?

  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: Get a small team using a standard development image rather than 4 software devs setting up their own environments.

    Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, android-sdk, etc. It's a giant PITA that, when repeated 4 times by 4 developers (not sys admins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts, or set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images. This doesn't really address tooling, though, or the GUI-desktop dev work that needs doing. So I see three basic strategies: ghosting, virtualization, and finally creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian OpenBox and must allow a mix of 3rd gen Core i7 notebooks with 8 GB minimum to work both single and multihead. Important: the lappies are not the same, but a mix of 2012 MacBooks and PCs. So:

    - Virtualization: is doing all of your work within a VM, like VirtualBox, practical on this hardware or annoying?
    - Ghosting: will laptops from different manufacturers make this impractical?
    - DIY distro: short of scripting a bunch of package installs, I don't know if there's any "distro-maker" that could keep this from being an epic project of scripting package installs.

    So, any advice?

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell the users to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as the host and CentOS 5.x as a guest OS (and with which system: Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue.

    EDIT: The target audience / users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there to build your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
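
    A quick hedged check for the hardware requirement mentioned above: KVM needs Intel VT-x or AMD-V, and on Linux the corresponding CPU flags show up in /proc/cpuinfo.

        # a non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm)
        egrep -c '(vmx|svm)' /proc/cpuinfo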

  • What can prevent a Server 2008 machine accessing its OWN UNC shares?

    - by Simon
    I need to set up a UNC share for my hosted dedicated server to access a share on itself. Unfortunately TFS requires a UNC share. I am on a Windows Server 2008 Standard SP2 64-bit dedicated server behind a PIX 501 firewall hosted with GoDaddy. I just cannot get the server to access itself, and get this error:

        Windows cannot access \\SERVER\SHARE
        Check the spelling of the name... etc.

    I've found numerous questions about this but no answer to my problem.

    - Server 2008 Standard x64 SP2
    - Workgroup - not domain
    - Windows Firewall is off
    - Computer browser service is on
    - I am trying to access \\MYMACHINE\TFS-BUILDS by typing it in - or double clicking. Neither works.
    - Machine has a single network card
    - File sharing wizard says the share was OK
    - Share was showing under 'Computer management'
    - Permissions are set to 'everyone' full control
    - No obvious errors in the event log
    - Reboot didn't fix it

    Unfortunately I cannot try to access other shares in or out of this machine because it is a hosted dedicated server and the only machine behind a hardware firewall. The only thing left I can think of is that the hardware firewall needs to be configured. I don't think it is this, because we have a 2003 Server machine behind a different hardware firewall and that one works fine. What on earth is left?!

  • rpmbuild gives seg fault

    - by Deepti Jain
    I am trying to build an rpm using the rpmbuild tool. I have source code whose built binaries come to around 30 GB. This software, for which I am making the rpm, has dozens of executables. When I copy only the binaries of a single executable (e.g. init), my rpm builds successfully. But when I dump the entire build into the rpm, rpmbuild does everything but gives a seg fault at the end. Here is my spec file:

        # This is a sample spec file for wget
        %define _topdir         /root/mywget
        %define name            source
        %define release         1
        %define version         1.12
        %define _builddir       /root/mywget/BUILD/glenlivet
        %define _buildrootdir   /root/mywget/BUILDROOT
        %define _buildroot      /root/mywget/BUILDROOT
        %define _sourcedir      /root/mywget/SOURCES

        BuildRoot:      %{_buildroot}
        Summary:        GNU source
        License:        GPL
        Name:           %{name}
        Version:        %{version}
        Release:        %{release}
        Source:         %{name}-%{version}.tar.gz
        Prefix:         /usr
        Group:          Development/Tools

        %description
        The GNU sample program downloads files from the Internet using the command-line.

        %prep
        %setup -q -n glenlivet

        %build
        cd %{_builddir}
        make all

        %install
        rm -rf %{_buildrootdir}
        mkdir -p %{_buildrootdir}/bin
        cp -p -r %{_builddir}/build/obj-x64/* %{_buildrootdir}/bin/

        %files
        %defattr(-,root,root)
        /bin/*

    If I only copy some of the binaries (let's say one utility and its dependent binaries) it works fine. But when I try to copy the entire build, I get a seg fault. I get the seg fault after rpmbuild has executed these sections:

        %prep
        %build
        %install

    rpmbuild also processes my source file:

        Processing files: source-1.12-1
        Finding Provides:
        Finding Requires:
        Finding Supplements:
        Provides: ......
        Requires: ......
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/mywget/BUILDROOT
        Segmentation fault

    Any clue what is going wrong or where rpmbuild fails? Thanks in advance.

  • Developer hardware autonomy in a managed desktop environment [closed]

    - by Troy Hunt
    I’m looking for some feedback on how developer PCs are managed within environments that have a strict managed desktop policy (normally large corporations). For example, many corporate environments control the installation of software and the deployment of patches and virus updates through a centralised channel. This usually means also dictating the OS version and architecture (32 bit versus 64 bit), which will likely also mean standardised hardware configurations. I’m particularly interested in feedback from developers who work in this sort of environment but have a high degree of autonomy over their machines. This might mean choosing your own hardware vendor, OS type and version, and perhaps how the machines are built and maintained. I have several specific questions:

    - How do you satisfy the needs of security, governance etc whilst maintaining your autonomy? For example, how do you address concerns about keeping virus definitions and OS patches up to date?
    - Do you have a process for gaining exemption from standard desktop builds and, if so, what do you need to demonstrate in order to get this?
    - How have you justified this need to the decision makers? Essentially, what is the benefit to your role as a developer of having this degree of autonomy?

    Thanks very much everyone.

    Update: There's a great post from Jean-Paul Boodhoo which addresses the developer tool component of the question here: http://blog.jpboodhoo.com/TheFallacyOfTheStandardizedDeveloperMachineimage.aspx

  • Can't run utilities/.exe's that use the network from a [DFS] windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources. This works just fine under Windows Server 2003. For example:

        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1

    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can this be enabled via GPO or in a fashion that can be scripted during automated builds?

    Edit: The commands/tools do work just fine when run from local drives.

    Edit 2: Wget example:

        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15--  http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'

    wget can neither use DNS to resolve the IP nor can it use HTTP if provided an IP directly.

    Edit 3: The problem seems to be tied to DFS/DFS shares. Tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.

  • Extending a partition with gparted on Linux, but no more space in the VM

    - by Asken
    I have a test VM installation of Linux running a build server. Unfortunately I just pressed OK when adding the disk and ended up with an 8 GB drive to play with. Well into the testing, the builds are consuming more and more space, of course. The VM drive was resized to 21 GB, and using gparted I expanded the drive partitions and that all worked fine, but when I go back into the console and do df there's still only 8 GB available. How can I claim the other 13 GB I added?

        fdisk -l

        Disk /dev/sda: 21.0 GB, 20971520000 bytes
        255 heads, 63 sectors/track, 2549 cylinders, total 40960000 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006d284

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      499711      248832   83  Linux
        /dev/sda2          501758    40959999    20229121    5  Extended
        /dev/sda5          501760    40959999    20229120   8e  Linux LVM

        vgdisplay

        --- Volume group ---
        VG Name               ct
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               19.29 GiB
        PE Size               4.00 MiB
        Total PE              4938
        Alloc PE / Size       1977 / 7.72 GiB
        Free  PE / Size       2961 / 11.57 GiB
        VG UUID               MwiMAz-52e1-iGVf-eL4f-P5lq-FvRA-L73Sl3

        lvdisplay

        --- Logical volume ---
        LV Name                /dev/ct/root
        VG Name                ct
        LV UUID                Rfk9fh-kqdM-q7t5-ml6i-EjE8-nMtU-usBF0m
        LV Write Access        read/write
        LV Status              available
        # open                 1
        LV Size                5.73 GiB
        Current LE             1466
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:0

        --- Logical volume ---
        LV Name                /dev/ct/swap_1
        VG Name                ct
        LV UUID                BLFaa6-1f5T-4MM0-5goV-1aur-nzl9-sNLXIs
        LV Write Access        read/write
        LV Status              available
        # open                 2
        LV Size                2.00 GiB
        Current LE             511
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           252:1
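
    A hedged sketch of the remaining steps: the vgdisplay output above already shows 11.57 GiB free in the ct volume group, so what is left is growing the root logical volume and then the filesystem inside it. This assumes the root filesystem is ext3/ext4, which resize2fs can grow online.

        # hand all remaining free extents in the volume group to the root LV
        lvextend -l +100%FREE /dev/ct/root
        # grow the ext filesystem to fill the enlarged LV
        resize2fs /dev/ct/root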

  • How do I get yum to see updates to a local repo without cleaning cache?

    - by Matt
    I have set up a local yum repository which I use to install test builds. For testing purposes, my packages are versioned by <svn version number>.<date>.<time> (e.g. 12345.20110908.150404). The trouble is, once I make a new RPM, copy it to the repository directory and run createrepo $REPO_DIR, yum does not see the new RPM as being available.

        $ cd $REPO_DIR
        $ ls -1
        repodata
        package-12345.20110908.150404-1.x86_64.rpm
        package-12345.20110908.174329-1.x86_64.rpm
        $ createrepo .
        # ...snip...
        $ rpm -q package
        package-12345.20110908.150404-1.x86_64
        $ yum list --showduplicates package
        Installed Packages
        package.x86_64        12345.20110908.150404-1        @repo
        Available Packages
        package.x86_64        12345.20110908.150404-1        repo

    I can see the updates and grab them if I run yum clean all and then re-fetch the metadata, but I think this just means I need to be doing something else from the repo, as I don't have to do that for other yum repos. How do I need to set up my local repository so that I only need to run yum update from the client without having to clean my yum cache?
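
    A hedged sketch of the usual ways around this; metadata_expire is a standard yum repo option, while the repo id "repo" is taken from the listing above and the .repo filename is a placeholder.

        # expire only this repo's cached metadata instead of cleaning everything
        yum --disablerepo='*' --enablerepo=repo clean metadata
        yum list --showduplicates package

        # alternatively, make yum always treat the local repo's metadata as stale
        # by adding the line
        #     metadata_expire=0
        # to the repo's stanza in /etc/yum.repos.d/local.repo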

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated e-commerce management system on my hosting service. For that product, no support is provided any more by the vendor.

    Brief explanation: the shop website, which claims to run on a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own.

    The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price".

    The analysis:

    - MySQL contains a description cut off at the € sign, so it's not PHP's fault.
    - The Access databases contain the full descriptions with the € sign, so it's not the fault of the webmaster writing bad descriptions or of eDisplay cutting them.
    - The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign.
    - The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À
    - The vsftpd log reports (obfuscated for privacy):

        Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c

      which seems to be a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP).
    - The eDisplay internal FTP client provides no option for ASCII/binary transfer modes.
    - [Add] Trying to manually upload the SQL file via SFTP messes up the euro sign too.
    - [Add 2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either.

    It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host.

    The server: it's an Azure virtual machine running openSUSE 12.2 with both vsftpd and openSSH.

    The question: without asking the customer to manually upload files using FileZilla or to replace € with &euro; (he refuses), what can I do on the server side to prevent vsftpd from screwing up the euro sign?

  • XBMC remote control key repeat

    - by Amigable Clark Kant
    Those who have used XBMC with a WiFi remote, such as an Android or iOS device with the official XBMC remotes, have probably seen this at one time or another: your remote stops working properly, and when you press a key, that key is repeated very fast. Sometimes you can break the key-repeat loop by pressing that same key again. Sometimes (as on this particular morning) you cannot. This problem has existed for literally years (it even occurs on the old XBOX-only builds), but there seems to be no definite explanation as to what is causing it. I am asking for a workaround. (If you Google, you can find threads where people are ridiculed for bringing up this problem, which is one reason I ask here instead of on the official forums. Also the fact that this problem has persisted for so many years.) I am running, right now, XBMCBuntu Live Eden and the latest official iOS remote, although I have seen this problem with every combination of remote and XBMC version I have tried over the years, which are many (XBMC on Windows, Linux, and OS X; remotes on Android and iOS). Link to bug #136 for the Android remote: it's marked as "fixed" as of client version 636, but the problem is seen again in rev 730. Go figure. There is something fundamentally wrong with how keypresses are sent from the various remotes to XBMC, since this problem is seen across time, XBMC versions, and iOS/Android.
