Search Results

Search found 13697 results on 548 pages for 'linking errors'.


  • System Center Essentials server running out of disk space due to stored old updates

    - by Ricket
    We have a System Center Essentials (SCE) server to filter updates to our laptops. We've configured it to download the update, and then the laptops get the update from this server; this of course reduces our internet bandwidth and the time it takes for employees to receive the updates, which reduces the complaints we get about how long updates take. However, we currently have a total of 2,255 updates stored on the server. SCE gives this breakdown:

        Updates with installation errors: 29
        Updates needed by computers: 280
        Updates installed/up-to-date: 0
        Updates with no status: 1946

    Our little server has 68 GB of hard disk space, and the updates are currently taking 32 GB and counting. Some of the updates date back to 2003, but we can't figure out a way to delete them to free up space on the server. Right-clicking an update and clicking Uninstall threatens to remove the update from all computers, which is not what we want. Some of the updates even inform us upon viewing:

        This update has been replaced by a newer update. Before declining this update, it is recommended that you approve the new update first and verify that this update is no longer needed by any computers.

    How do you prevent your SCE server from filling its hard drive space? Is there a way to configure the server to only keep updates that are still needed? Furthermore, why (in the above breakdown of updates) are there so many updates with "no status" and 0 updates that are "installed/up-to-date"?

    Read the article

  • Unable to access stackexchange sites from this system

    - by Sandeepan Nath
    Earlier, I was not able to access most of the Stack Exchange sites like Stack Overflow, Programmers.SE etc. on my home Windows XP system. I was able to access only a few like http://meta.stackexchange.com and not even http://www.meta.stackexchange.com (note the www). I tried many other sites like http://www.stackoverflow.com and http://area51.stackexchange.com/ but was getting page-not-found errors in all browsers. Even pinging from the terminal reported "destination host unreachable". I did not check recently, but maybe all SE sites are unreachable now.

    I was clueless about what the issue could be. I thought it was some firewall issue, so I stopped AVG antivirus's firewall, then completely uninstalled it, and even turned off Windows Firewall. The sites are still not reachable, even after a fresh installation of Windows 7.

    Then I noticed a "Too many requests" notice on Google, on this page: http://www.google.co.in/sorry/?continue=http://www.google.co.in/# I don't know why this appeared, but I guess somehow too many requests were sent to these sites and they blocked me. In that case, though, SE would be smart enough to show a captcha like Google does.

    So, how do I confirm the problem and fix it? Similar questions like these don't look solved yet: "Unable to access certain websites" and "Unable to Access Certain Websites". I have lately started actively participating in lots of SE sites. New questions keep popping up in my mind and I am not able to ask them. Please help! Thanks
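
    A quick way to separate a DNS failure from a routing or filtering problem is to resolve each host and then attempt a plain TCP connection to it. The script below is a minimal diagnostic sketch, assuming Python 3.6+ is available on the affected machine; the host list is just an example and can be swapped for any sites that fail.

        # dns_vs_route_check.py - is the failure DNS, routing, or port filtering? (sketch)
        import socket

        HOSTS = ["stackexchange.com", "meta.stackexchange.com",
                 "stackoverflow.com", "area51.stackexchange.com"]

        for host in HOSTS:
            try:
                ip = socket.gethostbyname(host)          # DNS lookup
            except socket.gaierror as e:
                print(f"{host}: DNS lookup failed ({e})")
                continue
            try:
                with socket.create_connection((ip, 80), timeout=5):   # plain TCP connect
                    print(f"{host} ({ip}): TCP port 80 reachable")
            except OSError as e:
                print(f"{host} ({ip}): resolves, but connect failed ({e})")

    If the names resolve but the connect fails, the problem is in routing or filtering between you and the site rather than in DNS or the browser.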

    Read the article

  • WT-NMP - PHP-CGI randomly stops running with no error log

    - by alexfontaine
    We have recently installed WT-NMP and are currently running php-cgi with PHP 5.4.24. We are running fairly simple PHP scripts, and when testing, everything runs fine. Over the weekend we wanted to keep the server running to test it over a longer period of time. The server and scripts ran fine all day on Friday, but sometime late on Saturday, php-cgi stopped running. There are no errors in the error log (C:\WT-NMP\log). In the configuration (php.ini) I have the following options set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        html_errors = On
        error_log = "c:/wt-nmp/log/php_error.log"

    We also have the standard nginx.conf error logs:

        access_log "c:/wt-nmp/log/nginx_access.log";
        error_log "c:/wt-nmp/log/nginx_error.log" warn;

    So, since the log directory is empty, I am assuming that the running PHP scripts and general nginx operations are not what is causing php-cgi to stop. My questions are: What else could cause php-cgi to stop running? Are there any other logging options we could turn on that would help us track this down? Are there other log locations that we should be looking at? Thanks!
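
    One commonly reported reason php-cgi exits silently on Windows is the FastCGI request limit (the PHP_FCGI_MAX_REQUESTS environment variable, 500 by default): the process quits cleanly after serving that many requests, so nothing reaches the error log. Whatever the cause turns out to be, a small watchdog can at least detect and bridge the outage. The sketch below assumes Python 3 is installed and that php-cgi listens on 127.0.0.1:9000 as referenced by fastcgi_pass in nginx.conf; the restart command and path are placeholders for however WT-NMP launches php-cgi on your setup.

        # phpcgi_watchdog.py - restart php-cgi when its FastCGI port stops answering (sketch)
        import socket, subprocess, time

        HOST, PORT = "127.0.0.1", 9000        # adjust to match fastcgi_pass in nginx.conf
        RESTART_CMD = [r"C:\WT-NMP\php\php-cgi.exe", "-b", f"{HOST}:{PORT}"]  # placeholder path

        def port_alive(host, port, timeout=3):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        while True:
            if not port_alive(HOST, PORT):
                print(time.strftime("%Y-%m-%d %H:%M:%S"), "php-cgi down, restarting")
                subprocess.Popen(RESTART_CMD)   # fire and forget; php-cgi keeps running in the background
            time.sleep(30)

    Running something like this as a scheduled task keeps the site up while you track down why the worker exits.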

    Read the article

  • CentOS yum install git-svn

    - by bob
    Running yum install git-svn on CentOS produces the following errors:

        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * addons: mirror.eshk.hk
         * base: centos.01link.hk
         * epel: mirror.bjtu.edu.cn
         * extras: mirror.eshk.hk
         * rpmforge: apt.sw.be
         * updates: mirror.vpshosting.com.hk
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package git-svn.i386 0:1.7.3.4-1.el5.rf set to be updated
        --> Processing Dependency: perl(SVN::Core) for package: git-svn
        --> Processing Dependency: perl(Error) for package: git-svn
        --> Processing Dependency: perl(Term::ReadKey) for package: git-svn
        --> Running transaction check
        ---> Package perl-Error.noarch 1:0.17010-1.el5 set to be updated
        ---> Package perl-TermReadKey.i386 0:2.30-4.el5 set to be updated
        ---> Package subversion-perl.i386 0:1.4.2-4.el5_3.1 set to be updated
        --> Processing Dependency: subversion = 1.4.2-4.el5_3.1 for package: subversion-perl
        --> Finished Dependency Resolution
        subversion-perl-1.4.2-4.el5_3.1.i386 from base has depsolving problems
          --> Missing Dependency: subversion = 1.4.2-4.el5_3.1 is needed by package subversion-perl-1.4.2-4.el5_3.1.i386 (base)
        Error: Missing Dependency: subversion = 1.4.2-4.el5_3.1 is needed by package subversion-perl-1.4.2-4.el5_3.1.i386 (base)
         You could try using --skip-broken to work around the problem
         You could try running: package-cleanup --problems
                                package-cleanup --dupes
                                rpm -Va --nofiles --nodigest
        The program package-cleanup is found in the yum-utils package.
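
    This kind of "Missing Dependency: subversion = ..." error usually means the installed subversion package (often a newer build pulled in from a third-party repository such as rpmforge) no longer matches the exact version that the base repository's subversion-perl was built against. A quick way to confirm the mismatch, as a minimal sketch assuming Python 3.7+ and rpm are present, is to compare the installed version against the one the error names:

        # check_svn_mismatch.py - compare installed subversion with the version subversion-perl expects
        import subprocess

        REQUIRED = "1.4.2-4.el5_3.1"   # taken from the yum error message above

        out = subprocess.run(
            ["rpm", "-q", "--qf", "%{VERSION}-%{RELEASE}\n", "subversion"],
            capture_output=True, text=True,
        )
        installed = out.stdout.strip()
        print("installed subversion :", installed or "(not installed)")
        print("required by base     :", REQUIRED)
        if installed and installed != REQUIRED:
            print("mismatch - subversion likely comes from a third-party repo (e.g. rpmforge);")
            print("excluding subversion* from that repo, or taking subversion-perl from the same repo, may help")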

    Read the article

  • Task Manager Does Not Start Every Time

    - by diek
    I have had a problem that started some time ago, maybe 6 months. I should have noted the first instance, but I didn't. I am using Windows 7 Pro, 32-bit. Under normal circumstances I can open the Task Manager via the task bar or Ctrl+Alt+Del. But when a program gets stuck, causing a freeze or a non-responsive system, and I try to open the Task Manager, it will not work. I have had plenty of similar problems in the past and had no trouble getting it open then. I have searched the internet, but the only results I can find are for cases where the Task Manager will not start under any circumstances. I am running ESET NOD32 as the anti-virus. The latest example happened when I opened a new tab in Google and tried to copy an image; Google accounts for at least 50% of the examples. I ran the System File Checker tool (sfc /scannow) as recommended in another post; no errors were returned. Any guidance would be appreciated.

    Read the article

  • How to run VisualSVN Server on port 443 with IIS running on the same server?

    - by Metro Smurf
    Server 2008 R2 SP1, VisualSVN Server 2.1.6. The IIS server hosts about 10 sites. One of them uses https over port 443 with the following bindings:

        http  x.x.x.39:80   site.com
        http  x.x.x.39:80   www.site.com
        https x.x.x.39:443

    VisualSVN Server properties:

        server name: svn.SomeSite.com
        server port: 443
        server binding: x.x.x.40

    No sites in IIS are listening on x.x.x.40. When starting up VisualSVN Server, the following errors are thrown:

        make_sock: could not bind to address x.x.x.40:443
        (OS 10013) An attempt was made to access a socket in a way forbidden by its access permissions.
        no listening sockets available, shutting down

    When I stop Site.com in IIS, VisualSVN Server starts up without a problem. When I bind VisualSVN Server to port 8443 and start Site.com, VisualSVN Server also starts without a problem. My goal is to be able to access VisualSVN Server with a normal URL, i.e., one that doesn't use a port number in the address: https://svn.site.com rather than https://svn.site.com:8443. What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server?
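
    This symptom usually means that, even though the IIS site's binding shows a specific IP, HTTP.sys has registered its HTTPS listener on the wildcard address 0.0.0.0:443, which blocks any other process from binding 443 on a different IP. The sketch below is a quick bind test, assuming Python 3 is run elevated on the server and that x.x.x.40 is the address VisualSVN Server should use (substitute the real IP): if the bind fails while the IIS site is running and succeeds when it is stopped, the wildcard listener is the culprit and the HTTPS/SSL binding needs to be tied to x.x.x.39 specifically.

        # bind_test.py - can this host bind 443 on the SVN address while IIS is running? (sketch)
        import socket

        ADDR = ("x.x.x.40", 443)   # substitute the real IP VisualSVN Server should listen on

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind(ADDR)           # typically fails with WinError 10013 if HTTP.sys owns 0.0.0.0:443
            s.listen(1)
            print("bind succeeded - port 443 on", ADDR[0], "is free for VisualSVN Server")
        except OSError as e:
            print("bind failed:", e)
            print("something (most likely HTTP.sys via a wildcard HTTPS binding) already owns this address/port")
        finally:
            s.close()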

    Read the article

  • Need to find jni.h in CMake on Mac

    - by Ilan Tal
    I am trying to compile VTK on a MacBook Air. I am using CMake 2.8.9, with Xcode 4 as the generator. If I press the Configure button with VTK_WRAP_JAVA not checked, it completes with no errors. However, I definitely need the Java wrapping, since my main program is in Java and I need to get to VTK, which is C++. As soon as I check the Java wrapping I get "Could NOT find JNI". It is apparently looking for jni.h, which Linux has no problem finding, but the Mac apparently can't. I did a locate jni.h and got:

        new-host-2:~ geraldkolodny$ locate jni.h
        /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.7.sdk/System/Library/Frameworks/JavaVM.framework/Versions/A/Headers/jni.h
        /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/System/Library/Frameworks/JavaVM.framework/Versions/A/Headers/jni.h
        /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/include/jni.h

    I tried manually putting either entry 2 or 3 (without the jni.h at the end) into JAVA_INCLUDE_PATH2, but it still can't find jni.h. Xcode used to have a template for JNI, but that is now gone in the latest version. I am fresh out of ideas on how to solve this problem. I'd be grateful for any suggestions. Thanks, Ilan
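
    CMake's FindJNI module needs more than one path: JAVA_INCLUDE_PATH should point at the directory holding jni.h and JAVA_INCLUDE_PATH2 at the directory holding the machine-dependent jni_md.h (with an Oracle JDK 7 that is usually the include/darwin subdirectory), and the AWT/JVM library variables may also be needed. The sketch below is a small helper, assuming Python 3 and an Oracle JDK laid out as in the locate output above; it only prints suggested cmake -D flags, which should be verified against the actual install before use.

        # suggest_jni_flags.py - locate jni.h / jni_md.h under a JDK and print cmake -D flags (sketch)
        import os

        JDK_HOME = "/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home"  # from the locate output

        def find(name, root):
            for dirpath, _dirs, files in os.walk(root):
                if name in files:
                    return dirpath
            return None

        jni_dir = find("jni.h", os.path.join(JDK_HOME, "include"))
        jni_md_dir = find("jni_md.h", os.path.join(JDK_HOME, "include"))

        if jni_dir and jni_md_dir:
            print("cmake suggestions:")
            print(f"  -DJAVA_INCLUDE_PATH={jni_dir}")
            print(f"  -DJAVA_INCLUDE_PATH2={jni_md_dir}")
            print(f"  -DJAVA_AWT_INCLUDE_PATH={jni_dir}")
        else:
            print("jni.h or jni_md.h not found under", JDK_HOME)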

    Read the article

  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am new to Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and set it up as described in the documentation on the NVIDIA site. There was already an SD card with L4T running, and I don't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to /boot, and it starts booting from the existing SD card. While booting, some errors occurred for some Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth with the following command:

        sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec

    Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard do not work. But they are enumerating: the lsusb command shows the USB mouse and keyboard. The installed file system is Ubuntu 13.04 and the Linux kernel version is 3.1. What should I do? Please help. Thanks in advance.
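
    When lsusb lists the devices but they do nothing, the USB core is enumerating them while no HID input driver is claiming them, which with a custom L4T kernel usually points at usbhid/evdev support missing from the kernel configuration. A quick check, as a minimal sketch assuming Python is present on the board and /proc and /sys are mounted as usual, is to see whether any input devices were registered at all and whether the usbhid driver exists:

        # check_hid.py - do any keyboard/mouse input devices exist, and is usbhid available? (sketch)
        import os

        # 1. Input devices registered with the kernel
        try:
            with open("/proc/bus/input/devices") as f:
                data = f.read()
            print("registered input devices:")
            print(data if data.strip() else "  (none - no input driver has bound to the USB devices)")
        except OSError as e:
            print("could not read /proc/bus/input/devices:", e)

        # 2. Is the usbhid driver registered (built-in or loaded drivers appear under /sys/bus/usb/drivers)?
        print("usbhid driver registered:", os.path.isdir("/sys/bus/usb/drivers/usbhid"))

    If usbhid is missing, the kernel you copied in was probably built without CONFIG_USB_HID / input event support, and the fix is in the kernel config rather than in user space.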

    Read the article

  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by user36428
    I'm running Ubuntu 9.10 LAMP and trying to do a simple email test with PHP, but no emails are being sent:

        mail("[email protected]", "eric-linux test", "test") or die("can't send mail");

    I get no errors from PHP when running that script. In my php.ini file is:

        sendmail_path = /usr/lib/sendmail -t -i

        $ sudo ps aux | grep sendmail
        eric 2486 0.0 0.4 8368 2344 pts/0 T 14:52 0:00 sendmail -s “Hello world” [email protected]
        eric 8747 0.0 0.3 5692 1616 pts/2 T 16:18 0:00 sendmail
        eric 8749 0.0 0.3 5692 1636 pts/2 T 16:18 0:00 sendmail start
        eric 9190 0.0 0.3 5692 1636 pts/2 T 19:12 0:00 sendmail start
        eric 9192 0.0 0.3 5692 1616 pts/2 T 19:12 0:00 sendmail
        eric 9425 0.0 0.3 5692 1620 pts/1 T 19:37 0:00 sendmail
        eric 9427 0.0 0.3 6584 1636 pts/1 T 19:37 0:00 sendmail restart
        eric 9429 0.0 0.3 5692 1636 pts/1 T 19:38 0:00 /usr/lib/sendmail restart
        eric 9432 0.0 0.1 3040 804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail

    When I run sendmail start it just hangs there doing nothing. I installed postfix as well to see if it would help, but it didn't. I tried to check port 25:

        eric@eric-linux:~$ telnet localhost 25
        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 eric-linux ESMTP Postfix (Ubuntu)

    thanks
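
    Since Postfix answers on port 25, a useful isolation step is to hand a message to the MTA directly and then watch /var/log/mail.log: if that delivery works, the problem sits between PHP and the sendmail binary (for example the sendmail_path setting) rather than in the mail system itself. The sketch below assumes Python is installed; the addresses are placeholders to replace before running.

        # smtp_test.py - bypass PHP and hand a message straight to the local MTA (sketch)
        import smtplib
        from email.mime.text import MIMEText

        # Placeholder addresses - substitute real ones before running.
        sender = "eric@localhost"
        recipient = "you@example.com"

        msg = MIMEText("test message sent directly through the local Postfix instance")
        msg["Subject"] = "eric-linux SMTP test"
        msg["From"] = sender
        msg["To"] = recipient

        server = smtplib.SMTP("localhost", 25)
        server.set_debuglevel(1)                 # print the SMTP conversation
        server.sendmail(sender, [recipient], msg.as_string())
        server.quit()
        print("message handed to the MTA - check /var/log/mail.log for delivery status")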

    Read the article

  • Issue running 32-bit executable on 64-bit Windows

    - by David Murdoch
    I'm using wkhtmltopdf to convert HTML web pages to PDFs. This works perfectly on my 32-bit dev server [unfortunately, I can't ship my machine :-p ]. However, when I deploy to the web application's 64-bit server, the following errors are displayed (running from cmd.exe):

        C:\>wkhtmltopdf http://www.google.com google.pdf
        Loading pages (1/5)
        QFontEngine::loadEngine: GetTextMetrics failed ()   ] 10%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngine::loadEngine: GetTextMetrics failed ()   ] 36%
        QFontEngineWin: GetTextMetrics failed ()
        QFontEngineWin: GetTextMetrics failed ()
        // ...etc....

    The PDF is created and saved... just WITHOUT text. All form fields, images, borders, tables, divs, spans, paragraphs, etc. are rendered accurately... just void of any text at all.

    Server information:

        Windows edition: Windows Server Standard Service Pack 2
        Processor: Intel Xeon E5410 @ 2.33 GHz
        Memory: 8.00 GB
        System type: 64-bit Operating System

    Can anyone give me a clue as to what is happening and how I can fix this? Also, I wasn't sure what to tag/title this question with... so if you can think of better tags/title, comment them or edit the question. :-)

    Read the article

  • CD-ROM Can't Be Accessed After Installing VMware Tools on VMware Server 2.0.2

    - by Optimal Solutions
    Using VMware Server 2.0.2, I set up a new VM (Windows XP Pro), applied all of the updates, added Windows add-ons from the install CD, etc. I got it to a stable point, and up through that point I was able to access the CD-ROM drive (E: on my host). What I had never done before was install VMware Tools, and since it claims to give better mouse and video support, I gave it a shot. What it does is place the install package in a virtual CD-ROM drive. I ran the install with no errors, and it wanted a reboot. I log back in after the reboot, pop in the install CD for Microsoft Office 2003, and I receive the message "Please Insert A Disc Into Drive D:". Drive D: would be the next logical drive after the C: drive where I chose to install the OS. The message box sits there, and if I click "Cancel" to return to Windows Explorer, the status bar seems to blink every half second, as if it is polling for a CD-ROM drive or something. There are no bangs or exclamations in Device Manager for any hardware. I had taken a snapshot prior to the VMware Tools install, and upon restoring it, the CD-ROM is back. I made copies of two other VMs and installed VMware Tools on them; both experienced the same issue: Windows 2003 Server and Windows 7 (32-bit). Has anyone seen this issue and know of a fix? It would be nice to have the better graphics and better mouse control AND use my CD-ROM drive as well! Thank you.

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would try a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded in the VM from the /vz/XXX.conf template model. I've been testing with a few posts I found, but I was stuck with this error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the module dependencies on the virtual machine with depmod -a, but that returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and rerunning depmod -a. I no longer get the dependency error, but now get this, and I don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that the CentOS template has no issues with this, so I understand it has to do with the VM config. Any help would be greatly appreciated.
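
    Inside an OpenVZ container you cannot load kernel modules yourself; the hardware node has to load them and grant them to the container (for example via the IPTABLES= line in the CT config, applied with a container restart). A quick way to see what you actually have, as a minimal sketch assuming Python is installed in the container, is to compare the netfilter tables the kernel exposes to the container against the tables your saved ruleset references (the error above points at the 'raw' table, which is often not granted by default):

        # check_tables.py - which iptables tables does this container actually have? (sketch)
        import re

        RULES_FILE = "/etc/iptables.rules"   # path to the saved ruleset is an assumption - adjust

        # Tables visible inside the container (the file appears once ip_tables & friends are granted)
        try:
            with open("/proc/net/ip_tables_names") as f:
                available = {line.strip() for line in f if line.strip()}
        except OSError:
            available = set()
        print("tables available in this container:", sorted(available) or "none")

        # Tables referenced by the ruleset (iptables-save format uses lines like '*filter')
        try:
            with open(RULES_FILE) as f:
                needed = set(re.findall(r"^\*(\w+)", f.read(), re.MULTILINE))
            print("tables referenced by the ruleset :", sorted(needed))
            print("missing (ask the host admin to grant these):", sorted(needed - available))
        except OSError as e:
            print("could not read ruleset:", e)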

    Read the article

  • running automated fsck on remote server

    - by GriffinHeart
    I had another question about df, and now I've come to the conclusion that I need to run fsck on my partition. I've been reading about it and would like some advice, if possible. The situation is this: I have no physical access to the server and I want to run fsck. From what I've read, I just need to touch /forcefsck and fsck will run on the next reboot. My questions are: with what arguments will fsck run? Will it need user input to correct errors, etc.? And will it save a log of what happened afterwards? If it ran like this it would be perfect; is there any way of enforcing that on reboot?

        fsck -v -p /machine/disk/p1 2>&1 > fscklog.txt

    Also, here they describe this:

        it's also a good idea on debian and debian-derivatives like ubuntu to edit /etc/default/rcS on remote servers and set "FSCKFIX=yes";
        that adds "-y" to the boot time fsck, so it doesn't risk the remote server being stuck waiting for someone to login at the console and run fsck.

    But on CentOS that doesn't seem to exist. I only have SSH access at the moment, so that is why I'm being so picky about it. Here's some info about disks and mounted volumes on the server: http://pastebin.centos.org/33314 Thanks.
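
    On RHEL/CentOS the boot-time check is driven by rc.sysinit rather than /etc/default/rcS: touching /forcefsck forces a check on the next boot, and on the versions I have seen a /fsckoptions file can carry extra options such as -y so the check never waits for input. The sketch below, assuming Python is available and run as root, just prepares both flag files before the reboot; treat the /fsckoptions behaviour as something to verify against your own /etc/rc.d/rc.sysinit first.

        # prepare_forcefsck.py - schedule a non-interactive fsck for the next reboot (sketch, run as root)
        import os

        # Force a filesystem check on the next boot
        with open("/forcefsck", "w"):
            pass

        # Ask for automatic repairs; rc.sysinit on RHEL/CentOS reads extra fsck options from /fsckoptions
        # (check your /etc/rc.d/rc.sysinit before relying on this)
        with open("/fsckoptions", "w") as f:
            f.write("-y\n")

        print("created:", [p for p in ("/forcefsck", "/fsckoptions") if os.path.exists(p)])
        print("reboot when ready; boot-time fsck output usually ends up on the console and, if enabled, in /var/log/boot.log")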

    Read the article

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and it asked me to run apt-get -f install to fix it, which gave this:

        (Reading database ... 77846 files and directories currently installed.)
        Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ...
        Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is disabled in the running kernel. Please upgrade your kernel before or while upgrading udev.
        AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the /etc/udev/kernel-upgrade file.
        There is always a safer way to upgrade, do not try this unless you understand what you are doing!
        dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 1
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty).
        insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty).
        Errors were encountered while processing:
         /var/cache/apt/archives/udev_151-3_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!

    Read the article

  • RHEL 5.3 Kickstart - How to specify the location of an individual package in the Workstation folder?

    - by Ed
    I keep getting "package does not exist" errors during the install. I made a kickstart ISO to create an unattended install of a RHEL 5.3 build machine for C++ software releases. It pulls the kickstart config file from our internal web server. This is handy; it makes it easy to test and modify without having to make a new ISO. And I plan to check it in to version control if I can get it working. Anyway, the rpm packages are located in two folders on the disk; Client and Workstation. The packages install fine for the ones that are physically located under the Client folder. It cannot find those under the Workstation folder such as as doxygen and subversion complaining that packages do not exist. Is there a way to specify the individual package location? # ----------------------------------------------------------------------------- # P A C K A G E S # ----------------------------------------------------------------------------- %packages @gnome-desktop @core @base @base-x @printing @development-tools emacs kexec-tools fipscheck xorg-x11-server-Xnest xorg-x11-server-Xvfb #Packages Located in Workstation Folder *** Install can not find any of these ?? bison doxygen gcc-c++ subversion zlib-devel freetype-devel libxml2-devel Thanks in advance, -Ed

    Read the article

  • Filtered Router Interface

    - by jviotti
    I'm having some problems with a Scientific-Atlanta DPR2320R2, specifically with the Wi-Fi. A few months ago I changed its password and username, and now I can't remember them. So I tried cracking it with Hydra, but that made things worse: the web admin content rendered only partially and threw a lot of errors. I then reset the router and found I was able to browse the web with an ethernet-connected PC. Wi-Fi is configured by registering each device's MAC address, and since the router was reset, the registered MAC addresses were lost. No device can connect to Wi-Fi; in fact, the devices do not even recognize the network. I tried pointing to 192.168.0.1 to re-establish the MACs, but I couldn't connect to the router's access point. I tried listing the hosts:

        $ nmap -sP 192.168.0.0/24
        Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:18 ART
        Host 192.168.0.1 is up (0.0018s latency).
        Host 192.168.0.11 is up (0.00025s latency).
        Nmap done: 256 IP addresses (2 hosts up) scanned in 59.62 seconds

    Then I checked that 192.168.0.1 was really up by sending pings; it responded to all of them. I quick-scanned the access point:

        $ nmap 192.168.0.1
        Starting Nmap 5.00 ( http://nmap.org ) at 2012-12-11 01:08 ART
        Interesting ports on 192.168.0.1:
        Not shown: 999 closed ports
        PORT   STATE    SERVICE
        80/tcp filtered http
        Nmap done: 1 IP address (1 host up) scanned in 6.73 seconds

    Note the state of port 80: FILTERED. I'm pretty confused now. Any suggestion would be appreciated. Thanks in advance.

    Read the article

  • Blender refuses to start

    - by Sekhemty
    I'm trying to run Blender under Linux, but whenever I try to start it I get errors. I'm using Kubuntu 12.04 with KDE 4.11.1. This is my video card:

        ~$ lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RV610/M74 [Mobility Radeon HD 2400 XT]

    I used to have the proprietary fglrx Catalyst drivers installed, but lately they gave me some system-wide problems and I had to revert to the open-source Mesa drivers (I don't think these details are important, but just in case, the whole story is here). With the fglrx drivers Blender ran fine, but now, whenever I try to start it, I get this error message (parts of it are in Italian; the last line means "segmentation fault (core dumped)"):

        ~$ blender
        connect failed: No such file or directory
        Writing: /tmp/blender.crash.txt
        Errore di segmentazione (core dump creato)

    The content of /tmp/blender.crash.txt is as follows:

        # Blender 2.68 (sub 5), Revision: 60150
        # backtrace
        /usr/lib/blender/blender() [0x877a41f]
        [0xb7756400]
        /usr/lib/i386-linux-gnu/libLLVM-3.0.so.1(_ZN4llvm3ARM8SPRClassC1Ev+0x15) [0xa8f4a9d5]
        /usr/lib/i386-linux-gnu/libLLVM-3.0.so.1(+0x25ca48) [0xa8eefa48]
        /lib/ld-linux.so.2(+0xeeab) [0xb7765eab]
        /lib/ld-linux.so.2(+0xef94) [0xb7765f94]
        /lib/ld-linux.so.2(+0x12fa6) [0xb7769fa6]
        /lib/ld-linux.so.2(+0xeccf) [0xb7765ccf]
        /lib/ld-linux.so.2(+0x127f4) [0xb77697f4]
        /lib/i386-linux-gnu/libdl.so.2(+0xbe9) [0xb4ff9be9]
        /lib/ld-linux.so.2(+0xeccf) [0xb7765ccf]
        /lib/i386-linux-gnu/libdl.so.2(+0x133a) [0xb4ffa33a]
        /lib/i386-linux-gnu/libdl.so.2(dlopen+0x47) [0xb4ff9c97]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x3cbf0) [0xb7717bf0]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x4079d) [0xb771b79d]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(+0x1a3aa) [0xb76f53aa]
        /usr/lib/i386-linux-gnu/mesa/libGL.so.1(glXQueryVersion+0x2e) [0xb76f0cee]
        /usr/lib/blender/blender(_ZN15GHOST_WindowX11C1EP15GHOST_SystemX11P9_XDisplayRK10STR_Stringiijj18GHOST_TWindowStatei25GHOST_TDrawingContextTypebbt+0x11c) [0x8f54aec]
        /usr/lib/blender/blender(_ZN15GHOST_SystemX1112createWindowERK10STR_Stringiijj18GHOST_TWindowState25GHOST_TDrawingContextTypebbti+0xd7) [0x8f4f4a7]
        /usr/lib/blender/blender(GHOST_CreateWindow+0xb6) [0x8f4cf86]
        /usr/lib/blender/blender(wm_window_add_ghostwindows+0x205) [0x8799be5]
        /usr/lib/blender/blender(WM_check+0x50) [0x877b670]
        /usr/lib/blender/blender(wm_homefile_read+0x111) [0x87859f1]
        /usr/lib/blender/blender(WM_init+0xd2) [0x8787872]
        /usr/lib/blender/blender(main+0xe6e) [0x873848e]
        /lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0xb4e694d3]
        /usr/lib/blender/blender() [0x8778a99]

    The only thing I can guess from this report is that the Mesa drivers are somewhat involved, as I already suspected, but I don't have a clue what I need to do to try to solve the issue.

    Read the article

  • Windows 7 64-bit Visual Studio 2008 libtiff build nmake error

    - by user1244539
    I am trying to build tiff 4.0.2 on my Windows 7 x64 system with Visual Studio 2008, but it shows errors like these:

        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2347) : error C2061: syntax error : identifier 'QINT'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2362) : error C2059: syntax error : '}'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2397) : error C2061: syntax error : identifier 'JOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2397) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2398) : error C2061: syntax error : identifier 'PJOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2398) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2399) : error C2061: syntax error : identifier 'NPJOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2399) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2400) : error C2061: syntax error : identifier 'LPJOYCAPS'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2400) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2146: syntax error : missing ')' before identifier 'pjc'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2081: 'LPJOYCAPSA' : name in formal parameter list illegal
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2061: syntax error : identifier 'pjc'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ';'
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ','
        C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\mmsystem.h(2534) : error C2059: syntax error : ')'
        NMAKE: fatal error u1077: "c:\program files(x86)\microsoft visual studio 9.0\vc\bin\cl.exe": return code '0x2'

    This is what I was doing:

        1. Extracted tiff 4.0.2.
        2. In the VS 2008 command prompt on 64-bit Windows 7, set the environment for x86 by running vcvars32.bat.
        3. Changed to the tiff 4.0.2/libtiff folder.
        4. Ran nmake /f makefile.vc to create a static library of libtiff.

    Following these steps on Windows XP generates the .lib file, but on Windows 7 it fails. This is the first time I'm making any .lib files.

    Read the article

  • Compile PHP 5.3.2 with intl extension on Snow Leopard 10.6.3

    - by fsb
    Does anyone have some tips on compiling PHP's intl extension on Snow Leopard? I'm getting compile errors every way I try it, I've been googling for ages, and I'm getting nowhere. Any help is greatly appreciated. When make gets to the huge gcc command to compile libphp5.bundle, I get the following error:

        Undefined symbols:
          "___gxx_personality_v0", referenced from:
              icu_4_2::MessageFormatAdapter::getArgTypeList(icu_4_2::MessageFormat const&, int&) in msgformat_helpers.o
              _umsg_parse_helper in msgformat_helpers.o
              _umsg_format_arg_count in msgformat_helpers.o
              _umsg_format_helper in msgformat_helpers.o
              CIE in msgformat_helpers.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        make: *** [libs/libphp5.bundle] Error 1

    My compile commands are:

        MACOSX_DEPLOYMENT_TARGET=10.6
        CFLAGS="-arch x86_64 -g -Os -pipe -no-cpp-precomp"
        CCFLAGS="-arch x86_64 -g -Os -pipe"
        CXXFLAGS="-arch x86_64 -g -Os -pipe"
        LDFLAGS="-arch x86_64 -bind_at_load"
        export CFLAGS CXXFLAGS LDFLAGS CCFLAGS MACOSX_DEPLOYMENT_TARGET
        ./configure --prefix=/usr \
          --mandir=/usr/share/man \
          --infodir=/usr/share/info \
          --sysconfdir=/private/etc \
          --with-apxs2=/usr/sbin/apxs \
          --enable-cli \
          --with-config-file-path=/etc \
          --with-libxml-dir=/usr \
          --with-openssl=/usr \
          --with-zlib=/usr \
          --with-bz2=/usr \
          --with-curl=/usr \
          --with-gd \
          --with-jpeg-dir=/src/jpeg/jpeg-local \
          --with-png-dir=/usr/X11R6 \
          --with-freetype-dir=/usr/X11R6 \
          --with-xpm-dir=/usr/X11R6 \
          --with-ldap=/usr \
          --with-ldap-sasl=/usr \
          --enable-mbstring \
          --enable-mbregex \
          --with-mysql=mysqlnd \
          --with-mysqli=mysqlnd \
          --with-pdo-mysql=mysqlnd \
          --with-mysql-sock=/var/mysql/mysql.sock \
          --with-iodbc=/usr \
          --enable-shmop \
          --with-snmp=/usr \
          --enable-soap \
          --enable-sockets \
          --enable-sysvmsg \
          --enable-sysvsem \
          --enable-sysvshm \
          --with-xmlrpc \
          --with-iconv-dir=/usr \
          --with-xsl=/usr \
          --with-pcre-regex=/src/pcre/pcre-local/usr/local \
          --with-pcre-dir=/src/pcre/pcre-local/usr/local \
          --with-icu-dir=/usr/local \
          --enable-intl
        export EXTRA_CFLAGS="-lresolv"
        make

    Read the article

  • Error when attempting to do a differential or incremental backup of Exchange using ntbackup

    - by voon
    Hi folks, we're running Small Business Server 2003 here. I was reviewing our backup procedures lately and noticed in the ntbackup logs that the differential backups of Exchange were failing with the error:

        (SERVERNAME)\Microsoft Information Store\First Storage Group is not a valid drive, or you do not have access.

    A quick Google search found this MS KB article: http://support.microsoft.com/kb/555613. However, neither of the suggested fixes applies to our problem. The first solution is to make sure the backup media is formatted and has adequate space; our backup target is a 1 TB external hard drive with about 600 GB of free space (a full backup of our Exchange DB is currently around 5 GB). The second suggested fix is to "perform a full backup before trying to do incremental", and again, that can't be it because we are doing full backups twice a week. There are no errors in the application log, just entries for ntbackup starting and ending. I've also tested doing a differential and an incremental backup onto the server's internal drive, which unsurprisingly still did not work. I could get around this problem by always doing a full backup of Exchange, but I like the idea of being space-efficient with differential backups. Anyone got any ideas?

    Read the article

  • Amazon EC2 instance was not available for a few minutes (Amazon showed that everything was OK)

    - by Salvador Dali
    A few minutes ago my Amazon EC2 instance was unavailable for a few minutes. During this time I was not able to connect to the web site over HTTP, nor was I able to SSH to the instance. I was also unable to connect to the Amazon management console for some of that time (less than the period my instance was unavailable). When I was able to connect to the management console, it showed that everything was running smoothly (but I still could not connect to the instance in any way for a minute or two). During this time I checked their status page, and there were no reported issues (my instance is in Ireland and there is nothing wrong there today). After that I was able to log in. I checked my logins with last and saw that no one except me had logged in. I also looked in the Apache logs, and there were no errors or warnings during this time. Right now the Amazon monitoring shows a small spike in CPU in the last 15 minutes (but only from about 10% to 20%). I have no idea what it could be (I have never experienced anything like this before), and therefore I have no idea how worried I should be or what else to look for. Can anyone give me a hint about what my actions should be in such a situation?

    Read the article

  • Apache2 random 403 error & info server busy logs on Ubuntu

    - by risyasin
    Hello, I have a strange situation with Apache2: seemingly meaningless, random 403 errors. Any page (HTML, PHP, etc.) normally works, but if I request it repeatedly by pressing the browser's refresh button, it occasionally responds with a 403. After a few seconds it works again. In the error log I see "client denied by server configuration". The main Apache error log says:

        [info] server seems busy, (you may need to increase StartServers, or Min/MaxSpareServers), spawning 8 children, there are 99 idle, and 137 total children

    My current values are:

        <IfModule mpm_prefork_module>
            StartServers 120
            MinSpareServers 100
            MaxSpareServers 200
            MaxClients 256
            MaxRequestsPerChild 500
        </IfModule>

    I've increased the values 10 by 10, starting from 20, but nothing solved it. I've also disabled KeepAlive. What may be causing this problem? Thank you in advance.

    This is a fresh install: Ubuntu Server x86 8.04.4 with Virtualmin installed from its website (not from the Debian repositories). Linux 2.6.24-27-server #1 SMP i686, Apache 2.2.8 with the prefork MPM, Virtualmin version 3.78.gpl GPL, PHP Version 5.2.4-2ubuntu5.10.

    Loaded modules:

        core_module (static), log_config_module (static), logio_module (static), mpm_prefork_module (static),
        http_module (static), so_module (static), actions_module (shared), alias_module (shared),
        auth_basic_module (shared), auth_digest_module (shared), authn_file_module (shared),
        authz_default_module (shared), authz_groupfile_module (shared), authz_host_module (shared),
        authz_user_module (shared), autoindex_module (shared), cache_module (shared), cgi_module (shared),
        deflate_module (shared), dir_module (shared), env_module (shared), expires_module (shared),
        fcgid_module (shared), file_cache_module (shared), headers_module (shared), mime_module (shared),
        mime_magic_module (shared), evasive20_module (shared), negotiation_module (shared), php5_module (shared),
        rewrite_module (shared), setenvif_module (shared), ssl_module (shared), status_module (shared)

        Syntax OK

    Read the article

  • GlusterFS is failing to mount on boot

    - by J. Pablo Fernández
    I'm running the official GlusterFS 3.5 packages on Ubuntu 12.04 and everything seems to be working fine, except mounting the GlusterFS volumes at boot time. This is what I see in the log files:

        [2014-06-13 08:52:28.139382] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.5.0 (/usr/sbin/glusterfs --volfile-server=koraga --volfile-id=/private_uploads /var/www/shared/private/uploads)
        [2014-06-13 08:52:28.147186] I [socket.c:3561:socket_init] 0-glusterfs: SSL support is NOT enabled
        [2014-06-13 08:52:28.147237] I [socket.c:3576:socket_init] 0-glusterfs: using system polling thread
        [2014-06-13 08:52:28.148183] E [socket.c:2161:socket_connect_finish] 0-glusterfs: connection to 176.58.113.205:24007 failed (Connection refused)
        [2014-06-13 08:52:28.148236] E [glusterfsd-mgmt.c:1601:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: koraga (No data available)
        [2014-06-13 08:52:28.148251] I [glusterfsd-mgmt.c:1607:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
        [2014-06-13 08:52:28.148477] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x27) [0x7fe077f8e0f7] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0x1a4) [0x7fe077f91cc4] (-->/usr/sbin/glusterfs(+0xcada) [0x7fe078655ada]))) 0-: received signum (1), shutting down
        [2014-06-13 08:52:28.148513] I [fuse-bridge.c:5444:fini] 0-fuse: Unmounting '/var/www/shared/private/uploads'.

    My fstab contains:

        proc                     /proc                            proc       defaults                   0 0
        /dev/xvda                /                                ext4       noatime,errors=remount-ro  0 1
        /dev/xvdb                none                             swap       sw                         0 0
        /dev/xvdc                /var/lib/glusterfs/brick01       ext4       defaults                   1 2
        koraga:/private_uploads  /var/www/shared/private/uploads  glusterfs  defaults,_netdev           0 0

    Any ideas what's going on and/or how to fix it?
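
    The "Connection refused" against port 24007 means the boot-time mount runs before glusterd on koraga is accepting connections, a common race when the volume is served from the same host that mounts it. One workaround is to retry the mount once the management port answers. The sketch below assumes Python is installed and that the fstab entry above is in place, so a bare mount of the mountpoint is enough; it could be run from rc.local or an equivalent late boot task.

        # mount_when_ready.py - retry the GlusterFS mount once glusterd answers on 24007 (sketch)
        import socket, subprocess, sys, time

        SERVER, PORT = "koraga", 24007
        MOUNTPOINT = "/var/www/shared/private/uploads"   # matches the fstab entry above

        deadline = time.time() + 120                      # give up after two minutes
        while time.time() < deadline:
            try:
                with socket.create_connection((SERVER, PORT), timeout=3):
                    break                                 # glusterd is accepting connections
            except OSError:
                time.sleep(2)
        else:
            sys.exit("glusterd on %s:%d never became reachable" % (SERVER, PORT))

        # fstab already describes the mount, so mounting by mountpoint is sufficient
        sys.exit(subprocess.call(["mount", MOUNTPOINT]))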

    Read the article

  • Group Policy installation failed error 1274

    - by David Thomas Garcia
    I'm trying to deploy an MSI via Group Policy in Active Directory, but these are the errors I'm getting in the System event log after logging in:

        The assignment of application XStandard from policy install failed. The error was : %%1274
        The removal of the assignment of application XStandard from policy install failed. The error was : %%2
        Failed to apply changes to software installation settings. The installation of software deployed through Group Policy for this user has been delayed until the next logon because the changes must be applied before the user logon. The error was : %%1274
        The Group Policy Client Side Extension Software Installation was unable to apply one or more settings because the changes must be processed before system startup or user logon. The system will wait for Group Policy processing to finish completely before the next startup or logon for this user, and this may result in slow startup and boot performance.

    When I reboot and log in again, I simply get the same messages about needing to perform the update before the next logon. I'm on a Windows Vista 32-bit laptop. I'm rather new to deploying via Group Policy, so what other information would be helpful in determining the issue? I tried a different MSI with the same results. I'm able to install the MSI using the command line and msiexec when logged into the computer, so I know the MSI itself is OK at least.

    Read the article

  • Perl EPIC Not recognising installed CPAN modules

    - by Recc
    Eclipse on a Mac was working fine when adding new modules, until I installed Text::CSV_XS, which Eclipse doesn't recognise as added to @INC. For instance:

        use strict;
        use SOAP::Transport::HTTP;
        SOAP::Transport::HTTP::CGI->dispatch_to('C2FService')->handle;

        BEGIN {
            package C2FService;
            use vars qw(@ISA);
            @ISA = qw(Exporter SOAP::Server::Parameters);
            use SOAP::Lite;

            sub c2f {
                my $self     = shift;
                my $envelope = pop;
                my $temp     = $envelope->dataof("//c2f/temperature");
                return SOAP::Data->name(
                    'convertedTemp' => ( ( ( 9 / 5 ) * ( $temp->value ) ) + 32 ) );
            }
        }

    The line "use SOAP::Transport::HTTP;" is marked as an error; if I comment it out, "use SOAP::Lite;" is in turn marked as an error ("not found", etc., the usual when a module is not installed). Both are installed with CPAN, and:

        $ perl -c soap-test.pl
        post-code-check.pl syntax OK

    Perl itself is fine, the CPAN tests all pass, and the code works; only EPIC lags behind.

        $ pwd && ls /opt/local/lib/perl5/site_perl/5.12.4/SOAP
        Client.pod        Lite              Server.pod
        Constants.pm      Lite.pm           Test.pm
        Data.pod          Packager.pm       Trace.pod
        Deserializer.pod  SOM.pod           Transport
        Fault.pod         Schema.pod        Transport.pod
        Header.pod        Serializer.pod    Utils.pod

    And if I have use errors at the start of my files, the rest of the source is not error-checked.

    Read the article
