Search Results

Search found 10755 results on 431 pages for 'cluster shared volume'.


  • 64-bit linux kernel only seeing 3 of 4GB after upgrade...

    - by Blaine
    Hey everyone. I am running Ubuntu 9.04 64-bit on my MacBook. I had 2GB of RAM before, and everything ran great. I just upgraded to 2x2GB (4GB), but my system only sees 3GB of it. OS X, which I am dual-booting, sees all 4GB. Also, my video performance is now badly degraded: before the upgrade my Compiz benchmark sat at a full 80fps, and now it is at 22fps with very choppy window dragging. Has anyone ever heard of this on a 64-bit kernel? I just don't quite understand what the issue could be.

        $ uname -a
        Linux macbook 2.6.28-15-generic #49-Ubuntu SMP Tue Aug 18 19:25:34 UTC 2009 x86_64 GNU/Linux

        $ free -m
                     total       used       free     shared    buffers     cached
        Mem:          2953       1031       1921          0        114        427
        -/+ buffers/cache:        489       2463
        Swap:         7812          0       7812

        $ lsmod
        Module                  Size  Used by
        i915                   77960  2
        drm                   123232  3 i915
        binfmt_misc            18572  1
        ppdev                  16904  0
        btusb                  21784  2
        bridge                 63776  0
        stp                    11140  1 bridge
        bnep                   22912  2
        vboxnetadp            109356  0
        vboxnetflt            116972  0
        vboxdrv              1721612  1 vboxnetflt
        uvcvideo               69640  0
        compat_ioctl32         18304  1 uvcvideo
        videodev               45184  2 uvcvideo,compat_ioctl32
        v4l1_compat            23940  2 uvcvideo,videodev
        lp                     19588  0
        parport                49584  2 ppdev,lp
        snd_hda_intel         557492  3
        snd_pcm_oss            52352  0
        snd_mixer_oss          24960  1 snd_pcm_oss
        snd_pcm                99464  2 snd_hda_intel,snd_pcm_oss
        arc4                   10240  2
        snd_seq_dummy          11524  0
        ecb                    11392  2
        snd_seq_oss            41984  0
        snd_seq_midi           15744  0
        snd_rawmidi            33920  1 snd_seq_midi
        snd_seq_midi_event     16512  2 snd_seq_oss,snd_seq_midi
        snd_seq                66272  6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
        ath9k                 310584  0
        snd_timer              34064  2 snd_pcm,snd_seq
        snd_seq_device         16276  5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
        mac80211              251528  1 ath9k
        iTCO_wdt               21712  0
        iTCO_vendor_support    12420  1 iTCO_wdt
        joydev                 20992  0
        video                  29204  0
        snd                    78920 15 snd_hda_intel,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        applesmc               37700  0
        output                 11648  1 video
        soundcore              16800  1 snd
        pcspkr                 11136  0
        cfg80211               43680  1 mac80211
        appletouch             19972  0
        isight_firmware        11520  0
        input_polldev          12688  1 applesmc
        intel_agp              39408  1
        snd_page_alloc         18704  2 snd_hda_intel,snd_pcm
        led_class              13064  2 ath9k,applesmc
        hid_apple              15872  0
        usbhid                 47040  0
        ohci1394               42164  0
        ieee1394              108288  1 ohci1394
        sky2                   63364  0
        fbcon                  49792  0
        tileblit               11264  1 fbcon
        font                   17024  1 fbcon
        bitblit                14464  1 fbcon
        softcursor             10368  1 bitblit

    Some information from dmesg:

        [  795.820163] ACPI: EC: GPE storm detected, transactions will use polling mode
        [ 1762.709516] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 1763.078130] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 2362.760889] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 2416.352084] ACPI: EC: missing confirmations, switch off interrupt mode.
        [ 3718.721095] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 3719.108914] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 4318.773266] [drm:i915_getparam] *ERROR* Unknown parameter 6
        [ 9513.813066] CE: hpet increasing min_delta_ns to 15000 nsec
        [ 9693.815684] npviewer.bin[6736]
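    For anyone debugging the same symptom: a quick way to tell whether the missing gigabyte is reserved by the firmware/chipset (rather than lost by the kernel) is to compare the BIOS e820 memory map against what the kernel manages. A minimal sketch; nothing in it is specific to this machine:

        # how much RAM the kernel actually manages
        grep MemTotal /proc/meminfo

        # the memory map handed over at boot; "reserved" ranges just
        # below the 4GB mark are typically chipset/PCI address space
        dmesg | grep -i e820

    If a large chunk just under 4GB shows up as reserved, the limit is the chipset's address remapping, not the 64-bit kernel.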


  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

      • Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
      • Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
      • (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
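    For the module-removal workaround, the change in the shared web.config would look something like this (a sketch; "WindowsAuthentication" is the name IIS7 registers for that module in the integrated pipeline):

        <configuration>
          <system.webServer>
            <modules>
              <!-- keep WindowsAuthenticationModule out of the pipeline for these sites -->
              <remove name="WindowsAuthentication" />
            </modules>
          </system.webServer>
        </configuration>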


  • CUPS Web Admin Error 500 Unknown

    - by Floyd Resler
    I keep getting a 500 Unknown error whenever I navigate off the home page of my CUPS web admin. I'm sure I have something misconfigured, but I'm not sure what. Here's my configuration:

        #
        # "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $"
        #
        # Sample configuration file for the CUPS scheduler. See "man cupsd.conf" for a
        # complete description of this file.
        #

        # Log general information in error_log - change "warn" to "debug"
        # for troubleshooting...
        LogLevel warn

        # Administrator user group...
        SystemGroup lpadmin sys root

        # Only listen for connections from the local machine.
        Listen 192.168.6.101:631
        Listen /var/run/cups/cups.sock
        ServerName 192.168.6.101

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS
        BrowseAddress 192.168.6.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to the admin pages...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Restrict access to configuration files...
        AuthType Default
        Require user @SYSTEM
        Order allow,deny
        Allow From All
        Allow From 127.0.0.1

        # Set the default printer/job policies...

        # Job-related operations must be done by the owner or an administrator...
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        # Set the authenticated printer/job policies...

        # Job-related operations must be done by the owner or an administrator...
        AuthType Default
        Order deny,allow

        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        # All administration operations require an administrator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # All printer operations require a printer operator to authenticate...
        AuthType Default
        Require user @SYSTEM
        Order deny,allow

        # Only the owner or an administrator can cancel or authenticate a job...
        AuthType Default
        Require user @OWNER @SYSTEM
        Order deny,allow

        Order deny,allow

        #
        # End of "$Id: cupsd.conf.in 8805 2009-08-31 16:34:06Z mike $".
        #
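    Note that the <Location ...>, <Policy ...>, and <Limit ...> wrappers that normally enclose these directive groups in a stock cupsd.conf do not survive in the paste above (angle-bracket lines are commonly eaten when configs are posted as HTML), so the file's real structure can't be verified here. For reference, a stock admin section looks roughly like this sketch, which is not the poster's actual file:

        # Restrict access to the admin pages...
        <Location /admin>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow From 127.0.0.1
        </Location>

    A 500 on every page past the home page is consistent with those wrappers (or the Allow/Deny combination inside them) being malformed, so comparing against the original file on disk is the first thing to check.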


  • Why does NX Client for Windows silently close after connecting?

    - by pavel
    Hey! I connect remotely to my Ubuntu server from a Vista machine. Now I need to run a GUI application on the server (Wireshark), so I decided to use the FreeNX server/client to view the Ubuntu GUI on Vista. I have successfully installed FreeNX on Ubuntu and the NX Client on Vista; I was following this guide. Unfortunately, I now find myself stuck with the following problem: at the client, the !M logo window appears, but after a few seconds that window just closes, without even showing an error message. Guys, I'm really stuck, please help! Maybe I should have installed some graphical environment on the server? These are the details from the NX client; it seems there are no errors.

        -----------------
        Info: Display running with pid '7768' and handler '0x670d24'.
        NXPROXY - Version 3.4.0

        Copyright (C) 2001, 2007 NoMachine.
        See http://www.nomachine.com/ for more information.

        Info: Proxy running in client mode with pid '2168'.
        Session: Starting session at 'Sat Dec 19 10:58:35 2009'.
        Warning: Connected to remote version 3.3.0 with local version 3.4.0.
        Info: Connection with remote proxy completed.
        Info: Using WAN link parameters 768/24/1/0.
        Info: Using cache parameters 4/4096KB/16384KB/16384KB.
        Info: Using pack method 'adaptive-9' with session 'kde'.
        Info: Using ZLIB data compression 1/1/32.
        Info: Using ZLIB stream compression 1/1.
        Info: No suitable cache file found.
        Info: Forwarding X11 connections to display ':0'.
        Info: Listening to font server connections on port '11000'.
        Session: Session started at 'Sat Dec 19 10:58:35 2009'.
        Info: Established X server connection.
        Info: Using shared memory parameters 0/0K.
        Session: Terminating session at 'Sat Dec 19 10:58:37 2009'.
        Session: Session terminated at 'Sat Dec 19 10:58:37 2009'.
        -----------
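    One thing the log does hint at: the client negotiated a 'kde' session, so if the server has no KDE (or any desktop environment) installed, the session can die right after starting exactly like this. A quick check, sketched for Ubuntu/apt:

        # does the server have a KDE session to launch?
        which startkde

        # if not, install one (or point the NX client at GNOME instead)
        sudo apt-get install kubuntu-desktop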


  • ISC DHCP - Force clients to get a new IP address, instead of being re-issued their previous lease's IP

    - by kce
    We are in the middle of a migration of our DHCP and DNS services from a Debian-based server to a Windows Server 2008 R2 implementation. The Debian server is running isc-dhcpd-V3.1.1. All of the workstations are configured with fixed-addresses between .3 and .40 (the motivation behind that choice is mostly management/political, much like here). DHCP leases are given out in the range of .100 to .175. Statically configured servers live in the .200 block and above (which is mostly empty).

    When we move to the Windows platform, management/political considerations require me to move the IP ranges around again. We would like to keep .1 - .10 reserved for network appliances, switches, and other infrastructure. .200 will remain designated for servers. The addressing space in between should be available to clients, and IPs should be dynamically allocated (Edit: instead of automatic as originally mentioned) by the server. My Address Pool on the Windows Server looks like this:

        192.168.0.1   - 192.168.0.254  (Address range for distribution)
        192.168.0.1   - 192.168.0.10   (IP addresses excluded from distribution)
        192.168.0.200 - 192.168.0.254  (IP addresses excluded from distribution)

    Currently, we have all of our clients still on the .3 - .40 range, and a few machines still active in .100 - .175 (although there are lots of powered-off devices that still have expired leases with IPs from that range). Since the lease "database" isn't shared between the old and new DHCP servers, how can I prevent clients from receiving a lease with an IP address that is currently held by a client with a non-expired lease from the old DHCP server? If I just expand the range on the Debian DHCP server to 192.168.0.10 - 192.168.0.199, is there a way to force clients to not re-use their old IP address when they send their DHCPDISCOVER? Can I make the Windows DHCP server be authoritative like the ISC implementation? The dhcpd.conf from the Debian server:

        ddns-update-style none;
        authoritative;
        default-lease-time 43200; # 12 hours
        max-lease-time 86400;     # 24 hours

        subnet 192.168.0.0 netmask 255.255.255.0 {
            option routers 192.168.0.1;
            option subnet-mask 255.255.255.0;
            option broadcast-address 192.168.0.255;
            range 192.168.0.100 192.168.0.175;
        }

        host workstation-1 {
            hardware ethernet 00:11:22:33:44:55;
            fixed-address 192.168.0.3;
        }

        ... and so on until 192.168.0.40
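    On the overlap problem specifically: the Windows DHCP server has built-in address conflict detection, which makes it ping a candidate address before offering it and mark addresses that answer as bad. That covers the window where the old server's leases are still live. A sketch (the server name is a placeholder; the retry count is a matter of taste):

        netsh dhcp server \\WIN-DHCP set detectconflictretry 2

    It doesn't stop a client from asking for its old address, but it does stop the new server from handing out an address something else is still using.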


  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to, e.g., record the desktop with sound from a soundcard. One thing I wanted to do was record two video inputs at the same time, for instance the desktop and the webcam. I thought about doing something like this:

        avconv \
            -f alsa \
            -i default \
            -acodec flac \
            -f video4linux2 \
            -r 6 \
            -i /dev/video0 \
            -f x11grab \
            -i :0.0 \
            out.mkv

    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens:

        avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov  6 2012 16:51:11 with gcc 4.7.2
        [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
        [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
        Input #0, alsa, from 'default':
          Duration: N/A, start: 1354364317.020350, bitrate: N/A
            Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
        [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
        Input #1, video4linux2, from '/dev/video0':
          Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
            Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
        [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
        [x11grab @ 0x107b2a0] shared memory extension found
        [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
        Input #2, x11grab, from ':0.0+83,87':
          Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
            Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
        Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
        [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
        Output #0, matroska, to '.../out.mkv':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
            Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
        Stream mapping:
          Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
          Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
        Press ctrl-c to stop encoding
        [mpeg4 @ 0x10bd800] rc buffer underflow
        ^Cframe=  160 fps= 15 q=2.0 Lsize=    3414kB time=10.66 bitrate=2623.0kbits/s
        video:3273kB audio:131kB global headers:0kB muxing overhead 0.165600%
        Received signal 2: terminating.

    I'm not sure if it's a question of mapping (some -map options to add?) or whether avconv just can't encode more than one video stream at a time. So is it an actual avconv limitation, a limitation of the available containers, or me simply not finding the right combination of command line options?
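    The "Stream mapping:" section above is the tell: with no explicit mapping, avconv picks a single video stream (it chose the x11grab input, #2:0) and a single audio stream. Explicit -map options are the usual way to keep both video streams; a sketch, not tested against this exact build:

        avconv \
            -f alsa -i default \
            -f video4linux2 -r 6 -i /dev/video0 \
            -f x11grab -i :0.0 \
            -map 0:0 -map 1:0 -map 2:0 \
            -acodec flac \
            out.mkv

    Matroska itself has no problem holding two video tracks; whether a given player will show both is another matter.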


  • HAProxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAProxy (1.4.4) does not distribute evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing gives www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this with a script on each server that cats /etc/HOSTNAME (Slackware). I need it to switch back and forth on each request to test some session stuff (stored in shared memcached), but I'm having trouble getting it to alternate between my two web servers on each request.

        global
            log 127.0.0.1 local0 warning
            maxconn 4096
            chroot /usr/share/haproxy
            pidfile /var/run/haproxy.pid
            uid 99
            gid 99
            daemon

        defaults
            balance roundrobin
            fullconn 100
            maxconn 4096
            mode http
            option dontlognull
            option http-server-close
            option forwardfor
            option redispatch
            retries 3
            timeout connect 5000
            timeout client 20000
            timeout server 60000
            timeout queue 60000
            stats enable
            stats uri /haproxy
            stats auth ***:***

        frontend www *:80
            log global
            acl is_upload hdr_dom(host) -i uploads.site.com
            acl is_api    hdr_dom(host) -i api.site.com
            acl is_dev    hdr_dom(host) -i dev.site.com
            acl is_apidev hdr_dom(host) -i apidev.site.com
            use_backend uploads.site.com if is_upload
            use_backend api.site.com     if is_api
            use_backend dev.site.com     if is_dev !is_apidev
            default_backend site.com

        backend site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend api.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend dev.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

        backend uploads.site.com
            option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
            server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
            server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have several back-ends (I've verified the ACLs are working), with the default "balance roundrobin" selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), removing the ACLs, etc. I've been testing on dev.site.com, BTW. Anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
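    A test-method caveat worth ruling out first: a browser reuses connections and fires parallel requests for page assets, both of which scramble the apparent rotation. A shell loop that opens one fresh connection per request shows what roundrobin is really doing; a sketch, assuming a small test page that prints the serving host (the page name here is made up):

        for i in $(seq 1 10); do
            curl -s http://dev.site.com/whoami.php
        done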


  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

        Machine: ACT-Server
        Domain:  mydomain.example.com
        OS:      Windows 7 Enterprise 64-bit Edition
        Windows Firewall configuration: File and Printer Sharing (SMB-In) is
            enabled for Public, Domain, and Private networks
        ACT Log Share: ACT

    Share permissions*:

        Group/user names    Allow permissions
        ---------------------------------------
        Everyone            Full Control
        Administrator       Full Control
        Domain Admins       Full Control
        Administrators      Full Control
        ANONYMOUS LOGON     Full Control

    Folder permissions*:

        Group/user name     Allow permissions       Apply to
        -------------------------------------------------
        ANONYMOUS LOGON     Read, write & execute   This folder, subfolders, and files
        Domain Admins       Full control            This folder, subfolders, and files
        Everyone            Read, write & execute   This folder, subfolders, and files
        Administrators      Full control            This folder, subfolders, and files
        CREATOR OWNER       Full control            Subfolders and files
        SYSTEM              Full control            This folder, subfolders, and files
        INTERACTIVE         Traverse folder / execute file, List folder / read data,
                            Read attributes, Read extended attributes, Create files /
                            write data, Create folders / append data, Write attributes,
                            Write extended attributes, Delete subfolders and files,
                            Delete, Read permissions
                                                    This folder, subfolders, and files
        SERVICE             (same as INTERACTIVE)
        BATCH               (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course, I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding from reading numerous pieces of documentation that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick). I used this command prompt to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials. The goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate firewall exceptions are in place. Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?
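    For anyone reproducing the SYSTEM-context test, the "at" trick referenced above schedules an interactive shell that runs as SYSTEM (XP-era Windows; pick a time a minute ahead of the clock):

        REM schedule an interactive SYSTEM shell (here at 13:05)
        at 13:05 /interactive cmd.exe

        REM then, from the SYSTEM cmd window that pops up:
        net use * /delete
        net use \\ACT-Server\ACT

    If "net use" from that shell prompts or fails while the same command works for a logged-on domain user, the problem is narrowed to how the machine account / anonymous logon is authenticated rather than to the share ACLs.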


  • Samba share not accessible from Win 7 - tried advice on superuser

    - by Roy Grubb
    I have an old Red Hat Linux box that I use, amongst other things, to run Samba. My Vista and remaining Win XP PCs can access the password-protected Samba shares. I just set up a new Windows 7 64-bit Pro PC. Attempts to access the Samba shares by clicking on the Linux box's icon in 'Network' from this machine gave a "Logon failure: unknown user name or bad password." message when I gave the correct credentials. So I followed the suggestions in Windows 7, connecting to Samba shares (also checked here, but found LmCompatibilityLevel was already 1). This got me a little further: if I click on the Linux box's icon in 'Network' from this machine, I now see icons for the shared directories. But when I click on one of these, I get "\\LX\share is not accessible. You might not have permission..." etc. I tried making the Win 7 password the same as my Samba password (the user name was already the same). Same result. The Linux box does part of what I need for ecommerce - the in-house part; it's not accessible from the Internet. As my Linux fu is weak, I have to avoid changes to the Linux box, so I'm hoping someone can tell me what to do to Win 7 to make it behave like XP and Vista when accessing this share. Help please!? Thanks

    Thanks for replying @Randolph. I had set 'Network security: LAN Manager authentication level' to "Send LM & NTLM - use NTLMv2 session security if negotiated" based on the advice in Windows 7, connecting to Samba shares, and had restarted the machine, but that didn't work for me. I'll try playing with other Network security values. I have now tried the following:

      • Network security: Allow Local System to use computer identity for NTLM: changed from Not Defined to "Enabled". Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.
      • Network security: Restrict NTLM: Add remote server exceptions for NTLM authentication (added LX). Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.

    I can't see any other Network security settings that might affect this. Any other ideas please? Thanks, Roy
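    For reference, the LAN Manager setting discussed above also lives in the registry, which is handy on Windows editions without secpol.msc; a sketch of the .reg form (1 = "Send LM & NTLM - use NTLMv2 session security if negotiated"):

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
        "LmCompatibilityLevel"=dword:00000001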


  • psybnc on nas: ncurses problem

    - by holms
    Trying to compile psybnc on a NAS; ipkg is the default package manager here. I've installed ncurses already; it's in /opt/lib (libncurses.so):

        [\w] # ls /opt/lib | grep ncurses
        libncurses.so
        libncurses.so.5
        libncurses.so.5.7
        libncursesw.so
        libncursesw.so.5
        libncursesw.so.5.7

        [\w] # file libncurses.so.5.7
        libncurses.so.5.7: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked, stripped

    I added this path to /etc/profile:

        [\w] # echo $PATH
        /bin:/sbin:/usr/bin:/usr/sbin:/opt/bin:/opt/sbin:/opt/lib

    So trying make menuconfig gives me this error:

        [\w] # make menuconfig
        Initializing Menu-Configuration
        [*] Running Conversion Tool for older psyBNC Data.
        Using existent configuration File.
        [*] Running Autoconfig.
        System: Linux
        Socket Libs: Internal.
        Environment: Internal.
        Time-Headers: in time.h and sys/time.h
        Byte order: Big Endian.
        IPv6-Support: Yes, general support. But no interface configured.
        async-DNS-Support: Yes.
        SSL-Support: No openssl found. Get openssl at www.openssl.org
        Creating Makefile
        [*] Creating Menu, please wait.
        This needs the ncurses library. If it is not available, menuconf wont work.
        If you are using curses, use make menuconfig-curses instead.
        make: *** [menuconfig] Error 1

    Same goes for make menuconfig-curses:

        [\w] # make menuconfig-curses
        Initializing Menu-Configuration using Curses
        [*] Running Conversion Tool for older psyBNC Data.
        Using existent configuration File.
        [*] Running Autoconfig.
        System: Linux
        Socket Libs: Internal.
        Environment: Internal.
        Time-Headers: in time.h and sys/time.h
        Byte order: Big Endian.
        IPv6-Support: Yes, general support. But no interface configured.
        async-DNS-Support: Yes.
        SSL-Support: No openssl found. Get openssl at www.openssl.org
        Creating Makefile
        [*] Creating Menu, please wait.
        This needs the curses library. If it is not available, menuconf wont work.
        make: *** [menuconfig-curses] Error 1

    psybnc itself compiled OK; I just want to use menuconfig instead of the configuration file.
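    Two details that tend to bite here: $PATH only tells the shell where to find executables, so adding /opt/lib to it does nothing for the compiler or linker, and building menuconf needs the ncurses development headers, not just the runtime .so files. A sketch for an Optware-style setup (the package name, and whether psybnc's hand-rolled autoconfig honours these variables, are assumptions; its Makefile may need the -I/-L paths edited in directly):

        ipkg install ncurses-dev
        export CFLAGS="-I/opt/include"
        export LDFLAGS="-L/opt/lib"
        make menuconfig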


  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one LVM physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, and neither does the kernel log. Running a write-mode badblocks across the swap LV doesn't find problems either. So the disk may be failing, but not in an obvious way.

    At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
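    On the "save an image first" idea: imaging the opened LUKS/LVM devices before experimenting is cheap insurance, since every e2fsck run that fixes things rewrites metadata. A sketch with GNU ddrescue, which keeps a log so it can be resumed (device and target names here are placeholders):

        ddrescue -v /dev/mapper/lithe-root /mnt/external/root.img /mnt/external/root.log

    With an image in hand, fsck experiments (including letting it answer "yes" to everything) can be run against a loopback copy instead of the only original.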


  • How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows' Documents and Settings folder which only contains my original user and, within, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell:

        attrib *.* -r -a -s -h /s

    ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete. If I press Yes (or Yes to All) then I get this error.

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these:

        Deleting an index entry from index $0 of file 25.

    ...followed by:

        Deleting index entry cookies in index $I30 of file 37576.

    ...but I still get the first error dialog above when trying to delete. I ran chkdsk again, this time chkdsk /f /r. It produced no messages. Same result when deleting.

    Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here:

        C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\

    Inside each of those directories were files with names such as:

        2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx

    I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file + dir names were extremely long: original directory = 194 characters, filenames = 100+ characters. Together the length exceeds the 255-char limit, which is bad and would explain the error message I posted in Update #1.

    Partial solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because these (empty) directories are still not deletable:

        C:\1\2\Favorites\Wien\What To Do..
        C:\1\2\Favorites\Photography\FIRE

    Same error as above. Here is what Explorer properties shows for both folders.

    Update #4 (another partial solution): Using harrymc's answer combined with thoroughly reading through this amazing MS-KB article which contains nearly everyone's idea and then some, inconspicuously titled "You cannot delete a file or a folder on an NTFS file system volume", I was able to delete the 2nd folder C:\1\2\Favorites\Photography\FIRE - the problem being that there was an invisible trailing space at the end. I got lucky when I did an auto-complete whilst playing around with the del "\\?\<path>" command which he suggested. NOTE: A normal del did NOT work, nor did deleting from Explorer. Now all that is left is the first directory C:\1\2\Favorites\Wien\What To Do.. (yes, I tried endlessly with multiple combinations of the above solution ;) Keep 'em coming! =)
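    For the remaining folder, the same \\?\ escape usually handles names that end in dots (or spaces), because it bypasses the Win32 path normalization that silently strips trailing dots before NTFS ever sees them; for a directory, the command is rd rather than del. A sketch:

        rd /s /q "\\?\C:\1\2\Favorites\Wien\What To Do.."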


  • Is Samba Server what I'm looking for, and if so, what do I need? (currently on DD-WRT Micro)

    - by Anthony
    I am really confused as to what Samba actually does and how it works. Here's what I'm hoping it does: I set up a Samba server on my LAN, and everyone will be able to see each other's shared files and swap them. But some of the documentation makes it sound like it will just allow Mac/Linux computers to see Windows computers. Other bits of the documentation make it sound more like a local server, where a Linux machine would install Samba and it would see everyone and be visible to everyone, but that won't change whether anybody else can see each other. While still other things I've read make it seem more like a file server, where everyone sees each other but file transfers are not peer-to-peer, and instead need a host disk to act as a go-between for files.

    So, assuming I'm even in the right ballpark of what Samba does in terms of my goal of total cross-visibility on the network, I am left with needing to know what I'd need to set up the server, and whether it can be done and is worth it... DD-WRT's article on Samba is a bit ambiguous. One second it sounds as if I can run the server on Micro as long as it's set up on a USB drive, but then it also sounds like Micro can't run it at all, etc. If I can run it from a USB-connected drive, I still need to know if the files are actually stored on that drive. The dd-wrt article mentions:

        You can run a Samba server on your main computer and run a client on your
        router (thus gaining writable storage for the router) or you can use Samba
        to share a drive connected (typically by USB) to the router among all the
        computers connected to your network.

    That one part "to share a drive... among all the computers" makes it sound like the only benefit I get from Samba is a shared drive that any OS on the network can see, but they still won't see each other. But I'm very hopeful I'm misreading this. If the computers can see each other but still need the disk, how much space is generally a good idea? I'm basing this on the idea that the drive is a temporary store point. Obviously I'd have to get a drive big enough to store everything people wanted to share if the drive is a full-on file server. If I do have this all wrong, is there any software that achieves what I have in mind? Something that connects to the main router to bridge all clients?
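    To make the model concrete: Samba is a server implementation of the same SMB protocol Windows uses for file sharing, so each machine that wants to offer files runs a server (Windows boxes already do, via built-in sharing), and the files live on whatever disk the share points at - there is no go-between disk unless one is deliberately set up. A minimal share definition on the Linux/router side looks like this sketch (paths and user names are made up):

        [global]
           workgroup = WORKGROUP
           security = user

        [shared]
           path = /mnt/usb/share
           read only = no
           valid users = anthony alice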


  • networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department, but perhaps a little more general.

    Background: We're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at the rate of about 10 TB/year. Right now, we are storing the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives, this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building, and mirror the one onto the other. Obviously, this is somewhat non-ideal.

    Our options:

    1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust us, but not a whole lot.

    2) We can continue to buy small NASs and mirror them between offices. One limit, although stupid, is that we don't have an unlimited number of ethernet jacks.

    3) We can try to implement our own data storage solution, which is why I'm asking you guys.

    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who are here the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low - most of the data will just sit on the server waiting for someone to look at it once or twice - so we don't need a really high-speed system. Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?

    As a second question, our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question minus the low cost: without budget constraints, can you recommend a scalable turnkey backed-up storage system? Thanks


  • Rsync over ssh: "ERROR: module is read only" suddenly appeared

    - by user978548
    For some time I've used rsync over ssh to back up my shared host's contents to my personal Synology NAS (a 212j, for that matter), and it worked quite well. For information, I use a passwordless ssh connection. Three days ago I updated my NAS software, and since then (or at least I believe it's since then) the backup won't work anymore. I get the following error on the host:

        rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]: Broken pipe (32)
        ERROR: module is read only

    ...which I do not understand. Beside that, nothing changed that I know of in either source or destination that could be related to rsync or ssh. I did check a few things and all seems to be alright: I can still connect through ssh from the host to my NAS with the right user, so ssh stuff like keys hasn't changed. I also have the correct file permissions on the NAS (I checked, and also tried to create files and directories with the user used by rsync through ssh). I read here and there that the error means I have to ensure that my rsyncd.conf has the right read only = no in it, but as far as I know I never used rsyncd, never configured anything for it, and until now it worked like a charm. I use the following command to do the backup:

        rsync -ab --recursive \
            --files-from="$FILES_FROM" \
            --backup-dir=backup_$SUFFIX \
            --delete \
            --filter='protect backup_*' \
            $WDIRECTORY/ \
            remote_backup:$REMOTE_BACKUP/

    So I'm stuck and really can't figure out what happened.

    Edit: As suggested in comments, I also tried passing commands to ssh (but not from inside an ssh session), which worked as expected, and also tried a single rsync command, which didn't work, failing just like the complete backup command.

        (sharedHost):hostuser:~ > touch test.txt
        (sharedHost):hostuser:~ > rsync test.txt remote_backup:backups/test.txt
        ERROR: module is read only
        rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.8]
        rsync: connection unexpectedly closed (9 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7]

    and

        (sharedHost):hostuser:~ > ssh remote_backup 'touch /abs_path_to_backups/backups/test2.txt && echo "ProoF" > /abs_path_to_backups/backups/test2.txt'
        (sharedHost):hostuser:~ > ssh remote_backup 'cat /abs_path_to_backups/backups/test2.txt'
        ProoF
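    "ERROR: module is read only" is an rsync daemon message, which suggests that after the DSM update the NAS end is answering rsync through daemon modules (Synology's network backup service) rather than treating the destination as a plain shell path. If that's what happened, the module definition in /etc/rsyncd.conf on the NAS is where the flag lives; a sketch (module name and path are assumptions, not the actual Synology file):

        [backups]
            path = /volume1/backups
            read only = no

    Alternatively, retrying the single-file test with an explicit absolute path on the remote side (so no module name can be matched) is a quick way to test the theory.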


  • e2fsck / resize2fs problems

    - by BlakBat
    I've got 6 drives (each 1.5T, all the same model and firmware revision) that are part of a RAID5 array. The RAID5 holds an LVM volume group with one logical volume; the latter contains only one ext3 partition. I recently ran:

        e2fsck -f /dev/vg03/lv01 && resize2fs -M /dev/vg03/lv01

    which exited without an error. Now when I try to mount /dev/vg03/lv01 I get:

        EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 30533 not in group (block 1000532368)!
        EXT3-fs: group descriptors corrupted!

    How do I get out of this predicament? This is all the info I can currently give you: fdisk -l /dev/sd[cdefgh] shows (correctly) that they are "Linux raid autodetect", but fdisk now shows:

        fdisk -l /dev/md0
        Disk /dev/md0: 7501.5 GB, 7501495664640 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table

    (instead of an LVM type partition)

        fdisk -l /dev/vg03/lv01
        Disk /dev/vg03/lv01: 7501.5 GB, 7501491732480 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/vg03/lv01 doesn't contain a valid partition table

    (instead of an ext3 type partition)

    I've tried:

        e2fsck -fy /dev/vg03/lv01
        e2fsck 1.41.12 (17-May-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        Block bitmap for group 30533 is not in group. (block 1000532368)
        Relocate? yes
        Inode bitmap for group 30533 is not in group. (block 1000532369)
        Relocate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Relocating group 30533's block bitmap to 1000524246...
        Error allocating 1 contiguous block(s) in block group 30533 for inode bitmap: Could not allocate block in ext2 filesystem
        e2fsck: aborted

    Extra information I can give you:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active (auto-read-only) raid5 sdg1[0] sdh1[5] sdf1[4] sde1[3] sdc1[2] sdd1[1]
              7325679360 blocks level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
              bitmap: 1/175 pages [4KB], 4096KB chunk
        unused devices: <none>

    Lastly, all smartctl tests (short and extended) showed no errors on any of the disks. Should I try to resize2fs to grow /dev/vg03/lv01 and redo an e2fsck? Should I cfdisk /dev/md0 and /dev/vg03/lv01 back to their real types? Thanks in advance for all and any help.

    2011-09-20 UPDATE: I issued the following commands and was able to remount the partition, but by comparing the size (df) before and after, it seems that 1 TB of data has gone missing. By checking the MD5SUMS (from an old backup) of some files against the "same" files from the remounted partition, some errors have been detected. Commands issued to remount the partition were:

        dumpe2fs /dev/vg03/lv01
          Block count: 1000491435
          Block size:  4096

        tune2fs -O ^has_journal /dev/vg03/lv01
        resize2fs -p /dev/vg03/lv01

        dumpe2fs /dev/vg03/lv01
          Block count: 1831418880
          Block size:  4096

        mount -o ro,noatime /dev/vg03/lv01 /mnt/raid

    OK... but files have been damaged / gone missing.
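    For the record, when e2fsck says "trying backup blocks" and still aborts, it can be pointed at a specific backup superblock; mke2fs with -n prints where the backups live without writing anything. A sketch (the block size comes from the dumpe2fs output above; run this only against a copy/image if the data still matters):

        # list superblock backup locations; -n means "don't actually create"
        mke2fs -n /dev/vg03/lv01

        # then retry fsck against one of the listed backups, e.g.:
        e2fsck -b 32768 -B 4096 /dev/vg03/lv01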


  • Sharing a folder on a Virtual Private Windows Server 2008 R2?

    - by Triztian
    See Edit 2: Hello all, it seems my involvement with computers has grown, and I've found myself needing to access a shared folder on a server. I've read some documentation and managed to set up the folder as a share. For this I created a local group and, for now, just one local user that has access to the share; the folder is in the public user folder, and its permissions should be (and I believe they are) read/write. The problem is that I can't connect from a remote machine - I mean, I don't know the way it should be accessed. The server has a public IP, and we also use it as the host for our website; I don't know if that affects anything. The folder will be used as the "keeper" for the QuickBooks company files and has the database server manager installed. I've tried setting up a VPN connection to it, but no success. The server has a domain name, "http://www.example.com", that redirects to our website; I am unsure if the share could be accessed that way. Also, the share has a location displayed when I right-click Properties. Here's what I've tried:

      • Setting up a VPN connection (Windows Vista and 7): got to the point where I was asked for credentials and entered the user I created (which is not an admin), but I got a "Connection fail error 800". I suppose this is because in the domain field I entered the server's workgroup.
      • Right-click "Add a network location" (Windows 7): went through the wizard until I reached the point of entering the location; tried many things - the name in the share's properties (\\SOMETHING\Share), the http://www.example.com address, the IP address.

    I'm quite unfamiliar with this, so I have my guesses: since the group and user are local, they do not have access to the folder; or the firewall on the server is blocking my connection. Anyway, any help and guidance is truly appreciated.

    EDIT 1: As @tony roth pointed out, it may be a security fail. I raised that with management and was told it is not an issue, so please bear with me.

    EDIT 2: I've found out that the real question could be streamlined to "Sharing a folder on a Virtual Private Server?", as that's what we have - a virtual private Windows Server 2008 R2 - and I would like to know how to make it show up like a normal folder on the client computer. Thanks again for all of your support.
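    For the "show up like a normal folder" goal, the direct route is mapping the share over SMB, which requires TCP 445 to be reachable on the server and the share's local credentials; a sketch from the client (IP, share, and user names are placeholders; the trailing * prompts for the password):

        net use Q: \\203.0.113.10\QuickBooks /user:SERVERNAME\qbuser *

    That said, exposing SMB from a public-IP server to the internet is exactly the risk the earlier comments were about; wrapping it in the VPN, once the error 800 issue is sorted out, is the safer shape of the same thing.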


  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is considered
        fragmented. Not so in Windows Vista if the fragments are large enough - the
        defragmentation algorithm was changed (from Windows XP) to ignore pieces of
        a file that are larger than 64MB. As a result, defrag in XP and defrag in
        Vista will report different amounts of fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post defragging with 3rd party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).
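    As a side note, the Windows 7 defragmenter can also be driven from the command line, which makes it easier to put its analysis next to the third-party numbers; a sketch (elevated prompt, drive letter assumed):

        defrag D: /A /V

    /A analyzes without defragmenting, and /V prints the verbose statistics behind the reported percentage.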


  • Innodb Queries Slow

    - by user105196
    I have Red Hat 5.3 (Tikanga) with MySQL 5.0.86 configured on RAID 10 hardware. I run an application that queries MySQL/InnoDB and MyISAM tables. The queries are super fast, but some queries on InnoDB tables sometimes slow down and take more than 1-3 seconds to run, even though these queries are simple and optimized. This problem occurs only on InnoDB tables, at different times, with random queries. Why is this happening only to InnoDB tables? Below is the InnoDB status and some MySQL variables:

        show innodb status\G
        *************** 1. row ***************
        Status:
        120325 10:54:08 INNODB MONITOR OUTPUT
        Per second averages calculated from the last 19 seconds
        SEMAPHORES
        OS WAIT ARRAY INFO: reservation count 22943, signal count 22947
        Mutex spin waits 0, rounds 561745, OS waits 7664
        RW-shared spins 24427, OS waits 12201; RW-excl spins 1461, OS waits 1277
        TRANSACTIONS
        Trx id counter 0 119069326
        Purge done for trx's n:o < 0 119069326 undo n:o < 0 0
        History list length 41
        Total number of lock structs in row lock hash table 0
        LIST OF TRANSACTIONS FOR EACH SESSION:
        ---TRANSACTION 0 0, not started, process no 29093, OS thread id 1166043456
        MySQL thread id 703985, query id 5807220 localhost root
        show innodb status
        FILE I/O
        I/O thread 0 state: waiting for i/o request (insert buffer thread)
        I/O thread 1 state: waiting for i/o request (log thread)
        I/O thread 2 state: waiting for i/o request (read thread)
        I/O thread 3 state: waiting for i/o request (write thread)
        Pending normal aio reads: 0, aio writes: 0,
         ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
        Pending flushes (fsync) log: 0; buffer pool: 0
        132777 OS file reads, 689086 OS file writes, 252010 OS fsyncs
        0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
        INSERT BUFFER AND ADAPTIVE HASH INDEX
        Ibuf: size 1, free list len 366, seg size 368,
        62237 inserts, 62237 merged recs, 52881 merges
        Hash table size 8850487, used cells 3698960, node heap has 7061 buffer(s)
        0.00 hash searches/s, 0.00 non-hash searches/s
        LOG
        Log sequence number 15 3415398745
        Log flushed up to   15 3415398745
        Last checkpoint at  15 3415398745
        0 pending log writes, 0 pending chkp writes
        218214 log i/o's done, 0.00 log i/o's/second
        BUFFER POOL AND MEMORY
        Total memory allocated 4798817080; in additional pool allocated 12342784
        Buffer pool size   262144
        Free buffers       101603
        Database pages     153480
        Modified db pages  0
        Pending reads 0
        Pending writes: LRU 0, flush list 0, single page 0
        Pages read 151954, created 1526, written 494505
        0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        No buffer pool page gets since the last printout
        ROW OPERATIONS
        0 queries inside InnoDB, 0 queries in queue
        1 read views open inside InnoDB
        Main thread process no. 29093, id 1162049856, state: waiting for server activity
        Number of rows inserted 77675, updated 85439, deleted 0, read 14377072495
        0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        END OF INNODB MONITOR OUTPUT
        1 row in set, 1 warning (0.02 sec)

        read_buffer_size = 128M
        sort_buffer_size = 256M
        tmp_table_size = 1024M
        innodb_additional_mem_pool_size = 20M
        innodb_log_file_size = 10M
        innodb_lock_wait_timeout = 100
        innodb_buffer_pool_size = 4G
        join_buffer_size = 128M
        key_buffer_size = 1G

    Can anyone help me?
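    A first step that usually pays off on MySQL 5.0 is catching the stalls in the slow query log rather than guessing; a sketch for my.cnf (5.0 only supports whole-second granularity here, and the server needs a restart):

        [mysqld]
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 1

    Separately, note that read_buffer_size, sort_buffer_size, and join_buffer_size are per-connection allocations, so values in the 128M-256M range can produce exactly this kind of intermittent hiccup under concurrency, and a 10M innodb_log_file_size against a 4G buffer pool forces frequent checkpointing, another classic source of periodic stalls.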


  • How do you splice out part of an XviD-encoded AVI file with ffmpeg? (no problems with other files)

    - by yegor
    I'm using the following command, which works for most files, except for what seem to be XviD-encoded ones:

        /usr/bin/ffmpeg -sameq -i file.avi -ss 00:01:00 -t 00:00:30 -ac 2 -r 25 -copyts output.avi

    So this should splice out 30 seconds of video + audio, starting from the 1 minute mark. It does START encoding at the 00:01:00 mark, but it goes all the way to the end of the file for some reason, ignoring that I want just 30 seconds. The output looks like this:

        FFmpeg version git-ecc4bdd, Copyright (c) 2000-2010 the FFmpeg developers
          built on May 31 2010 04:52:24 with gcc 4.4.3 20100127 (Red Hat 4.4.3-4)
          configuration: --enable-libx264 --enable-libxvid --enable-libmp3lame --enable-libopenjpeg --enable-libfaac --enable-libvorbis --enable-gpl --enable-nonfree --enable-libxvid --enable-pthreads --enable-libfaad --extra-cflags=-fPIC --enable-postproc --enable-libtheora --enable-libvorbis --enable-shared
          libavutil     50.15. 2 / 50.15. 2
          libavcodec    52.67. 0 / 52.67. 0
          libavformat   52.62. 0 / 52.62. 0
          libavdevice   52. 2. 0 / 52. 2. 0
          libavfilter    1.20. 0 /  1.20. 0
          libswscale     0.10. 0 /  0.10. 0
          libpostproc   51. 2. 0 / 51. 2. 0
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        Input #0, avi, from 'file.avi':
          Metadata:
            ISFT            : VirtualDubMod 1.5.10.2 (build 2540/release)
          Duration: 00:02:00.00, start: 0.000000, bitrate: 1587 kb/s
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: ac3, 48000 Hz, 5.1, s16, 448 kb/s
        File 'lol6.avi' already exists. Overwrite ? [y/N] y
        Output #0, avi, to 'lol6.avi':
          Metadata:
            ISFT            : Lavf52.62.0
            Stream #0.0: Video: mpeg4, yuv420p, 672x368 [PAR 1:1 DAR 42:23], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, 2 channels, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        [mpeg4 @ 0x17cf770]Invalid and inefficient vfw-avi packed B frames detected
        [buffer @ 0x184b610]Buffering several frames is not supported. Please consume all available frames before adding a new one.
        frame= 1501 fps=104 q=0.0 Lsize=   15612kB time=30.02 bitrate=4259.7kbits/s ts/s
        video:15303kB audio:235kB global headers:0kB muxing overhead 0.482620%

    If I convert this file to mp4 first, for example, and then perform the same action, it works perfectly.
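    One variation worth trying: moving -ss in front of -i switches ffmpeg to input-side seeking, so the -t window is applied against a clean starting point. The "Invalid and inefficient vfw-avi packed B frames detected" warning makes the packed-B-frame XviD stream a plausible reason output-side seeking misbehaves on these files in particular. A sketch (-copyts dropped, since keeping the original timestamps works against the shifted cut):

        /usr/bin/ffmpeg -sameq -ss 00:01:00 -i file.avi -t 00:00:30 -ac 2 -r 25 output.avi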


  • Managing multiple independant domains with Google Apps

    - by Saif Bechan
    I am currently running a server where I have multiple domains, each of them running its own mail server. My plan is to outsource this whole email service and have Google, or a competitor, do it for me. Let me start by telling you the setup I have now and want to migrate to Google.

    Initial setup: I have a main domain where I run my server and my nameserver. This is an important domain because it holds the connection with all my internal applications. For example, log messages, cronjob messages, and virus-scan messages are sent to this domain. This email is also registered at my registrar, and I use it to communicate with my ISP. Next, I run a few independent websites that all need their own independent email addresses. This can be on shared space, I don't mind; 1 gig will be enough for everything I am going to do. Summary:

        superdomain.com (which only has a catchall for internal use and communication with my ISP)
        cars.com (independent)
        flowers.com (independent)
        foods.com (independent)

    I am going to be the admin for all of this. The independent domains don't need their own admin panels; they just need email addresses like info@, support@, etc. I do all the managing, and they just send and receive emails using the accounts I give them. All of the websites have their own staff that use the accounts.

    Tried so far: I have registered my superdomain, but I can only add aliases to the main domain. If I make all the other domains aliases, the emails from [email protected] and [email protected] will share the same inbox. I want them to be separate. Is the only way to achieve this to create an account for each domain? And if so, is there no way of creating a superdomain account where I can edit all these accounts easily, without having to log in to 4 different places to get my work done? I have searched the Google help forums and posted questions, but without any results so far.

    Questions: Can anyone please give me some advice on what to do? I currently use the free program Google has.


  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up.

    My machine is an Intel Core2Duo E7500 @ 3GHz with 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting, or installing another OS is out of the question. But I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is and must remain installed. But I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files.

    I have about 150 GB of free disk space, and I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines of the same configuration, but automation of backups is a nice, unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work.

    I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner in this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now sits solely on DVDs, just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK; it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.
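    For the stated requirement (a fixed list of corporate logins, read-mostly data, nothing extra to install), plain Windows file sharing on the XP box can cover it: share the folder, then grant NTFS read permissions to each listed account. A sketch from an admin command prompt (domain and usernames are placeholders; Simple File Sharing must be turned off for per-user permissions to work):

        net share TeamFiles=D:\TeamFiles
        cacls D:\TeamFiles /E /G CORPDOMAIN\jdoe:R

    One real constraint to plan around: XP Pro caps inbound SMB sessions at 10 concurrent connections, which may matter if the whole team hits the share at once.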


  • Keep source IP after NAT

    - by John Miller
    Until today I used a cheap router to share my internet connection and keep a webserver online behind NAT. Users' IPs ($_SERVER['REMOTE_ADDR']) were fine; I was seeing the real public IPs of visitors. But as traffic grew every day, I had to install a Linux server (Debian) to share my internet connection, because my old router couldn't handle the traffic anymore. I shared the internet via iptables using NAT, but now, after forwarding port 80 to my webserver, instead of seeing real user IPs, I see my gateway's internal Linux IP as every user's IP address. How can I solve this issue? I have edited my post to paste the rules I'm currently using:

        #!/bin/sh
        # I made a script to set the rules

        # I flush everything here
        iptables --flush
        iptables --table nat --flush
        iptables --delete-chain
        iptables --table nat --delete-chain
        iptables -F
        iptables -X

        # I drop everything as a general rule, but this is disabled while testing
        # iptables -P INPUT DROP
        # iptables -P OUTPUT DROP

        # These are the loopback rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # Here I set the SSH port rules, so I can connect to my server
        iptables -A INPUT -p tcp --sport 513:65535 --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

        # These are the forwards for port 80
        iptables -t nat -A PREROUTING -p tcp -s 0/0 -d xx.xx.xx.xx --dport 80 -j DNAT --to 192.168.42.3:80
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p tcp -s 192.168.42.3 --sport 80 -j ACCEPT

        # These are the forwards for bind/dns
        iptables -t nat -A PREROUTING -p udp -s 0/0 -d xx.xx.xx.xx --dport 53 -j DNAT --to 192.168.42.3:53
        iptables -t nat -A POSTROUTING -o eth0 -d xx.xx.xx.xx -j SNAT --to-source 192.168.42.3
        iptables -A FORWARD -p udp -s 192.168.42.3 --sport 53 -j ACCEPT

        # And these are the rules so I can share my internet connection
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0:1 -j ACCEPT

    If I delete the MASQUERADE part, I see the real IP when echoing it with PHP, but the LAN loses internet access. What do I do to keep internet sharing and still see real client IPs while the ports stay forwarded? (xx.xx.xx.xx is my public IP; I hid it for security reasons.)
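    A minimal sketch of one possible fix, assuming the LAN is 192.168.42.0/24 behind the eth0:1 alias (assumed values, untested here): scope the MASQUERADE rule so it rewrites only LAN-to-internet traffic and leaves the DNAT-ed web traffic untouched, and drop the extra SNAT lines.

        #!/bin/sh
        # Sketch only: the LAN subnet is an assumption, not taken from the post.
        LAN=192.168.42.0/24
        # Masquerade traffic leaving for the internet, but never packets whose
        # destination is back inside the LAN, so the DNAT-ed port-80 requests
        # reach 192.168.42.3 with their original client source address intact.
        iptables -t nat -A POSTROUTING -s $LAN ! -d $LAN -o eth0 -j MASQUERADE

    The design point is that DNAT alone preserves the client's source address; the webserver only ever sees the gateway's IP when a later SNAT or MASQUERADE rule also matches the forwarded packets.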


  • My network drive disappears from Mac OS Finder

    - by Mariusz
    I have recently bought a Netgear WNDR3800 router for my home network, but the very day I installed it I noticed strange behaviour in Finder and iTunes. Let me explain.

    There is a Synology DS111 NAS attached to that router and two Macs running Mac OS X Lion, one connected by cable and the other wirelessly. Before I changed to the new router, Finder always displayed the NAS in its sidebar, so I could just click its network name to access the shared folders on it. Since installing the WNDR3800, I can no longer access the NAS that way: it is no longer displayed, and I always have to mount it manually by typing its IP address into Finder's "Connect to Server" option.

    The same NAS supports Time Machine backups and has a built-in DLNA server, and the situation is the same there: I can't perform a backup because the NAS is no longer visible in Time Machine preferences, and iTunes no longer displays it as a media server, even though it did before I installed this router. What's important is that everything works fine for a couple of minutes after I restart the router or the NAS; even changing the NAS's IP address makes it accessible again in Finder, Time Machine, and iTunes, but only for a while. Both Macs behave the same way, and all these issues started when I installed the new router; before that, everything had worked fine. My old router was a Netgear WGR614v10.

    Would you be so kind as to tell me what you think could be the reason for this behaviour? Which router settings should I look at more closely? I'm not a network specialist, but is it possible that some network packets are being blocked for some reason? I will be grateful for any clues. Thank you.
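    Finder's sidebar and Time Machine discovery both ride on Bonjour (multicast DNS), so one way to narrow this down is to watch from one of the Macs whether the NAS's advertisements stop arriving; a sketch using the dns-sd tool that ships with OS X (the service types below are the usual defaults, not confirmed for this particular NAS):

        # Browse for SMB and AFP services advertised via Bonjour;
        # the Synology should appear here whenever Finder can see it.
        dns-sd -B _smb._tcp local.
        dns-sd -B _afpovertcp._tcp local.

        # Time Machine destinations advertise the "adisk" service:
        dns-sd -B _adisk._tcp local.

    If the NAS drops out of these listings at the same moment it vanishes from Finder, multicast filtering on the router (IGMP proxying/snooping and wireless multicast settings are typical suspects) would be the place to look.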


  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID5. The filesystem had been operating "just fine" for several years when I began to run out of space. I had a 9T volume on 6x2T drives and began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck.

    Without thinking, I just ran e2fsck -y /dev/mapper/candybox, and it began spewing all kinds of "inode being removed" type messages (I can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:

        # mount /dev/mapper/candybox /candybox
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    Looking back at my older logs, I noticed the filesystem had been giving this error each time the machine booted:

        kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another), and each attempt left this in my log:

        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
        ...
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
        BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]

    Attempts to restart e2fsck result in:

        # e2fsck /dev/mapper/candybox
        e2fsck 1.41.14 (22-Dec-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        candy: recovering journal
        e2fsck: unable to set superblock flags on candy

    At this point I decided it was best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file:

        # ls -lh
        total 14T
        -rw-r--r-- 1 root root 14T Oct 25 01:57 candybox.img
        -rw-r--r-- 1 root root 271 Oct 20 14:32 candybox.logfile

    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything with the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S (which rewrites only the superblocks and group descriptors), and I was then able to mount the dirty filesystem read-only, without the journal, and recover 960G of data. It gave all kinds of errors about directories not existing and so forth, but I was able to get some things back, which gave me some hope!

    I then ran e2fsck again; it had to recreate the root inode and produced a massive list of group-count corrections. I accepted the root inode creation and said no to everything else, which left a completely empty filesystem. I re-ran it and said yes to all questions, with the same result, only now a "clean" but empty filesystem. extundelete reports 0 recoverable inodes found.

    And now I'm stuck again. I can't come up with any methods other than dropping down to something like photorec, which, given how large the filesystem was, will give me an absolute mess. I'm willing to re-copy the image from the original array and start over if I can get any suggestions or ideas for getting more of my files back. I wish I could give more detailed logs of the commands that were run, but the output has long since scrolled past, except for what was logged to syslog, and my memory of the details is hazy given the timeframe this has stretched over. Any help is greatly appreciated!
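    For reference, a sketch of the probe sequence that would typically be repeated against a fresh copy of the image (the paths, the free loop device, and the 4K filesystem block size are assumptions; note that mount's sb= option is expressed in 1K units, so ext4 block 32768 becomes sb=131072):

        #!/bin/sh
        # Always work on a copy (or a fresh re-image from the array),
        # never the only surviving image.
        cp candybox.img candybox-work.img
        losetup /dev/loop0 candybox-work.img

        # Dry run: list where mke2fs WOULD place superblocks (-n writes nothing)
        mke2fs -n /dev/loop0

        # Try fsck against a reported backup superblock, forcing the 4K block size
        e2fsck -b 32768 -B 4096 /dev/loop0

        # Or attempt a read-only mount from a backup superblock, skipping the
        # journal entirely (noload)
        mkdir -p /mnt/recover
        mount -o ro,sb=131072,noload /dev/loop0 /mnt/recover

    The point of the loop-device copy is that every destructive experiment (e2fsck answering yes, mkfs -S, and so on) stays reversible by re-copying, which matters once a filesystem is this badly damaged.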

