Search Results

Search found 5175 results on 207 pages for 'red soft adair stefanwoe'.


  • CentOS running Apache Tomcat keeps getting "java.net.SocketException: Too many open files"

    - by Gerard Moroney
    We're running Apache Tomcat 7.0.41 on CentOS 6 with java version "1.7.0_21". We were getting a lot of "too many open files" errors, so I did some research. The consensus was that it was to do with the number of open files. So I did the following:
    - Increased the max files in /etc/security/limits.conf: soft nofile 100000, hard nofile 100000
    - Rebooted the server
    - Checked the limits were valid for the user which runs the process: [app_admin@xxx ~]$ ulimit -Hn 100000 [app_admin@xxx ~]$ ulimit -Sn 100000
    - Monitored open files on the server using the lsof command
    What I observed was that when the total open files reached circa 13000 and Tomcat had around 4500 open files, the error reappeared. I am confused. I thought this would have resolved the problem, but clearly I don't fully understand the root cause or how to set the parameter correctly. To (maybe) help: I have not modified the server.xml file for Tomcat (although I'm tempted). I don't want to start fiddling with that and make things worse. I'm more than happy to share any more information if someone can give me some hints on where to start looking.
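    As a reference point, a sketch of the limits.conf entries described above plus a check against the running process; the assumptions here are that Tomcat runs as the app_admin user shown in the ulimit output and is started via the standard Bootstrap class, and that limits.conf only applies to new PAM login sessions, so what the already-running JVM received is what matters:
        # /etc/security/limits.conf (per-user, applied to new login sessions)
        app_admin  soft  nofile  100000
        app_admin  hard  nofile  100000

        # What the running Tomcat JVM actually got, independent of the shell's ulimit
        cat /proc/$(pgrep -f org.apache.catalina.startup.Bootstrap)/limits | grep "Max open files"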

    Read the article

  • Router only assigns a small number of IPs

    - by Liam Coates
    Been having a problem with my router for a while now, might just be because it is really old but here's the problem: If a lot of computers are connected to my home network someone will get disconnected. They are assigned IPs and it seems like at a certain point (and I don't know how many) you either get assigned the same IP as someone else or something else is happening and you get disconnected - until I soft reset it and it works again, which takes 30 secs. I'd say my tablet, my PC, my sister's iPad, 2 laptops and a netbook are the most that can be connected at one time, so that is 6, but that should be fine. The only way I know this is the problem is because I turned on my tablet while I was online on my PC and got disconnected, but my tablet was still connected. This was just after I turned the tablet on, so I know my router is having difficulty with IPs; it is like it assigned the same IP to the tablet, which then clashed with my desktop and knocked me off. I see that sometimes the following solves it as well, so I wrote a batch file with a menu to execute these commands as I have to do it so often: ipconfig /release ipconfig /flushdns ipconfig /renew Any ideas? Or shall I just get a new router as this one is old and maybe can't handle giving out that many IPs? Cheers!
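    For what it's worth, a minimal sketch of the kind of batch file described above (the menu text and structure are my own; only the three ipconfig commands come from the post):
        @echo off
        rem Hypothetical menu wrapper around the commands mentioned above
        :menu
        echo 1 - Release, flush DNS and renew
        echo 2 - Exit
        set /p choice=Pick an option: 
        if "%choice%"=="1" (
            ipconfig /release
            ipconfig /flushdns
            ipconfig /renew
        )
        if not "%choice%"=="2" goto menu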

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by indyvoyage
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communication Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc. As Microsoft talked up the enhancements in each product over the previous version, I felt that clients were not much interested in all these details. Take Office Communicator, for example: surely they have improved the product a lot, and at first sight everyone said 'WOW, great product', but nobody wishes to pay money for all these extra features. Some argued that they are bogged down by the increased number of menus. They don't need a soft-call feature included alongside mobile calling. The same applies to all the other products as well, such as MS Office (what next, 2 ribbons?), the Windows OS and many more. Indeed there must be good features in all these products, but is it worth spending money and time to update the older systems? Also, sometimes these features will decrease productivity instead of increasing it. So do you think whatever enhancements MS is making to these products are only for selling purposes rather than real use, and also to keep developers busy learning the new tools and features? I am sure some people here will argue that some people need these sorts of features, but I am not talking about NASA or MI5 guys. I am talking about ordinary businesses and Joe Public. Any ideas welcome.

    Read the article

  • "Possible SYN flooding" in log despite low number of SYN_RECV connections

    - by al4
    Recently we had an apache server which was responding very slowly due to SYN flooding. The workaround for this was to enable tcp_syncookies (net.ipv4.tcp_syncookies=1 in /etc/sysctl.conf). I posted a question about this here if you want more background. After enabling syncookies we started seeing the following message in /var/log/messages approximately every 60 seconds: [84440.731929] possible SYN flooding on port 80. Sending cookies. Vinko Vrsalovic informed me that this means the syn backlog is getting full, so I raised tcp_max_syn_backlog to 4096. At some point I also lowered tcp_synack_retries to 3 (down from the default of 5) by issuing sysctl -w net.ipv4.tcp_synack_retries=3. After doing this, the frequency seemed to drop, with the interval of the messages varying between roughly 60 and 180 seconds. Next I issued sysctl -w net.ipv4.tcp_max_syn_backlog=65536, but am still getting the message in the log. Throughout all this I've been watching the number of connections in SYN_RECV state (by running watch --interval=5 'netstat -tuna |grep "SYN_RECV"|wc -l'), and it never goes higher than about 240, much much lower than the size of the backlog. Yet I have a Red Hat server which hovers around 512 (the limit on this server is the default of 1024). Are there any other tcp settings which would limit the size of the backlog, or am I barking up the wrong tree? Should the number of SYN_RECV connections in netstat -tuna correlate to the size of the backlog?
    Update: As best I can tell I'm dealing with legitimate connections here; netstat -tuna|wc -l hovers around 5000. I've been researching this today and found this post from a last.fm employee, which has been rather useful. I've also discovered that tcp_max_syn_backlog has no effect when syncookies are enabled (as per this link). So as a next step I set the following in sysctl.conf:
        net.ipv4.tcp_syn_retries = 3 # default=5
        net.ipv4.tcp_synack_retries = 3 # default=5
        net.ipv4.tcp_max_syn_backlog = 65536 # default=1024
        net.core.wmem_max = 8388608 # default=124928
        net.core.rmem_max = 8388608 # default=131071
        net.core.somaxconn = 512 # default = 128
        net.core.optmem_max = 81920 # default = 20480
    I then set up my response time test, ran sysctl -p and disabled syncookies with sysctl -w net.ipv4.tcp_syncookies=0. After doing this the number of connections in the SYN_RECV state still remained around 220-250, but connections were starting to delay again. Once I noticed these delays I re-enabled syncookies and the delays stopped. I believe what I was seeing was still an improvement over the initial state; however, some requests were still delayed, which is much worse than having syncookies enabled. So it looks like I'm stuck with them enabled until we can get some more servers online to cope with the load. Even then, I'm not sure I see a valid reason to disable them again as they're only sent (apparently) when the server's buffers get full. But the syn backlog doesn't appear to be full with only ~250 connections in the SYN_RECV state! Is it possible that the SYN flooding message is a red herring and it's something other than the syn_backlog that's filling up? If anyone has any other tuning options I haven't tried yet I'd be more than happy to try them out, but I'm starting to wonder if the syn_backlog setting isn't being applied properly for some reason.
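    Unrelated to the original exchange, a couple of standard commands that show whether the listen backlog (the accept queue) is overflowing, which is a different queue from the SYN_RECV count being watched above:
        # Cumulative counters for overflowed/dropped listen queues
        netstat -s | grep -i -e "listen queue" -e "SYNs to LISTEN"
        # For LISTEN sockets, Recv-Q is the current accept-queue length and Send-Q its configured maximum
        ss -lnt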

    Read the article

  • How do I set up a Windows NFS share so that I can view its contents on Linux?

    - by hewhocutsdown
    My NFS server is a Windows XP SP3 box with the Microsoft Windows Services for Unix installed. I have a share configured under C:\NFS with the share name NFS and ANSI encoding. Anonymous access is enabled, with the anon UID/GID set to 0/0. Additionally, I've set ALL MACHINES to Read-Write, and checked the checkbox to Allow root access. My first NFS client is an Ubuntu 10.04 box, with nfs-common installed. Running sudo mount -t nfs 1.1.1.1:/NFS /home/user/NFS succeeds, but when I attempt to view the folder (even as root), it tells me that I do not have the permissions necessary to view the contents of the folder. My second NFS client is an IBM iSeries box running OS/400 V5R3. I used the mount command below: MOUNT TYPE(*NFS) MFS('1.1.1.1:/NFS') MNTOVRDIR('/PARENT/NFS') OPTIONS('rw,nosuid,retry=5,rsize=8096,wsize=8096,timeo=20,retrans=2,acregmin=30,acregmax=60,acdirmin=30,acdirmax=60,soft') CODEPAGE(*BINARY *ASCII) which also mounts successfully. Attempting to WRKLNK '/PARENT/NFS' and using Option 5 to enter the directory yields a 'Not authorized to object' error - even though I am a security officer with the *ALLOBJ special authority. My gut says that it's a problem with the Windows share, but I don't know what it could be. Do you have any suggestions?
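    A small sketch of generic checks that might help narrow this down (these are assumptions on my part, not steps from the question); Services for Unix predates NFSv4, so forcing version 3 and disabling locking is a common first test from the Ubuntu side:
        # List what the Windows box reports as exported
        showmount -e 1.1.1.1
        # Try an explicit NFSv3 mount without locking (standard nfs-common mount options)
        sudo mount -t nfs -o nfsvers=3,nolock 1.1.1.1:/NFS /home/user/NFS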

    Read the article

  • Intermittent NFS lockups on Isilon cluster

    - by blackbox222
    We have an Isilon cluster with 8 IQ 12000x nodes which exports storage via several NFS shares for a handful of Linux and Solaris clients. There is a Linux system that has one of these NFS filesystems mounted. I/O to this filesystem is moderately heavy from the Linux system. Every 3-4 weeks (it's not on any kind of discernible schedule, and sometimes is more/less frequent than this), we notice that all activity ceases on this NFS mount (the process hangs, as if the network stopped working, so the process is stuck in uninterruptible sleep) - 30 minutes later, the share recovers and things continue to work normally. The kernel log from the affected machine is as follows:
        Dec 3 10:07:29 redacted kernel: [8710020.871993] nfs: server nfs-redacted not responding, still trying
        Dec 3 10:37:17 redacted kernel: [8711805.966130] nfs: server nfs-redacted OK
    The relevant /etc/fstab line:
        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs defaults 0 0
    I've checked to see if there are any scheduled processes (e.g. cron jobs), Isilon-related functions (e.g. snapshots), etc. that might be causing these hangups, but I can't seem to find anything. I'm also not aware of any network related issues or maintenance that would cause this. All of the lockups last almost exactly 30 minutes per the kernel logs. Perhaps someone has some suggestions I could try? (I considered a soft mount to avoid the problems associated with processes accessing the filesystem hanging; however I am wary of the corruption that could result and it would not really solve the underlying issue anyway).
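    For what it's worth, a sketch of the soft-mount variant considered in the last paragraph, with the usual timeout options spelled out (the values are illustrative only, not a recommendation):
        # soft: return an I/O error once the retries are exhausted instead of hanging in uninterruptible sleep
        # timeo is in tenths of a second, retrans is the number of retries before the error is returned
        nfs-redacted:/ifs/nfs/export_data/shared/...redacted... /data nfs soft,timeo=100,retrans=3 0 0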

    Read the article

  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it):
        user2 - core unlimited
        *     - core 0
        root  - core 0
        *     - rss 512000
        root  - rss 512000
        *     - nproc 100
        root  - nproc 100
        *     - maxlogins 1
        root  - maxlogins 1
    I run a program as user2 (./programname) but /proc/3498/limits says cores are disabled:
        Limit                     Soft Limit     Hard Limit     Units
        Max cpu time              unlimited      unlimited      seconds
        Max file size             unlimited      unlimited      bytes
        Max data size             unlimited      unlimited      bytes
        Max stack size            8388608        unlimited      bytes
        Max core file size        0              0              bytes
        Max resident set          524288000      524288000      bytes
        Max processes             100            100            processes
        Max open files            1024           1024           files
        Max locked memory         65536          65536          bytes
        Max address space         unlimited      unlimited      bytes
        Max file locks            unlimited      unlimited      locks
        Max pending signals       14001          14001          signals
        Max msgqueue size         819200         819200         bytes
        Max nice priority         0              0
        Max realtime priority     0              0
        Max realtime timeout      unlimited      unlimited      us
    Both ulimit -Sa and ulimit -Ha output that cores are disabled:
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 14001
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) 512000
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) unlimited
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 100
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
    Why are cores disabled?
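    A quick sketch of how the effective limits can be checked from a fresh login as user2 (for example over ssh, so the PAM session stack is applied); these commands are generic, not taken from the original post:
        # In a new login session as user2, print the soft and hard core limits PAM applied
        ulimit -Sc; ulimit -Hc
        # Or read them straight from the current shell's limits file
        grep "Max core file size" /proc/$$/limits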

    Read the article

  • How to allow writing to a mounted NFS partition

    - by Cerin
    How do you allow a specific user permission to write to an NFS partition? I've mounted an NFS share on my localhost (a Fedora install), and I can read and write as root, but I'm unable to write as the apache user, even though all the files and directories in the share on my localhost and remote host are owned by apache. For example, I've mounted it via this line in my /etc/fstab: remotehost:/data/media /data/media nfs _netdev,soft,intr,rw,bg 0 0 And both locations are owned by apache: [root@remotehost ~]# ls -la /data total 24 drwxr-xr-x. 6 root root 4096 Jan 6 2011 . dr-xr-xr-x. 28 root root 4096 Oct 31 2011 .. drwxr-xr-x 4 apache apache 4096 Jan 14 2011 media [root@localhost ~]# ls -la /data total 16 drwxr-xr-x 4 apache apache 4096 Dec 7 2011 . dr-xr-xr-x. 27 root root 4096 Jun 11 15:51 .. drwxrwxrwx 5 apache apache 4096 Jan 31 2011 media However, when I try and write as the apache user, I get a "Permission denied" error. [root@localhost ~]# sudo -u apache touch /data/media/test.txt touch: cannot touch `/data/media/test.txt': Permission denied But of course it works fine as root. What am I doing wrong?
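    One generic thing to rule out (an assumption, not something stated in the question): with NFSv3 the server checks numeric UIDs/GIDs, so 'apache' must map to the same IDs on both hosts, and the export itself must allow writes. A minimal sketch:
        # Compare the numeric IDs on both hosts; they must match for the permission bits to line up
        id apache                      # run on localhost and on remotehost
        # On remotehost, show the effective export options (the /etc/exports line below is illustrative)
        # /data/media  client.example.com(rw,sync,no_subtree_check)
        exportfs -v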

    Read the article

  • How do I repair my Logitech Anywhere MX?

    - by Stefano Palazzo
    My Anywhere Mouse has got mushy mouse button syndrome. That is, the left mouse button feels a little bit soft, and it easily double-clicks or lets go when I drag something. Before I repair it at home, rather than bringing it to the store (I kind of need it; it's the only one I have), I'd like to know exactly what I'm doing. It'd be too bad if I tried to repair it, voided the warranty and didn't succeed. I'm guessing there are screws to open it under the rubber pads. And I suppose I can take those off without breaking them, and put them back on without bending them. How is this mouse held together, and what's the safest way to open it? Once I have it open, will I be able to fix the problem? What's causing the mushy mouse button? Here's what I know so far: It might be the switch itself that's broken, in which case I shouldn't open it (I can't get a replacement, and voiding the warranty to "have a look" seems pointless). If there are screws underneath the rubber pads, they're only on the 'front'; the back two thirds of the mouse are all battery cover: there's nothing I can see under the batteries either. In the mouse I had before this one, there were sort of springy things connecting the actual button with the switch soldered to the board. They were just lying inside of a bit of plastic, and I could swap the left and right ones easily. If repairing it is more difficult, transferring the problem to the right mouse button would be a very good start.

    Read the article

  • How to use more than 3 virtual disks in Linux using CentOS and XenServer

    - by 010110110101
    I've attached 5 virtual disks to a Virtual Machine in Citrix XenServer. The VM has the xs-tools installed. Initially it said that it couldn't add so many disks; after I installed the xs-tools, it let me add all the disks. But /dev doesn't show all the disks. It shows these: /dev/xvda /dev/xvdb /dev/xvdc /dev/cdrom. Perhaps it is bound by the limits of an IDE bus (3 disks + CD-ROM)? If so, how does one change the VM to use SCSI?
    Edit: According to the documentation (2.6.3. VM Block Devices): "In the PV Linux case, block devices are passed through as PV devices. XenServer does not attempt to emulate SCSI or IDE, but instead provides a more suitable interface in the virtual environment in the form of xvd* devices. It is also possible to get an sd* device using the same mechanism, where the PV driver inside the VM takes over the SCSI device namespace. This is not desirable so it is best to use xvd* where possible for PV guests (this is the default for Debian and RHEL). For Windows or other fully virtualized guests, XenServer emulates an IDE bus in the form of an hd* device. When using Windows, installing the Citrix Tools for Virtual Machines installs a special PV driver that works in a similar way to Linux, except in the fully virtualized environment." Still, with 5 virtual disks attached, I don't see the other xvd devices.
    Edit #2: (attached requested info)
    Host Machine: XenServer 6.1, Linux version 2.6.32.43-0.4.1.xs1.6.10.777.170770xen (geeko@buildhost) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Wed Apr 17 05:52:03 EDT 2013
    Guest Machine: CentOS release 6.4 (Final), Linux version 2.6.32-358.6.2.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 20:59:36 UTC 2013
    Output of 'fdisk -l' on the Guest Machine. Note: the disks beyond the first 3 attached are not displayed -- there should be 4 100GB disks. (There are a total of 5 disks displayed in XenCenter -- 16GB, 100GB, 100GB, 100GB, 100GB.)
        Disk /dev/xvdb: 107.4 GB, 107374182400 bytes; 255 heads, 63 sectors/track, 13054 cylinders; Units = cylinders of 16065 * 512 = 8225280 bytes; Sector size (logical/physical): 512 bytes / 512 bytes; I/O size (minimum/optimal): 512 bytes / 512 bytes; Disk identifier: 0xfb6c95b9; /dev/xvdb1 1 13054 104856223+ 83 Linux
        Disk /dev/xvda: 17.2 GB, 17179869184 bytes; 255 heads, 63 sectors/track, 2088 cylinders; Units = cylinders of 16065 * 512 = 8225280 bytes; Sector size (logical/physical): 512 bytes / 512 bytes; I/O size (minimum/optimal): 512 bytes / 512 bytes; Disk identifier: 0x000e5f41; /dev/xvda1 * 1 64 512000 83 Linux (Partition 1 does not end on cylinder boundary.); /dev/xvda2 64 2089 16264192 8e Linux LVM
        Disk /dev/xvdc: 107.4 GB, 107374182400 bytes; 255 heads, 63 sectors/track, 13054 cylinders; Units = cylinders of 16065 * 512 = 8225280 bytes; Sector size (logical/physical): 512 bytes / 512 bytes; I/O size (minimum/optimal): 512 bytes / 512 bytes; Disk identifier: 0xed249ced; /dev/xvdc1 1 13054 104856223+ 83 Linux
        Disk /dev/mapper/vg_blue-lv_root: 14.6 GB, 14571012096 bytes; 255 heads, 63 sectors/track, 1771 cylinders; Units = cylinders of 16065 * 512 = 8225280 bytes; Sector size (logical/physical): 512 bytes / 512 bytes; I/O size (minimum/optimal): 512 bytes / 512 bytes; Disk identifier: 0x00000000
        Disk /dev/mapper/vg_blue-lv_swap: 2080 MB, 2080374784 bytes; 255 heads, 63 sectors/track, 252 cylinders; Units = cylinders of 16065 * 512 = 8225280 bytes; Sector size (logical/physical): 512 bytes / 512 bytes; I/O size (minimum/optimal): 512 bytes / 512 bytes; Disk identifier: 0x00000000
    I see that the Linux versions say SMP. The Guest VM doesn't say "xen" in the name. However, I have already run yum install kernel-xen. Could that be a clue?
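    Two generic checks (not from the original post) that show which block devices the guest kernel actually registered, independent of what fdisk prints:
        # All block devices the kernel knows about, partitioned or not
        cat /proc/partitions
        # Kernel messages about xvd devices being attached by the PV driver
        dmesg | grep -i xvd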

    Read the article

  • Win2008: Boot from mirrored dynamic disk fails!

    - by Daniel Marschall
    Hello. I am using Windows Server 2008 R2 Datacenter, I have two 1.5TB S-ATA2 hard disks installed, and I want to make a soft RAID. (I do know the disadvantages of soft RAID vs. hard RAID.) I have the following partitions on Disk 0: (1) Microsoft Reserved, 100 MB (dynamic), created during setup (2) System partition, 100 GB (dynamic) (3) Data partition, 1.2 TB (dynamic). I have already mirrored these contents to Disk 1. Its contents are: (1) System partition mirror, 100 GB (dynamic) (2) Data partition, 1.2 TB mirror (dynamic) (3) Unused 100 MB (dynamic) -- this is from the "MSR" of Disk 0, created during setup. Since the data and system partitions are mirrored, I expect my system to keep working if disk 0 fails. But it doesn't. If I force booting from disk 0, it works (I get the bootloader screen with the 2 entries). If I force booting from disk 1 (F8 for BBS), nothing happens: I get a blank black screen with a blinking caret. I have already made disk1/partition1 active with diskpart, but it still does not boot from this drive. Please help. Both disks are in "MBR" partition style. They look equal, except for the missing "MSR" partition at the beginning of disk 1 (which does not seem to be relevant to booting). Regards, Daniel Marschall

    Read the article

  • NFS confusion - writing many small files

    - by Antonis Christofides
    I have a Debian squeeze amd64 which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab: 176.9.116.102:/mydir /nfs4mounts/mydir nfs4 soft 0 0 I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS directory goes at more than 100 MB/s.) I am confused by the description of async in NFS. If my application accesses the local directory, system calls like write and close return even if caches have not been flushed to permanent storage. Apparently this is not true with NFS sync behaviour. However, with NFS async behaviour, even calls like fsync are ignored. Isn't it possible to work like local files, i.e. generally work asynchronously, but honour fsync and O_SYNC?
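    For reference, a sketch of what the async export mentioned above could look like on the server side (every option except async is an assumption on my part):
        # /etc/exports: async lets the server acknowledge writes before they reach disk,
        # which is what speeds up the many-small-files case (at the cost of data loss if the server crashes)
        /nfs4exports/mydir  176.9.116.102(rw,async,no_subtree_check)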

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through Virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the remaining Internet. In this setup we're experiencing a clearly recognizable lag between double-clicking a channel in the client and the channel joining action happening. This happens with a lot of different clients between 1.2.3 and 1.2.4 on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16ms latency for both the voice and control channel. The deviation for the control channel is usually a lot higher than that of the voice channel. In some situations the control channel is displayed with a 100ms ping and about 1000 deviation. It seems the TCP performance is a problem here. We had no problems on an earlier setup which was in principle quite like the new one. We used a Debian Lenny based Xen hypervisor with a soft-virtualised guest machine instead, and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1

    Read the article

  • Mouse clicks suddenly stopped working in Ubuntu

    - by DisgruntledGoat
    This is a weird one. For some reason, last night my mouse partially stopped working. Movement is fine, but the mouse buttons don't work. Mainly it's the left button, but occasionally the right click and scroll-wheel fail too. Initially I thought it could be the mouse itself (the left button seemed to get a bit "soft" recently), but I tried another mouse and had the same issue. Both are USB wireless optical mice. The keyboard is mostly (95%) working okay; the only problem is that Alt+Tab doesn't seem to work. Both keys work fine independently. At the time it happened I was using Chrome: I dragged the scrollbar to scroll, and when I released the mouse it was still holding the scrollbar. I'm using Ubuntu 9.10; I upgraded weeks ago and everything was working fine, so I don't think it's related to that. I also hadn't run any updates (I have now, just in case something fixed it). But no luck. Any ideas?

    Read the article

  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machines: an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory:
        server:~$ sudo /usr/sbin/rpcinfo -p localhost
        program vers proto  port
        100000   2   tcp    111  portmapper
        100000   2   udp    111  portmapper
        100024   1   udp    910  status
        100024   1   tcp    913  status
        100021   1   udp  53391  nlockmgr
        100021   3   udp  53391  nlockmgr
        100021   4   udp  53391  nlockmgr
        100021   1   tcp  32774  nlockmgr
        100021   3   tcp  32774  nlockmgr
        100021   4   tcp  32774  nlockmgr
        100007   2   udp    830  ypbind
        100007   1   udp    830  ypbind
        100007   2   tcp    833  ypbind
        100007   1   tcp    833  ypbind
        100011   1   udp    999  rquotad
        100011   2   udp    999  rquotad
        100011   1   tcp   1002  rquotad
        100011   2   tcp   1002  rquotad
        100003   2   udp   2049  nfs
        100003   3   udp   2049  nfs
        100003   4   udp   2049  nfs
        100003   2   tcp   2049  nfs
        100003   3   tcp   2049  nfs
        100003   4   tcp   2049  nfs
        100005   1   udp   1013  mountd
        100005   1   tcp   1016  mountd
        100005   2   udp   1013  mountd
        100005   2   tcp   1016  mountd
        100005   3   udp   1013  mountd
        100005   3   tcp   1016  mountd
        server$ cat /etc/exports
        /dir *.my.domain.com(ro)
        client$ grep dir /etc/fstab
        server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0
    All seems well, but when I try to mount, I see the following:
        client$ sudo mount /dir
        mount.nfs: access denied by server while mounting server.my.domain.com:/dir
    And on the server I see:
        server$ tail /var/log/messages
        Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir)
    What am I missing here? How should I be debugging this?
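    Two generic checks (not part of the original exchange) that usually narrow this kind of mismatch down:
        # On the server: show the export list exactly as the kernel applied it
        sudo /usr/sbin/exportfs -v
        # On the client: ask the server's mountd what it exports and to which hosts
        showmount -e server.my.domain.com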

    Read the article

  • HTC Diamond Touch sync problem

    - by Anders
    I have an HTC Diamond Touch with all my contacts etc. on it. However, I did not use it for 6 months while I was abroad. When I start the phone now I realize that the touch screen has stopped working. I have tried restarting, soft resetting, shutting it off etc. but the touch just won't follow commands. However, I can manage the phone by buttons, so it's not frozen. Hence I can get into the phone and view contacts but not use it to call etc. The problem is, how do I get my 300 contacts out of the thing!? When I plug in the phone, it lets me choose between "Sync with Outlook" and "Use as storage device". It automatically selects "Use as storage device". I cannot choose the sync option with the buttons, and I cannot change this option afterwards either. In short, I have a phone with all of my contact data and am completely unable to get it out. Any tips/help/suggestions? If possible, preferably one that does not include sending the phone to a hardware workshop for three weeks in order to get it fixed :)

    Read the article

  • Ubuntu network card problem.

    - by Steve Greene
    Hello folks, several days ago I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity; it works fine under Windows Vista. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with SmartCP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, and the driver version is 7.58.0.0). I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully little glitch? My processor is a Mobile AMD Sempron 3500+ at 1.80 GHz, with 1.50 GB RAM and a 32-bit operating system.

    Read the article

  • Linux arp cache timeout values

    - by Jak
    I'm trying to configure sane values for the Linux kernel ARP cache timeouts, but I can't find a detailed explanation of how they work anywhere. Even the kernel.org documentation doesn't give a good explanation; I can only find recommended values to alleviate overflow. Here is an example of the values I have:
        net.ipv4.neigh.default.gc_thresh1 = 128
        net.ipv4.neigh.default.gc_thresh2 = 512
        net.ipv4.neigh.default.gc_thresh3 = 1024
    Now, from what I've gathered so far: gc_thresh1 is the number of ARP entries allowed before the garbage collector starts removing any entries at all. gc_thresh2 is the soft limit, which is the number of entries allowed before the garbage collector actively removes ARP entries. gc_thresh3 is the hard limit, where entries above this number are aggressively removed. Now, if I understand correctly, if the number of ARP entries goes beyond gc_thresh1 but remains below gc_thresh2, the excess will be removed periodically with an interval set by gc_interval. My question is: if the number of entries goes beyond gc_thresh2 but stays below gc_thresh3, or if the number goes beyond gc_thresh3, how are the entries removed? In other words, what do "actively" and "aggressively" removed mean exactly? I assume it means they are removed more frequently than what is defined in gc_interval, but I can't find out by how much.
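    For context, a sketch of how these knobs sit together in sysctl.conf; the threshold values are the ones quoted above, and gc_interval/gc_stale_time are the timers the question refers to (30 and 60 seconds are the usual kernel defaults):
        # How often the neighbour garbage collector runs (seconds)
        net.ipv4.neigh.default.gc_interval = 30
        # How long an entry may go unconfirmed before it is considered stale (seconds)
        net.ipv4.neigh.default.gc_stale_time = 60
        # Thresholds discussed above
        net.ipv4.neigh.default.gc_thresh1 = 128
        net.ipv4.neigh.default.gc_thresh2 = 512
        net.ipv4.neigh.default.gc_thresh3 = 1024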

    Read the article

  • Wireless card overheating?

    - by Sidney
    Ok, so I've had my laptop for several years (I wanna say 4, but possibly more); it's a Toshiba Satellite. I'm running Linux Mint 15, and am having a strange new issue: after several hours of running, my wireless stops. It can SEE wireless networks, but refuses to connect to any of them. (On a side note, connecting to a router with a cable at this point works fine.) The fact that it can SEE the networks makes me think the card is in good condition and that it's software related. The fact that it works for several hours before booting me off makes me think perhaps the transmitter is getting too hot. I don't use my laptop in dusty environments, and keep it on an elevated surface (alternatively, I actively try not to let it sit on soft surfaces where the vents get covered). I spray out the CPU fan with compressed air about once a year, so I really don't think the insides should be too dirty. Finally, unfortunately, sensors only gives me CPU temps, but they run about 40-50 degrees C, which from my understanding is perfectly normal for an i3. Does anyone have any suggestions on what I can do to determine the root cause of this?

    Read the article

  • iMac G5 with no OS nor CD drive

    - by sinekonata
    What I want: Ubuntu on a G5 iMac. What I have: an empty PC (Intel G5 17" iMac) with a broken CD drive; its model is A1173. This PC, with Ubuntu 12.04 and an old Vista partition. A USB flash drive. Problems: No CD means the only boot drive I could use is USB. There is no BIOS on Macs, so I can't set boot settings or even see if it detects my USB drive. When I start the machine and press ALT, the first and only thing I see is an old corrupted WinXP partition, and not a single option or piece of additional information. So, assuming blindly that the Mac hardware/firmware works normally, I don't have any Mac OS to use any of the tools that I found in different tutorials for building a bootable .img drive for Macs. I can't find much software on Linux/Windows to substitute for those tools, for example for converting an .iso file (Win/Linux) to .img (Mac, I guess). This makes me think that the scenario where someone like me has Mac hardware but no Mac OS is extremely rare. So other than finding someone who has a Mac, I have no solution. So I ask: what would you do? The only thing is it should not involve any money (I know Mac software is rarely free), which also excludes getting any Mac OS, unless I can use a free macos.img for a VM or restore the original Mac OS for free. Thank you

    Read the article

  • 403 Forbidden on Apache (CentOS) Server

    - by pouya
    This is my VM setup:
    HOST: Windows 7 Ultimate 32bit
    GUEST: CentOS 6.3 i386
    Virtualization software: Oracle VirtualBox 4.1.22
    Networking: NAT -> (PORT FORWARD: HOST:8080 => GUEST:80)
    Shared Folder: centos
    All the project files go into the shared folder, and for each project a virtual host conf file is created in /etc/httpd/conf.d/, like /etc/httpd/conf.d/$domain. I wasn't able to see anything in my browser before disabling both the Windows firewall and iptables in CentOS. After that, if I type for example http://www.$domain:8080/, all I see is: "Forbidden. You don't have permission to access / on this server. Apache/2.2.15 (CentOS) Server at www.$domain.com Port 8080". A sample Virtual Host conf file:
        <VirtualHost *:80>
        #General
        DocumentRoot /media/sf_centos/path/to/public_html
        ServerAdmin webmaster@$domain
        ServerName www.$domain
        ServerAlias $domain *.$domain
        #Logging
        ErrorLog /var/log/httpd/$domain-error.log
        CustomLog /var/log/httpd/$domain-access.log combined
        #mod rewrite
        RewriteEngine On
        RewriteLog /var/log/httpd/$domain-rewrite.log
        RewriteLogLevel 0
        </VirtualHost>
    The centos shared folder is available to the guest at /media/sf_centos. These are the file permissions for sf_centos: drwxrwx--- root vboxsf (the vboxsf group includes apache and root). So these are my questions:
    1. How do I solve the Forbidden problem?
    2. How do I set up both host and guest firewalls?
    3. How can I improve this development environment to simulate a production environment as much as possible, especially regarding security?
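    One common cause of a 403 when the DocumentRoot lives outside the default tree is that no <Directory> block grants access to it. A sketch in Apache 2.2 syntax, reusing the path from the sample vhost (this is an assumption about the missing piece, not a confirmed fix; on CentOS, SELinux can also block httpd from reading a vboxsf mount):
        <Directory /media/sf_centos/path/to/public_html>
            # Apache 2.2 access directives; 2.4 would use "Require all granted" instead
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>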

    Read the article

  • Linux iptables / conntrack performance issue

    - by tim
    I have a test setup in the lab with 4 machines to test Linux firewall performance, since we got bitten by a number of SYN flood attacks in the last months:
    - 2 old P4 machines (t1, t2)
    - 1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t3), Intel e1000
    - 1 Xeon 5420 DP 2.5 GHz, 8 GB RAM (t4), Intel e1000
    All machines run Ubuntu 12.04 64bit. t1, t2 and t3 are interconnected through a 1 Gbit/s switch; t4 is connected to t3 via an extra interface. So t3 simulates the firewall, t4 is the target, and t1, t2 play the attackers generating a packet storm through it (192.168.4.199 is t4): hping3 -I eth1 --rand-source --syn --flood 192.168.4.199 -p 80. t4 drops all incoming packets to avoid confusion with gateways, performance issues of t4, etc. I watch the packet stats in iptraf. I have configured the firewall (t3) as follows: stock 3.2.0-31-generic #50-Ubuntu SMP kernel, rhash_entries=33554432 as a kernel parameter, and sysctl as follows:
        net.ipv4.ip_forward = 1
        net.ipv4.route.gc_elasticity = 2
        net.ipv4.route.gc_timeout = 1
        net.ipv4.route.gc_interval = 5
        net.ipv4.route.gc_min_interval_ms = 500
        net.ipv4.route.gc_thresh = 2000000
        net.ipv4.route.max_size = 20000000
    (I have tweaked a lot to keep t3 running when t1+t2 are sending as many packets as possible.) The results of these efforts are somewhat odd:
    - t1+t2 manage to send about 200k packets/s each. t4 in the best case sees around 200k in total, so half of the packets are lost.
    - t3 is nearly unusable on the console even though packets are flowing through it (high numbers of soft IRQs).
    - The route cache garbage collector is nowhere near predictable and, in the default setting, is overwhelmed by very few packets/s (<50k packets/s).
    - Activating stateful iptables rules makes the packet rate arriving on t4 drop to around 100k packets/s, effectively losing more than 75% of the packets.
    And this - here is my main concern - with two old P4 machines sending as many packets as they can, which means nearly everyone on the net should be capable of this. So here goes my question: did I overlook some important point in the config or in my test setup? Are there any alternatives for building a firewall system, especially on SMP systems?
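    Since the stateful rules are where most of the throughput disappears, a sketch of the conntrack-related knobs that are usually inspected in this situation (the values are illustrative, not tuned recommendations):
        # Current connection-tracking table limit and current usage
        sysctl net.netfilter.nf_conntrack_max
        cat /proc/sys/net/netfilter/nf_conntrack_count
        # Raise the table size and the hash bucket count (hashsize is a module parameter)
        sysctl -w net.netfilter.nf_conntrack_max=1048576
        echo 262144 > /sys/module/nf_conntrack/parameters/hashsize
        # Or keep port 80 out of conntrack entirely via the raw table, trading state for speed
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK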

    Read the article

  • What are the advantages of registered memory?

    - by odd parity
    I'm browsing for a few low-end servers for a startup and I'm a bit confused about the different memory types. The advantage of ECC is clear - single-bit error correction. When it comes to registered memory it seems more vague, especially in systems that support both registered and unbuffered memory. A Google search mostly finds copies of the Wikipedia article, which states that registered memory chips "...place less electrical load on the memory controller and allow single systems to remain stable with more memory modules than they would have otherwise". However, I can't find any quantification of this. What I'm wondering about is: Is registered memory an improvement over unbuffered when it comes to soft error rate, or is it purely about the maximum number of modules supported? If yes, at what point (number of modules or GB of memory) do these improvements start to become noticeable? For a specific example, the HP ProLiant DL 120 G6 server manual states that the maximum supported memory configuration is 16 GB unbuffered (4x4GB) or 12 GB registered (6x2GB). In this case I'd rather have the extra 4GB of memory if the reliability difference is negligible.

    Read the article

  • Thoughts about shared storage (NFS, Lustre) [closed]

    - by user134880
    Possible Duplicate: Can you help me with my capacity planning? I have a small cluster with a total of 8 nodes. 6 of them are computing nodes (Apache and VMware) and 2 nodes are for storage. The 2 storage nodes are identical: each storage server is a Linux box with 8 x 1TB WD RE4 drives in soft RAID 10. The 1st box is the master and the 2nd is the slave; data is mirrored with DRBD. We export NFSv4 shares to Apache (for the document root) and iSCSI to VMware. Right now everything is working pretty well and is stable, but it will soon be time to upgrade our system. I have been thinking of Lustre. Does anyone have any real experience with Lustre or with medium-sized NFS clusters? Would it be a good idea just to upgrade the servers and change the HDDs to 3TB? With NFS we will always have only 2 servers to maintain (one primary and one slave). Thanks. QUESTIONS: 1) Has anyone used Lustre? In production? I have seen a lot of info about how hard it is to set up Lustre because you need to compile your own kernel and patches, but those are answers from newbies. Is there anyone who has used Lustre for some period of time? 2) About the disk upgrades: it's only a description of the strategy. I'm not asking whether 3TB is enough or not. I'm just asking whether it is right to simply replace the HDDs instead of adding a new server (like with Lustre). Thanks again.

    Read the article

  • My 3D Games crashing in these scenarios

    - by desaivv
    I have a situation here which I am unable to solve. I bought a PC last year in March; here are my specs:
    - Intel Core i3 550 @ 3GHz
    - 4 GB RAM @ 400 MHz
    - XFX GeForce 9500 GT graphics card, 1GB @ 550MHz
    - 500 GB HDD
    Lately, as soon as I load my save game of Skyrim, it crashes. I have been playing Skyrim since I joined the Gaming.SE site. Crashes as in the entire scene gets red lines. I cannot ALT+TAB back or even CTRL+ALT+DEL. My only recourse is a hard reset via the power button. I cannot take a screenshot either. I have the latest ForceWare 296.10 drivers as well. This has been happening for the last 2 weeks. I always use Driver Sweeper to clean out my old drivers, since that is what XFX recommends before installing new drivers. I installed MSI Afterburner lately to see my GPU temperature. My GPU is at default settings; I never overclocked it. In MSI Afterburner I cannot adjust the fan speed; it is greyed out, and in settings there is no fan tab. With normal Internet browsing it stays at 51 C. I ran Memtest86 overnight with level 11; it took about 13 hours, but there were no errors in my RAM. I even re-installed my OS, with just the 296 drivers. The fan for the GPU does come on. I can play Diablo 2, but I cannot get past Warcraft 3's menu selection. There WAS some dust in my machine, but I always try to keep everything clean, since in my home town dust is an issue, and I always keep my entire PC cabinet cool. My friend came over with his working graphics card (we bought our PCs at the same time with the exact same specifications). His card did not work either: same problem with the scene freezing with red lines. I did do my research before posting here. That is how I was able to learn about MSI Afterburner, Driver Sweeper, SpeedFan etc. I also followed posts on Tom's Hardware regarding people who had similar problems. One suggestion, which reportedly worked for others, was to "bake the card in an oven". Since I bought the machine I have played Diablo 2 for months, the StarCraft 2 campaign for months, and recently Skyrim for months. I bought ME3 also. I am at my wits' end. I do not know what else to do. I can go out and buy a card, but my friend's card did not work either. I can use the machine for Eclipse or VS2010 development just fine, just not for 3D gaming. I originally posted this question on Gaming.SE but was directed here. I have browsed the SU database for my problem and found this, this, and this, but none of these cover my question. My machine is only one year old. Can some experienced superuser(s) shed some light on this scenario? Is it a 3D graphics card problem? Will a brand new card work? What else can I try to pinpoint the problem? Can it be the motherboard? Thanks.

    Read the article
