Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • RD Gateway reporting features/capabilities

    - by Don
    We have just implemented RD Gateway for our own department in preparation for a push to the whole agency for telecommuting. It is all set up and working great, but I am trying to figure out how best to go about monitoring and reporting on users. I see third-party software out there that will do it, but is there anything built in, or available via PowerShell/scripting, that would give me a report of the daily activity of users? Something that says, "User A connected at this time, was on for this long, sent/received this much data"? Basically some of the same information you can see in Event Viewer. Ideally I'd like to have this set up so that it emails me the daily usage once a day; then, when a supervisor asks whether their person is actually working (or at least online, sending and receiving x amount of data), I'll have some metrics to give them. I realize that actual work output is a separate, more managerial question, but I would like to be able to offer as much as I can from my end when asked. Thoughts? Thanks!
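
    For what it's worth, the rough direction I was imagining (the log name and event ID below are from memory and all addresses are placeholders, so please correct me): the gateway's operational event log records disconnect events that include session duration and bytes sent/received, so a daily scheduled task could scrape yesterday's entries and mail them.

        $log   = 'Microsoft-Windows-TerminalServices-Gateway/Operational'
        $start = (Get-Date).Date.AddDays(-1)   # yesterday 00:00
        $end   = (Get-Date).Date               # today 00:00

        # 303 is (as far as I recall) the "user disconnected" event that carries
        # duration and byte counts in its message text -- verify in Event Viewer.
        $events = Get-WinEvent -FilterHashtable @{ LogName = $log; Id = 303; StartTime = $start; EndTime = $end } -ErrorAction SilentlyContinue

        $body = ($events | ForEach-Object { "$($_.TimeCreated)  $($_.Message)" }) -join "`r`n"
        if (-not $body) { $body = 'No RD Gateway disconnect events found for yesterday.' }

        Send-MailMessage -From 'rdgw-report@agency.example' -To 'me@agency.example' -Subject "RD Gateway usage for $($start.ToShortDateString())" -Body $body -SmtpServer 'smtp.agency.example'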

    Read the article

  • Choose a VPN software on CentOs 6.5

    - by loyCossou
    We are installing an SMS gateway with Kannel on a CentOS 6.5 server, which is supposed to connect via SMPP to our local operators. Kannel is working fine; no problem there. Now two operators are asking to connect via a VPN for obvious security reasons; in fact, they asked for our VPN details so they can connect to it. I am looking for a free VPN that I can set up and configure on our server. I saw OpenVPN and already started configuring it without issue, but I just saw on Wikipedia (http://en.wikipedia.org/wiki/OpenVPN#Platforms) that OpenVPN is not compatible with other VPN packages. My questions: 1) I am absolutely new to VPN technologies; is OpenVPN a good choice in my situation? 2) If I configure OpenVPN on my server, will any client be able to connect to it? 3) Does anyone have any advice for me? Thank you for this great community.
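
    For context, this is the sort of minimal routed server config I had started sketching out (every path and the 10.8.0.0/24 tunnel subnet are placeholders, not values specific to Kannel or the operators). As I understand it, only clients holding a certificate signed by my CA, plus the matching client config, could connect, which I think answers question 2, but please correct me.

        # /etc/openvpn/server.conf -- minimal sketch; assumes keys already
        # generated with easy-rsa, adjust paths and subnet to the real setup
        port 1194
        proto udp
        dev tun
        ca   /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key  /etc/openvpn/keys/server.key
        dh   /etc/openvpn/keys/dh2048.pem
        server 10.8.0.0 255.255.255.0
        keepalive 10 120
        persist-key
        persist-tun
        status /var/log/openvpn-status.log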

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, with a 640GB 2.5" HDD, partitioned to store programs and user settings and also to act as backup (it's the only thing I had lying around at the time of building my PC). The goal was to make the PC as fast as possible while having increased storage capacity available for normal user data, and to assist in my small data recovery business. The problem is that whenever I install a program, it installs to C:\Program Files [(x86) for the 32-bit programs]\, although I have changed the environment variables. This wouldn't normally be an issue, except that every installation program points its shortcut to my 640GB HDD. The root layout of both drives: To clarify: program files get installed to C:\, while program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to the Z:\ drive if the installation program lets me change the installation path, but installers sometimes don't let me change it. Is there a way I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: Found this program; would it be appropriate to use in my case? I would be able to move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
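
    For reference, this is how I've been checking what the system currently advertises as the default install paths (my assumption is that these registry values are the ones installers consult; editing them is unsupported and many installers ignore them anyway, so this is diagnostic only):

        # run in an elevated PowerShell prompt
        Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion' |
            Select-Object ProgramFilesDir, 'ProgramFilesDir (x86)', ProgramW6432Dir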

    Read the article

  • Broken Python installation on CentOS 5.8

    - by Beckett
    I have already searched for a solution to my problem via Google and Stack Overflow's search facility, but haven't found anything related specifically to it. Here's the problem: I needed Python 2.7.3 on a CentOS 5.8 machine which has only Python 2.4.3 preinstalled. There is no suitable version in its repositories, nor can I upgrade the installed version, which is why I decided to build Python from source. But I made a mistake: instead of make altinstall I ran make install, thus replacing the default Python of the current installation. This was before I found this article - How to install Python 2.7.3 on CentOS 6.2. I assume 5.8 and 6.2 aren't different to the extent that the article is inapplicable. After installing the new Python version I installed pip, but once I tried to invoke it, I got a "No module named pkg_resources" error. To solve this I installed setuptools from the repository, but that only led to another error: "Distribution Not Found". My final step was to follow the guide I posted the link to, but I was unable to perform the last step: the easy_install-2.7 virtualenv command threw a "-bash: /usr/local/bin/easy_install-2.7: .: bad interpreter: Permission denied" error. Now when I try to invoke pip or pip-2.7, both commands raise the same error with different binary names after "-bash:". Is there any way to fix this problem so I can install the new Python version (2.7.3) alongside the preinstalled one (2.4.3) according to the guide? Any help will be appreciated. P.S.: yum is working fine, although it needs Python to function, so I hope the damage I unknowingly caused isn't very severe. Also, I'm not a native English speaker, so I apologize for possible occasional grammatical and/or spelling errors.
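
    For the record, here's the recovery route I'm planning to try, assuming my Python 2.7.3 source tree is still on disk (unverified; the key point as I understand it is make altinstall, which leaves /usr/bin/python as 2.4.3 for yum and adds python2.7 alongside it):

        cd Python-2.7.3
        ./configure --prefix=/usr/local
        make
        sudo make altinstall      # installs /usr/local/bin/python2.7, no bare "python"

        # "bad interpreter: ." seems to suggest the script's first (shebang) line is
        # mangled rather than a true permission problem -- worth checking:
        head -1 /usr/local/bin/easy_install-2.7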

    Read the article

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows. Programs that I have heard of, and why I do not believe they are suitable: x264farm: Not actively developed. Good interface, but does not support two-pass encoding, and fails with newer x264 builds. ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working). While I don't absolutely need a program which is being actively developed, I would like one that supports two-pass encoding, and works with new(er) x264 builds. Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for basically is a simple program to allow me to encode x264 videos using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds, and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.

    Read the article

  • Automated Linux VMs on Hyper-V 2012

    - by Mick
    I have a requirement to create a ton of Linux VMs for our customers (we run managed infrastructure) on Hyper-V 2012 in the coming months, and I have an issue with automating it. Here is how I need it to work: a user accesses their web page and creates a VM; the VM is created with a unique IP and name; the user logs in over SSH. I know Hyper-V quite well and can work with PowerShell, and I am a C# programmer, so the development side of things is taken care of. I also know enough about Linux to be at least competent: I have used it on and off for a number of years but not done anything enterprise-level with it. All this can be done easily by manual processes, but I need to be able to script or program it to automate it, as there could be hundreds of VMs being created, and I don't know how. My first thought is to have a database of pre-generated names and IPs, but I don't know how to get a Linux VM to boot up and grab one from the database... I suppose a kickstart script would take care of it, but I don't know what to do from there. Here is what is bouncing around in my head:
      - Create a standard Linux build. (Easy to do.)
      - Someone clicks "Create VM"; I pull a name and IP from the database and write it to a kickstart script. (Easy to do.)
      - I could then open the template VHDX file, copy in the script, and save it. (Not sure if this is possible.)
      - The user boots up the new VM and the kickstart script gives it the name and IP I assigned.
    My problem is that I don't know how to open a VHDX file and insert a kickstart script into it... I can't figure it out. I am reaching here and this solution may be miles off... I am more used to creating Windows VMs with scripts and so on, which I am more familiar with... any help would be appreciated. Thanks, Mick
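
    Here's the rough direction I'm considering instead of editing the VHDX (since Windows can't natively write to the ext filesystem inside it): keep the template disk untouched and attach a tiny per-VM ISO containing the generated kickstart/config, which a firstboot script in the guest reads to set its name and IP. Unverified sketch; it assumes the Server 2012 Hyper-V PowerShell module and oscdimg.exe from the Windows ADK, and every name and path is made up.

        $vmName = 'cust-vm-042'                          # pulled from my database
        $cfgDir = "C:\Provision\$vmName"
        New-Item -ItemType Directory -Path $cfgDir -Force | Out-Null
        Set-Content -Path "$cfgDir\ks.cfg" -Value $kickstartText   # text built by my C# code

        # build a small ISO holding just the per-VM config
        & 'C:\ADK\Oscdimg\oscdimg.exe' -j1 -lCONFIG $cfgDir "C:\Provision\$vmName.iso"

        # clone the template disk, create the VM, attach the config ISO, boot
        Copy-Item 'C:\Templates\linux-template.vhdx' "C:\VMs\$vmName.vhdx"
        New-VM -Name $vmName -MemoryStartupBytes 1GB -VHDPath "C:\VMs\$vmName.vhdx"
        Add-VMDvdDrive -VMName $vmName -Path "C:\Provision\$vmName.iso"
        Start-VM -Name $vmName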

    Read the article

  • very slow internet with Linksys WRT54GL only in wireless mode (wired is OK)

    - by gojira
    I bought a new Cisco Linksys WRT54GL router to connect my laptop (running Windows 7) to the internet. I installed the Tomato 1.28 firmware on the router. When I connect the laptop to the router via ethernet cable, everything is fine and I get extremely fast up- and download speeds. When I connect wirelessly, however, websites load extremely slowly - it takes dozens of seconds to load a website! <-- This is my question: how can I fix the wireless speed issue? Gmail, for example, is unusable this way. I tried speedtest.net, but it always fails in the upload part of the test, so I can't even measure the bandwidth (could the fact that it fails in the upload part, not the download part, be an indication of what the problem is?). I have isolated the problem a bit; I am convinced it has to do either with the router itself, the router settings, or the settings of the wireless connection in Win 7. Previously I was using another router by Buffalo and had no problems whatsoever. I have tried to reproduce the settings from the Buffalo router as closely as possible on the Linksys router (same channel, same encryption, etc.). The download speed problem only occurs with the Linksys router, and only in wireless mode! When I exchange the Linksys router with the Buffalo router I have here for testing, the wireless speed is back to normal. Also, I had exactly the same problem before I installed the Tomato firmware, so it has nothing to do with the firmware itself. Notes and things I already tried:
      - Changing the channel: does not seem to affect anything; I am also on the same channel (10) I was previously on with the Buffalo router.
      - QoS is off.
      - Ping to the router itself is OK, ~1 ms.
    Some current settings of the Linksys router:
      WAN / Internet Type: DHCP
      Wireless Mode: Access Point
      B/G Mode: Mixed
      Broadcast: check
      Channel: 10 - 2.457 GHz
      Security: WPA2 Personal
      Encryption: AES

    Read the article

  • Mailgun Is Not Detecting My New MX Records

    - by Tyler Crompton
    When I issue a DiG command to verify my MX records, I get the following output:
      $ dig example.com MX
      ; <<>> DiG 9.9.5-3-Ubuntu <<>> example.com MX
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47700
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5
      ;; OPT PSEUDOSECTION:
      ; EDNS: version: 0, flags:; udp: 4096
      ;; QUESTION SECTION:
      ;example.com.                  IN      MX
      ;; ANSWER SECTION:
      example.com.           85468   IN      MX      10 mxa.mailgun.org.
      example.com.           85468   IN      MX      10 mxb.mailgun.org.
      ;; REMAINDER OF OUTPUT REMOVED FOR BREVITY
    However, when I click "Check DNS Records Now" on Mailgun, it verifies the changes to the TXT and CNAME records but says that my MX records have not been changed.
      Type | Priority | Enter This Value | Current Value
      -----+----------+------------------+--------------------
      MX   | 10       | mxa.mailgun.org  | 10 mail.example.com
      MX   | 10       | mxb.mailgun.org  | 10 mail.example.com
    I updated these records three to four hours ago. I know it said to wait up to twenty-four to forty-eight hours, but I feel that if it detected the other DNS changes, then it should detect the MX record changes. Am I being impatient, or is this a legitimate concern? What do you suggest I do? Note: I'd create a Mailgun tag for this, as I feel it would be appropriate, but I don't have enough reputation to do so.
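
    One check I still need to run (the nameserver below is a placeholder for whichever servers are authoritative for my zone): query the authoritative server directly rather than my local resolver, since I gather a cached answer can look correct locally while the zone Mailgun actually queries still serves the old records.

        dig NS example.com +short                                   # find the authoritative servers
        dig MX example.com @ns1.your-dns-provider.example +short    # ask one of them directly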

    Read the article

  • Installed Bunch of New Fonts on Windows 7 - Now None Show Up and System Lags

    - by Josh Stodola
    So I went to install about 5,000 fonts on my Windows 7 64-bit machine. It was slow to install them, and I had to leave. I came back and my PC was shut down, and I had to go through the Windows recovery BS when I powered it on. Now my computer runs EXTREMELY slow and any program that has a font menu locks up my whole machine (nothing in Microsoft Office works). When I go to "Fonts" in the control panel, it says 0 items. I went through all of the font settings trying to get them to appear. Nothing helps. I tried to bring up the Character Map and that froze up my machine too. How can I fix this? If I do not get this issue resolved soon, I am wiping this drive and going back to XP (and probably never purchasing another version of Windows again). I never had any issues with XP and have had nothing but performance problems when switching to Windows 7. My quad-core intel extreme with 8GB of RAM should never flinch with the kind of work that I do, and something simple like playing a song off an external HD takes up to five seconds on Windows 7. Unbelievable that I had to pay for this crap!

    Read the article

  • How do you initialize networking on a new Xen guest VM?

    - by Marten Veldthuis
    We have a Citrix XenServer setup, and while I personally lean more towards Dev than Ops, I've got an issue that's been bugging me. When you provision a new (Linux/Ubuntu) guest, how do you get it to have the correct IP address? I'd want my application servers to live in the 10.20.0.0/24 range, preferably as .1, .2, etc., so I can keep my sanity. I guess the actual IP address is something set in Linux itself, and Xen can't touch that, but then what's the best practice for getting it done? If you set up DHCP, don't you just move the problem to getting the adapters the "correct" MAC addresses? Do you just have to hardcode a large table of MAC addresses to IP addresses, and then always provision new guests with the correct MAC address on the virtual ethernet adapter? What we currently do is keep an image of an "app server" that we boot a new instance of, and then finalize it with a script that (among other things) modifies the /etc/network/interfaces file to give it the correct IP. But that feels dirty to me, and I feel like surely there must be a better way. Please enlighten me?
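
    For the DHCP variant, the pattern I had in mind (names, MAC, and address are placeholders): the provisioning script sets a known MAC on the new guest's virtual NIC, and a dhcpd host block pins that MAC to the reserved address, so nothing inside the template image ever needs editing.

        # /etc/dhcp/dhcpd.conf (ISC dhcpd) -- one block per provisioned guest
        host appserver-01 {
          hardware ethernet 00:16:3e:00:00:01;
          fixed-address 10.20.0.1;
        }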

    Read the article

  • Exchange 2010 OWA- Open Other User Mailbox

    - by Benjamin Jones
    I just started working for this small firm (30 people) a little while ago, replacing their system admin. The first thing I noticed was that Exchange Server 2010 was WAY out of date; believe it or not, they did not have SP1 installed. So after I installed and configured Exchange 2010 SP3 and redirected OWA, I noticed something in OWA: I could add ANYONE's user mailbox WITHOUT being granted mailbox permission. I created a couple of test users - same thing. I even had another employee give me access to their OWA, and they could open anyone's inbox without being granted permission. I don't want to play the blame game, but I was SHOCKED that this was going on. Luckily, being such a small company, I'll be able to cover this mistake that I did not create - BUT HOW? My guess is that I need to find out where the previous system admin went wrong in granting Full Access permission. Or could this be an auto-mapping issue? I found this article: http://technet.microsoft.com/en-us/library/hh529943.aspx. This might work:
      $FixAutoMapping = Get-MailboxPermission sharedmailbox | where {$_.AccessRights -eq "FullAccess" -and $_.IsInherited -eq $false}
      $FixAutoMapping | Remove-MailboxPermission
      $FixAutoMapping | ForEach {Add-MailboxPermission -Identity $_.Identity -User $_.User -AccessRights:FullAccess -AutoMapping $false}
    However, how do I run the above code in PowerShell? Again, I was thrown into this and I'm just trying to iron out the tangled mess.
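
    Before removing anything, I'm planning to audit which mailboxes actually carry explicit Full Access grants; roughly like this, run from the Exchange Management Shell (which, as I understand it, is also where the snippet above gets pasted, with "sharedmailbox" replaced by the mailbox in question):

        Get-Mailbox -ResultSize Unlimited |
            Get-MailboxPermission |
            Where-Object { $_.AccessRights -eq "FullAccess" -and $_.IsInherited -eq $false -and $_.User -notlike "NT AUTHORITY\SELF" } |
            Select-Object Identity, User, AccessRights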

    Read the article

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:
      server {
        listen 443 default ssl;
        server_name <redacted>.com www.<redacted>.com;
        root /home/app/<redacted>/public;
        passenger_enabled on;
        rails_env production;
        ssl_certificate /home/app/ssl/<redacted>.com.pem;
        ssl_certificate_key /home/app/ssl/<redacted>.key;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X_FORWARDED_PROTO https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Url-Scheme $scheme;
        proxy_redirect off;
        proxy_max_temp_file_size 0;
        location /blog {
          rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
        }
        location ~* \.(js|css|jpg|jpeg|gif|png)$ {
          if (-f $request_filename) {
            expires max;
            break;
          }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }
    Here's the non-SSL config:
      server {
        listen 80;
        server_name <redacted>.com www.<redacted>.com;
        root /home/app/<redacted>/public;
        passenger_enabled on;
        rails_env production;
        location /blog {
          rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
        }
        location ~* \.(js|css|jpg|jpeg|gif|png)$ {
          if (-f $request_filename) {
            expires max;
            break;
          }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }
    Let me know if there's any additional info I can give to help diagnose the issue.
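
    One thing I'm wondering about: since passenger_enabled means nothing is actually being proxied, the proxy_set_header lines may simply be ignored, so Rails (force_ssl / rack-ssl) never sees X-Forwarded-Proto and keeps redirecting to https. My understanding is that with Passenger 3 the header can be injected in the SSL server block roughly like this (directive name from memory; I still need to check the Passenger 3 nginx docs for my exact version):

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;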

    Read the article

  • Looking for some IIS redirect help/ideas

    - by CoreyT
    Right now we have a site with a LOT of static ASP pages, such as www.site.com/123.asp. This is due to how our current site's CMS builds its pages by default. I don't have an exact count, but we have roughly 6000 ASP files in the site right now. We are in the middle of a redesign and restructuring of the site, and are looking to migrate to SEO-friendly URLs. The problem we're having right now is what to do to redirect the old pages to the new friendly URLs. I know how to do redirects; that is not the issue here. The problems I am coming up with right now are listed below.
      1 - Is there a limit to the number of redirects in IIS?
      2 - Would having even a few thousand redirects affect IIS performance?
      3 - My understanding is that we would not be passing along page rank to the new URLs; is that true? (Not a major question - I can ask on SEO forums if nobody here is sure.)
      4 - Would using something like the IIS URL Rewrite 2 module for IIS 7 help us out? Or would I still need to define several thousand unique redirects in it?
    Our server right now is running Server 2003; however, in the redesign I would be open to migrating to Server 2008 R2 if there is a good case for it (i.e. the URL Rewrite module). Thanks for any guidance or help. I have been looking for a good way to do this for a while now and keep coming up with approaches that sound problematic and bad (such as having 6000 redirects).
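
    On question 4, my reading is that URL Rewrite 2 supports rewrite maps, so the thousands of old-to-new pairs could live in one lookup table feeding a single redirect rule rather than thousands of individual rules. Something like this sketch (web.config fragment; the map entries and new URLs are invented placeholders):

        <rewrite>
          <rewriteMaps>
            <rewriteMap name="OldAspRedirects">
              <add key="/123.asp" value="/products/widget-name" />
              <add key="/124.asp" value="/about/history" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Redirect old ASP pages" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{OldAspRedirects:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>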

    Read the article

  • mod_fcgi in virtualmin: graceful kill fail, sending SIGKILL?

    - by mgjk
    Yesterday around 1am, our server ground to a crawl. This doesn't happen often, but I'm trying to get to the bottom of it. There was no unusual traffic volume and no unusual processes running; all of a sudden the server just started killing fcgid processes:
      [Thu Aug 02 01:17:32 2012] [warn] mod_fcgid: process 26460 graceful kill fail, sending SIGKILL
    ... for as many fcgid processes as we have. CPU idle fell to 0% and I/O seemed to take up most of the load. The issue lasted about 5 minutes. I suspect there was some swap activity, although I'm not sure whether it was due to killed processes being swapped in to die, or because some process ramped up memory usage faster than my process-watching scripts could see it. The oom-killer wasn't triggered (at least it's not logged), so I think this was Apache restarting the processes for some reason. This is not regular, and nothing obvious appears in cron. Is there a normal Apache process that might cause this? We run dozens of different sites, and it was late at night, so volume was very, very low (maybe 200 requests in a 10-minute period).

    Read the article

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. The problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions: is there any good/easy/recommended way to run something like Ubuntu as a host and CentOS 5.x as a guest OS (and with which system - Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option, just recently included in RHEL 5.4, but if hardware support for virtualization like Intel VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. I would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers, and each one needs to run locally licensed commercial software, so building out separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was a step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
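
    As a quick check on the hardware-assist question, my understanding is that a non-zero count from this one-liner on a developer's Ubuntu box means the CPU advertises Intel VT-x (vmx) or AMD-V (svm), which KVM needs for full virtualization (though the feature can still be disabled in the BIOS):

        egrep -c '(vmx|svm)' /proc/cpuinfo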

    Read the article

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am using Xubuntu as a virtual OS. The reason I chose this approach is so that I don't have to keep turning off Windows and rebooting from Xubuntu every time I needed to switch OSes. And VirtualBox's seamless mode is pretty amazing to allow me see Xubuntu and Windows 7 all in one screen. Issue: Now I am thinking of a way to have Xubuntu more integrated into my system. By this I mean I want to have a physical partition for Xubuntu. But I want to still have the feeling of the seamless mode. Question: So finally, my question is: is it possible to load a partition in VirtualBox as a virtual OS? Case examples: Ideal scenario would be: I physically boot up and login to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, login to Xubuntu, and can access the actual Windows 7 partition through VirtualBox. Other info: Please note that I am not talking about getting access to files, as I have a completely separate partition for my files, and am very familiar with VirtualBox's Shared Folders option.
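
    The mechanism I've found so far is VirtualBox's raw-disk VMDK, roughly like this from an elevated Command Prompt on the Windows 7 host (drive and partition numbers are placeholders I'd confirm in Disk Management first, and from what I've read the guest must never be allowed to touch a partition the host currently has mounted):

        cd "C:\Program Files\Oracle\VirtualBox"
        VBoxManage internalcommands createrawvmdk -filename C:\VMs\xubuntu-raw.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3

    The resulting .vmdk can then be attached to a VM like any other disk. The reverse direction (booting the physical Windows 7 partition from inside Xubuntu) apparently uses the same command with /dev/sda instead of \\.\PhysicalDrive0, but I gather that booting a physical Windows install both natively and virtualized tends to upset activation and drivers, so I'd treat that side with extra care.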

    Read the article

  • How can I setup a Proxy I can sniff traffic from using an ESX vswitch in promiscuous mode?

    - by sandroid
    I have a pretty specific requirement, detailed below. Here's what I'm not looking for help with, to keep things tidy and on topic: how to configure a standard proxy, any ESX setup required to facilitate traffic sniffing, how to sniff traffic, or any changes in design (my scope limits me). I need to set up a test environment for a network-sniffing-based HTTP app monitoring tool, and I need to troubleshoot a client issue, but the client only has a prod network, so making changes to the config on the client's system "just to try" is costly. The goal here is to create a similar system in my lab, hit the client's webapp, and redirect my traffic - using a proxy - into the lab environment. The reason I want to use a proxy is so that only this specific traffic is redirected for all to see, and not all my web traffic (like my visits to Server Fault :P). Everything will run inside an ESX 4.1 machine. In there, there is a traffic-collection vswitch in promiscuous mode that is not on the local network for security reasons. The VM containing our listening agent is connected to this vswitch. On the same ESX host, I will set up a basic Linux server and install a proxy (either Apache + mod_proxy or Squid, it doesn't matter). I'm looking for ideas on how to deploy this for my needs so I can then figure out how to set it up accordingly. One idea I've had is to set up two proxies and have them talk to each other through this vswitch in promiscuous mode, but it seems like a lot of work. Another idea is a dual-homed proxy, but I've never seen or done that before, so I'm not sure how doable it is for what I'd like. I am OK with setting up a second vswitch in promiscuous mode to facilitate this if need be, but I cannot put the vswitch on the LAN (which is used so my browser can communicate with the proxy) in promiscuous mode. Any ideas are welcome.

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms so I can search better. We have a very large database at a client's site, and we would like to have up-to-date copies of some of the tables sent across the internet to our servers at our office. We would like to copy only a few of the tables because the bandwidth requirement to do log shipping of the entire database (our current solution) is too high. Also, replication directly to our servers is out of the question, as our servers are not accessible from the internet and management does not want to do replication (more on that later). One possible idea we had is to do some form of replication of the tables we need into another database on the same server and do log shipping of that second, smaller database, but management is concerned because the clients have broken replication on us in the past (it was between two servers on their internal network, however) and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management to do it. This is to be set up at multiple sites that are running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment and need some tips on the best way to proceed. Background: the SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating to it correctly. This was mainly done to provide fault tolerance for the domain; I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost. Options: I see (at least) two routes I could take. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.
    Option 1) P2V the primary DC:
      - Power off the primary DC
      - Power off the secondary DC (to prevent USN rollback in case the P2V has an issue)
      - P2V (cold clone) the primary DC
      - Boot the new PDC VM and allow the new hardware to be detected
      - Remove the old NIC hardware from Device Manager
      - Assign the old IPs to the new virtual NICs
      - Reboot the PDC VM, confirm connectivity and no major issues
      - Power on the secondary DC, confirm replication
    Option 2) Create a new VM, transfer roles, remove the original DC from the domain:
      - Create a new VM and install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions them.)
      - Add the VM to the domain and promote it to the DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
      - Set up RRAS on the new VM
      - Set up IIS/FTP on the new VM
      - Move file shares to the new VM
      - Transfer FSMO roles to the new VM DC
      - dcpromo the original primary DC out of the domain

    Read the article

  • Hard crash when using bluetooth headset on MacBook Pro and Lion 10.7.2

    - by jtalarico
    I recently picked up a bluetooth headset (Motorola S10-HD) and started using it with my MacBook Pro (17", purchased new in 2010) running Lion 10.7.2.
    Here's what works - stereo audio:
      - iTunes
      - Spotify
      - Pandora (via browser)
      - games (e.g. Minecraft, which is a Java app)
      - audio from YouTube
      - Plex, VLC, other video players
    Here's what doesn't work well - stereo audio fails and the headset seems to go into mono (i.e. tinny-as-hell) mode:
      - Google Hangout
      - Skype
      - GoTo Meeting
    Here's what's just downright catastrophic: if I'm listening to stereo audio and then decide to jump into a Skype call (or Google Hangout, or GoTo Meeting), bluetooth often crashes and I can only get things working again by shutting down the device, disabling bluetooth, and getting things back up and running again. Even then the audio is still horrible, and MUCH better using just a simple set of iPhone earbuds and mic. About 80% of the time during such a call, bluetooth crashes. And about 90% of the time, after the call ends, Skype is shut down, or I try to switch back to playing stereo audio, I get a hard crash!! The gray screen of death descends and I'm told I need to restart my machine. In one such instance, even after a reboot, I could not enable bluetooth again ("Turn Bluetooth On" was grayed out in the taskbar). Is this just a weak implementation of bluetooth by Apple, or is this a hardware issue? I've seen others posting similar issues, even on the Apple support site, indicating that bluetooth headsets are failing left and right, but I haven't seen anyone mention hard crashes.

    Read the article

  • Growing a Linux software RAID5 array

    - by chrismetcalf
    On my home file server, I've got a 1.5TB software RAID5 array, built from four 500gb Western Digital drives. I've got a fifth drive that I usually run as a hot spare (but have out of the array at the moment), but if I can I'd like to add that to the array and grow it to 2TB since I'm running out of space. I Googled for guidance, but there seem to be a lot of differing opinions out there (many of them probably now out-of-date) as to whether or not that is possible and/or smart. What's the right way to go about this, or should I start looking into building a new array with more space? Version details:
      %> cat /etc/issue
      Debian GNU/Linux 5.0 \n \l
      %> uname -a
      Linux magrathea 2.6.26-1-686-bigmem #1 SMP Sat Jan 10 19:13:22 UTC 2009 i686 GNU/Linux
      %> /sbin/mdadm --version
      mdadm - v2.6.7.2 - 14th November 2008
      %> cat /proc/mdstat
      Personalities : [raid1] [raid6] [raid5] [raid4]
      md1 : active raid1 hdc1[0] hdd1[1]
            293033536 blocks [2/2] [UU]
      md0 : active raid5 sde1[3] sda1[0] sdc1[2] sdb1[1]
            1465151808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
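
    For reference, the procedure I keep seeing recommended looks roughly like this (the device name is a placeholder; I'd take a backup first, expect the reshape to run for many hours, and I'm not yet sure whether an mdadm as old as 2.6.7.2 reshapes RAID5 cleanly, hence the question):

        mdadm --add /dev/md0 /dev/sdf1            # add the fifth disk as a spare
        mdadm --grow /dev/md0 --raid-devices=5    # reshape the 4-disk RAID5 to 5 disks
        cat /proc/mdstat                          # watch until the reshape completes
        resize2fs /dev/md0                        # then grow the filesystem to fill it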

    Read the article

  • Ubuntu "No space left on device" for /home, df shows 100% full, ds shows much, much less

    - by Jon Cram
    On an Ubuntu 12.04 server, normal users can no longer create or add to files in /home, encountering a "No space left on device" error. The /home directory has a capacity of 1.7 terabytes and as far as I can tell is nowhere near full in terms of actual data stored or inodes used. df -h shows:
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/md2        1.0T   18G  955G   2% /
      udev            7.7G  4.0K  7.7G   1% /dev
      tmpfs           3.1G  320K  3.1G   1% /run
      none            5.0M     0  5.0M   0% /run/lock
      none            7.7G     0  7.7G   0% /run/shm
      cgroup          7.7G     0  7.7G   0% /sys/fs/cgroup
      /dev/md3        1.7T  1.7T     0 100% /home
      /dev/md1        496M   45M  426M  10% /boot
    /home indeed looks rather full. du -hs /home suggests otherwise:
      1.4G    /home
    There appears to be no inode issue - df -i:
      Filesystem        Inodes  IUsed      IFree IUse% Mounted on
      /dev/md2        67108864  75334   67033530    1% /
      udev             2013497    527    2012970    1% /dev
      tmpfs            2015816    440    2015376    1% /run
      none             2015816      2    2015814    1% /run/lock
      none             2015816      1    2015815    1% /run/shm
      cgroup           2015816      9    2015807    1% /sys/fs/cgroup
      /dev/md3       113909760 105981  113803779    1% /home
      /dev/md1          131072    239     130833    1% /boot
    I recently deleted many gigabytes of application cache and log data from /home; however, this was in the tens of gigabytes at best and nowhere near the capacity of /home.
    Update 1:
      du -hs --apparent-size /home
      1.2G    /home
      du -hs /home
      1.4G    /home
    What might be going on here?
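
    Update 2: one explanation I've come across and still need to verify - files deleted while a process still holds them open keep their blocks until the last handle closes, which would explain df and du disagreeing. This should list any such files under /home (lsof may need installing; restarting the offending process would free the space):

        sudo lsof +L1 /home    # open files with zero links (i.e. deleted) on the /home filesystem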

    Read the article

  • pam_tally2 causing unwanted lockouts with SCOM or Nervecenter

    - by Chris
    We use pam_tally2 in our system-auth config file, which works fine for users. With services such as SCOM or NerveCenter it causes lockouts. Same behavior on RHEL5 and RHEL6. This is /etc/pam.d/nervecenter:
      #%PAM-1.0
      # Sample NerveCenter/RHEL6 PAM configuration
      # This PAM registration file avoids use of the deprecated pam_stack.so module.
      auth       include      system-auth
      account    required     pam_nologin.so
      account    include      system-auth
    and this is /etc/pam.d/system-auth:
      auth       sufficient   pam_centrifydc.so
      auth       requisite    pam_centrifydc.so deny
      account    sufficient   pam_centrifydc.so
      account    requisite    pam_centrifydc.so deny
      session    required     pam_centrifydc.so homedir
      password   sufficient   pam_centrifydc.so try_first_pass
      password   requisite    pam_centrifydc.so deny
      auth       required     pam_tally2.so deny=6 onerr=fail
      auth       required     pam_env.so
      auth       sufficient   pam_unix.so nullok try_first_pass
      auth       requisite    pam_succeed_if.so uid >= 500 quiet
      auth       required     pam_deny.so
      account    required     pam_unix.so
      account    sufficient   pam_succeed_if.so uid < 500 quiet
      account    required     pam_permit.so
      password   requisite    pam_cracklib.so try_first_pass retry=3 minclass=3 minlen=8 lcredit=1 ucredit=1 dcredit=1 ocredit=1 difok=1
      password   sufficient   pam_unix.so sha512 shadow try_first_pass use_authtok remember=8
      password   required     pam_deny.so
      session    optional     pam_keyinit.so revoke
      session    required     pam_limits.so
      session    [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
      session    required     pam_unix.so
    The logins do work, but they also increment the pam_tally counter until it hits 6 "false" logins. Are there any PAM ninjas around who could spot the issue? Thanks.
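
    One idea I'm considering (an untested sketch; the account names are placeholders): short-circuit the counter for just the monitoring service accounts, so their repeated binds never feed pam_tally2 while everyone else keeps being counted. The extra line would go immediately above the existing pam_tally2 line in system-auth (or in the service-specific file):

        auth    [success=1 default=ignore]  pam_succeed_if.so user in scomsvc:ncsvc quiet
        auth    required                    pam_tally2.so deny=6 onerr=fail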

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:
      - offline access (data must be available offline on each PC)
      - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
      - efficient sync strategy: some of my files are 20GB and are changed quite often, but in very small parts (VirtualBox images); delta transmissions are welcome
      - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"
      - it must propagate deletions of files
      - modification can happen on any of the 4 PCs and should be propagated when the other PCs are connected
    Other specs of my solution: sync is over a LAN, and the total amount of data to be synced is around 180GB in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins". I haven't found any good solution. I've been trying:
      - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on
      - SparkleShare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely (they promise it will be fixed in the next releases, but at the moment it still doesn't fit my needs)
      - ownCloud: keeps a history of each file I change
      - Coda? (help! I couldn't set it up correctly!)
      - git-annex assistant: turns all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!); before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized
    What should I try next?

    Read the article

  • Grub error 18, gparted not showing anything

    - by Montecristo
    Some weeks ago I started having problems with my PC; sometimes it just froze, not allowing me to do anything, and I had to turn it off and on again, sometimes a couple of times even at startup. Now it does not start at all, and GRUB is giving me error 18. I have found that a solution is to create a bootable partition in the first sector of the disk. GParted does not recognize any partitions; the window that would show my partitions is empty. sudo fdisk -l does not output anything. If I type sudo mount /dev/sda and then hit tab tab to autocomplete, these are the devices that come up: sda sda1 sda2 sda5. If I launch sudo mount -t ext3 /dev/sda1 disk I get the following error:
      mount: wrong fs type, bad option, bad superblock on /dev/sda1,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so
    and dmesg outputs:
      [ 1831.974847] EXT3-fs: unable to read superblock
    Do you know how to solve this issue? I'm not completely sure this is a software problem; should I try a new hard disk?
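
    Before assuming it's purely a GRUB problem, I plan to run a few disk health checks from the live CD (smartmontools may need installing in the live session), since error 18 together with an unreadable partition table and the earlier random freezes could point at failing hardware rather than software:

        sudo smartctl -H /dev/sda            # overall SMART health verdict
        sudo smartctl -l error /dev/sda      # errors the drive itself has logged
        dmesg | grep -iE 'ata|sda' | tail    # kernel I/O errors while probing the disk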

    Read the article
