Search Results

Search found 18736 results on 750 pages for 'support lifecycle'.

Page 172/750

  • GA 8KNXP Rev1.0: 4 GB installed, only 3.5 GB recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should sum to 4 GB. The specification in the manual says: "Maximum memory support: 4 GB. If all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules." And so I did. Anyway: the BIOS counts up to 3.5 GB and stops there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory mapping issue or a hardware issue. I've tried removing only one of the 512 MB modules, leaving 5 modules in place, but that just stopped the system from powering on correctly (the screen stays black although the fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports memory only in slots 1 to 4 and ignores slots 5+6, so it detects only 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think the chipset should technically be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?
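
    A quick way to see where the missing 0.5 GB goes is to compare the firmware's e820 memory map against what the DIMMs report; on a chipset of this era, a hole just below 4 GB is usually address space reserved for PCI/AGP devices, not a bad module. A minimal sketch (run as root):

        # The BIOS-provided physical RAM map; look for reserved
        # regions just below the 4 GB boundary.
        dmesg | grep -i e820

        # What each DIMM slot reports via SPD.
        dmidecode -t memory | grep -E 'Locator|Size'

        # Total memory the kernel actually sees.
        grep MemTotal /proc/meminfo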

    Read the article

  • How to install Red Hat Enterprise Linux on an Apple MacBook Pro (MacBookPro4,1)

    - by Todd V. Rovito
    I have a one-year-old MacBook Pro that I am trying to get RHEL 5.4 installed on via Boot Camp. No matter what I do, I can't get the installer to boot. I have tried multiple DVDs and even verified the install media works on a new MacBook Pro. Most of the time the installer simply locks up. I usually use "linux text" with all-generic-ide on the boot line. I removed the ide parameter and just used "linux text". The result is that a bunch of kernel messages appear, then the background turns blue and a thin text box pops up saying it's loading ata..... something; it disappears too fast for me to read. Then the machine freezes. I pressed the Alt-function keys to see if I could look at the system log; here is what it says:

        Alt-F3 says "trying to mount CD device hda"
        Alt-F4 says:
        status error: hda: lastFailedSense
        hda: failed opcode was: unknown
        hda: lost interrupt
        hda: drive not ready for command
        ide-cd: command 0x3 timed out

    Above this junk it looks like it found the partition, because it knew it was 20 GB and listed it as /dev/sda3. I think it has something to do with the CD drive, is that possible? Thanks again for the support. PS: I posted in the Apple support forums (Apple.com Support Discussions, Boot Camp, Installation and Storage) and didn't get an answer.
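
    For what it's worth, when an installer hangs while probing the optical drive, a few extra kernel parameters at the boot prompt sometimes get past it. A hedged sketch -- these are generic, well-known options, not a confirmed fix for this MacBookPro4,1:

        # At the installer boot prompt, try one variation at a time:
        linux text all-generic-ide irqpoll
        linux text noapic
        linux text ide=nodma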

    Read the article

  • Websockets Server with Fault-Tolerance and Durable Message Store

    - by smitchell360
    I am starting to experiment with WebSockets. Does anyone know of a WebSockets server (open source or paid) that provides a durable store of the WebSocket "channel"? All of the examples that I have found do not address durability -- if a WebSockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel.

    EDIT: I'm not looking for WebSockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports WebSockets and has a durable store for the WebSocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes: 1. support failover scenarios contemplated by the WebSocket Network Working Group draft http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly, so that missed messages are sent when a client connects to a failover server); 2. support scenarios where new subscribers must receive all past messages that were published. Of course this can be handled at the application layer... but that is not what I am looking for.

    EDIT: So, after some research, the following installed options seem to be the most robust: Kaazing and Migratory (http://migratory.ro). Hosted services that seem "real": Pusher (great API but no history feature yet) and PubNub (has history). All of the above services have graceful fallback to other communication methods if WebSockets are not available. I was not able to find any open source server that provided out-of-the-box clustering, fail-over, and a durable message store to play back history. There are some projects that may serve as good starting points, but not exactly what I am looking for.

    Read the article

  • Does a mini PCIe SSD fit into an Acer Aspire One?

    - by Narcolapser
    Question: What, if any, mini PCIe SSDs fit into the mini PCIe slot of the Acer Aspire One AOD250? Info: I have an Aspire One and I've been considering loading it with an SSD. The mini PCIe drives fascinate me, so I want to try that approach. They also tend to be cheaper and not much slower (at least not on read time, which matters more for a netbook). But I've heard that sometimes computers don't support certain mini PCIe cards, and I was wondering if anyone knew about the Aspire One. I tried asking Acer tech support, but they didn't know jack and spent the whole time informing me that I would have to support my Ubuntu install on my own, which I was. Anyway, rant aside, I'm looking at this drive: http://www.newegg.com/Product/Product.aspx?Item=N82E16820183252 It states it is exclusively for the Eee PC. Now does that mean it was designed for the Eee PC but will work in my netbook, or is something going to go wrong? (Right now my main concern is it physically not fitting.) Any information would be appreciated. o7

    Read the article

  • L2TP over IPsec VPN with Openswan and xl2tpd can't connect, timeout on CentOS 6

    - by Disco
    I'm setting up L2TP over IPsec on a fresh CentOS 6.3 install. I have iptables flushed, permit all. Whenever I try to connect, I get a 'no reply from vpn' and nothing more. Here's my ipsec.conf file (the server is 1.2.3.4):

        config setup
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=1.2.3.4
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/%any

    My /etc/ipsec.secrets:

        1.2.3.4 %any: PSK "password"

    My sysctl.conf (appended lines):

        net.ipv4.ip_forward = 1
        net.ipv4.conf.default.rp_filter = 0
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.default.send_redirects = 0
        net.ipv4.conf.all.log_martians = 0
        net.ipv4.conf.default.log_martians = 0
        net.ipv4.conf.default.accept_source_route = 0
        net.ipv4.conf.all.accept_redirects = 0
        net.ipv4.conf.default.accept_redirects = 0
        net.ipv4.icmp_ignore_bogus_error_responses = 1

    Here's what 'ipsec verify' gives:

        # ipsec verify
        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                              [OK]
        Linux Openswan U2.6.32/K2.6.32-279.19.1.el6.x86_64 (netkey)
        Checking for IPsec support in kernel                         [OK]
        SAref kernel support                                         [N/A]
        NETKEY: Testing for disabled ICMP send_redirects             [OK]
        NETKEY detected, testing for disabled ICMP accept_redirects  [OK]
        Checking that pluto is running                               [OK]
        Pluto listening for IKE on udp 500                           [OK]
        Pluto listening for NAT-T on udp 4500                        [OK]
        Checking for 'ip' command                                    [OK]
        Checking /bin/sh is not /bin/dash                            [WARNING]
        Checking for 'iptables' command                              [OK]
        Opportunistic Encryption Support                             [DISABLED]

    And I see xl2tpd is listening on 1701/udp:

        udp   0   0 1.2.3.4:1701   0.0.0.0:*   2096/xl2tpd
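
    When the client sees no reply at all, it helps to confirm whether IKE traffic is reaching pluto before suspecting xl2tpd. A minimal debugging sketch (the interface name eth0 is an assumption):

        # Watch the VPN ports while a client attempts to connect.
        tcpdump -ni eth0 'udp port 500 or udp port 4500 or udp port 1701'

        # Check whether an IPsec SA actually gets established.
        ipsec auto --status | grep -i l2tp

        # Pluto and xl2tpd log here on CentOS.
        tail -f /var/log/secure /var/log/messages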

    Read the article

  • Domain workstation acting up and I can't track it down.

    - by DevNULL
    I have a developer with a Windows XP (SP2) 64-bit machine. If the machine is left on overnight (or for any period longer than 5-6 hours), it takes 2-3 minutes to open any local drive, and his network drives are no longer accessible. Here's what the system logs report. Any help? BTW: the problem just started a week ago, and nothing has changed on the domain controller / AD or on his machine.

        --- ERROR 1
        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5719
        Date: 6/8/2010
        Time: 9:17:26 AM
        User: N/A
        Computer: BFC1
        Description: This computer was not able to set up a secure session with a
        domain controller in domain UR due to the following: There are currently no
        logon servers available to service the logon request. This may lead to
        authentication problems. Make sure that this computer is connected to the
        network. If the problem persists, please contact your domain administrator.
        ADDITIONAL INFO: If this computer is a domain controller for the specified
        domain, it sets up the secure session to the primary domain controller
        emulator in the specified domain. Otherwise, this computer sets up the
        secure session to any domain controller in the specified domain.
        Data: 0000: 5e 00 00 c0 ^..A

        --- ERROR 2
        The machine-default permission settings do not grant Local Activation
        permission for the COM Server application with CLSID
        {555F3418-D99E-4E51-800A-6E89CFD8B1D7} to the user
        NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19). This security permission can be
        modified using the Component Services administrative tool.

        --- ERROR 3
        Event Type: Error
        Event Source: RemoteAccess
        Event Category: None
        Event ID: 20106
        Date: 6/8/2010
        Time: 10:12:18 AM
        User: N/A
        Computer: BFC1
        Description: Unable to add the interface {E76F0A78-7A0B-4EBB-A081-BA3BD452FC4C}
        with the Router Manager for the IP protocol. The following error occurred:
        Cannot complete this function.
        Data: 0000: eb 03 00 00 e...

    Read the article

  • VGA to S-Video/Video converter showing a mashed-up picture

    - by Matthijs Wessels
    Earlier I asked this question: http://superuser.com/questions/132374/does-the-lenovo-t60p-vga-port-support-an-s-video-signal As a result I acquired the following item: http://cgi.ebay.nl/ws/eBayISAPI.dll?ViewItem&item=250588098582&ssPageName=ADME:B:EOIBSA:NL:1123 (yeah, I'm a cheapass). Anyway, it just arrived and now I am trying to get it to work. The manual is not very informative, other than telling me: this is the VGA in, this is the VGA out, this is the S-Video out, etc. Plus it tells me the system should support the following resolutions @ refresh rate: 640x480 @ 60/72/75 Hz, 800x600 @ 60/75 Hz, 1024x768 @ 60 Hz. I can connect it to my laptop and then connect the S-Video and composite video outputs to my TV, which only gives me a blurred image (like when you set your monitor to a resolution it doesn't support). The VGA out, however, works fine to my TFT monitor. There are two switches on the converter; I think one switches between S-Video and composite video and the other between PAL and NTSC. But alas, no combination seems to give a better picture (it does give a different picture). Can anyone help me solve this problem? I have downloaded a program called PowerStrip, but I have no idea how to use it or whether it can even solve my problem. Thanks in advance. I use Windows XP on a Lenovo T60p and I am trying to connect it through the converter to a Philips 32PFL7403D/12 LCD TV, from VGA to S-Video or composite video.

    Read the article

  • Picking a Linux-compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time -- specifically the audio. I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around. I have some specific questions about "server motherboards", actually. I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards, other than most had two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well -- same question: what's the difference? To make this question more concrete, I was looking at this motherboard. The main questions about it that I can't answer are: Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card; I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server? Where can I find out about Linux support for the components on this board ("Intel ICH10R", "Realtek ALC889", "Marvell 88E8056")? I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out the possibility of installing Windows and playing some games at some point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard that had GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.
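
    On the driver question: those three chips map to well-known kernel modules -- ahci for the ICH10R SATA controller, snd-hda-intel for the Realtek ALC889 codec, and sky2 for the Marvell 88E8056 NIC (a mapping I believe is correct, but worth verifying). A hedged sketch for checking from any running Linux box:

        # Does the kernel ship a driver claiming the Marvell chip?
        modinfo sky2 | head

        # On running hardware, match PCI devices to the modules in use.
        lspci -nnk | grep -A 2 -E 'SATA|Audio|Ethernet'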

    Read the article

  • How to Re-install an App that Shows up in the App Store as 'Update' Instead of 'Buy App'

    - by Craig Reville
    So, long story short: I dropped the wrong app into CleanMyMac and hit 'Cancel', but it was too late by that point. I rebooted, and the App Store said it had an update; when I opened the App Store it was showing an update for the app I had just uninstalled. I tried clicking 'Update' but it gives me an error saying it's unable to install after 'downloading'. When I try to go into 'Purchased', it shows the app as uninstalled, so I click 'Install' and I get an error saying it's already installed. I'm running OS X Lion, latest version, fully updated; the MacBook Pro is only a few months old. I tried searching through the entire system to remove all traces of the app; after rebooting, the App Store no longer shows the app and no longer shows the update, but on the Apps page it still says 'Update'. I tried reinstalling the app from the desktop, outside the App Store, and again it says the app is 'already installed'. So, after reading more about Lion, I found an article about the bundle ID being the thing that tells the App Store what's installed and what needs updating, but I can't find the location where the bundle ID would be. Any thoughts? I've tried CCleaner, AppCleaner, etc. and none of them show the app, mainly because it is uninstalled.

    Update: I've spoken to Apple Support, who confirmed that there is a file in the system that connects separately to tell the system whether updates are available, but they declined to give me any further details. Apple also referred me from technical support to iTunes App Store support (as opposed to Mac App Store support), and from there I have been referred to AppleCare, who are currently 'investigating' this issue. Hopefully there will be a fix that's simple to implement for people having similar issues; this appears to be a more common issue than I previously thought.
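
    For anyone digging at the same problem: an app's bundle identifier lives in its Info.plist, and Mac App Store apps also carry a receipt inside the bundle; leftovers of either can confuse the installed/updatable state. A hedged sketch for inspecting both ('SomeApp' is a placeholder):

        # Read the bundle identifier from the app's Info.plist.
        defaults read /Applications/SomeApp.app/Contents/Info CFBundleIdentifier

        # Or ask Spotlight for the same metadata.
        mdls -name kMDItemCFBundleIdentifier /Applications/SomeApp.app

        # App Store receipts live here inside the bundle.
        ls /Applications/SomeApp.app/Contents/_MASReceipt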

    Read the article

  • How to set up RAID-0 for the first time on a new PC?

    - by jasondavis
    I have built basic PCs in the past but have never used a RAID array at all. So now I am buying parts to build my new PC; it will have an Intel i7 processor. My motherboard will have RAID support, which I will use instead of an aftermarket RAID controller for now. Also, I plan to use 2 SSD drives in RAID-0 for my Windows 7 OS. (Please note that I am aware of the issues with doing this, including lack of TRIM support when using RAID with SSD drives. I am OK with it not working, as I can just replace the drives in a year or so, or whenever they become more sluggish.) So here is my question. If I assemble the motherboard, PSU, processor, RAM, video card, etc. and then turn the PC on with the 2 SSD drives hooked up, I assume I will see the BIOS screen before I install Windows? How do I go about making the 2 drives work in RAID-0 at this point? I do the RAID part before installing my OS, right? Please help with the steps involved, from assembling the parts of the PC and turning it on, to getting RAID-0 set up between the 2 drives and then installing my Windows 7 OS from an optical drive. All advice, instructions and tips are appreciated, as long as they're on topic. I do not need to be told that this is a bad idea because losing 1 drive loses it all; I plan on having a disk image to be able to restore my OS and software to a new set of drives at any time in the event of drive failure. Same goes for lack of TRIM support. Thanks for reading and helping =)

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
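
    On the bucket-to-bucket question: the AWS CLI (assuming it is available to you) can copy between buckets server-side, without routing the data through the client, and can also switch on versioning so that a DELETE only adds a delete marker while earlier versions stay restorable -- which covers the written-once-then-deleted case. A hedged sketch (bucket names are placeholders):

        # Keep old versions around; deletes become recoverable.
        aws s3api put-bucket-versioning \
            --bucket my-original-bucket \
            --versioning-configuration Status=Enabled

        # Server-side copy into a backup bucket (no local download).
        aws s3 sync s3://my-original-bucket s3://my-backup-bucket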

    Read the article

  • Colorizing your terminal and shell environment?

    - by Stefan Lasiewski
    I spend most of my time working in Unix environments and using terminal emulators. I try to use color on the command line, because color makes the output more useful and intuitive. What are some good ways to add color to my terminal environment? What tricks do you use? What pitfalls have you encountered? Unfortunately, support for color is wildly variable depending on terminal type, OS, TERM setting, utility, buggy implementations, etc. Here's what I do currently, after a lot of experimentation: I tend to set 'TERM=xterm-color', which is supported on most hosts (but not all). I work on a number of different hosts and different OS versions, and I'm trying to keep things simple and generic if possible. Many OSes set things like 'dircolors' by default, and I don't want to modify this everywhere, so I try to stick with the defaults and instead tweak my terminal's color configuration. I use color for some Unix commands (ls, grep, less, vim) and the Bash prompt. These commands seem to use the standard ANSI escape sequences. I've managed to find some settings which are widely supported, and which don't print gobbledygook characters in older environments (even FreeBSD 4!), for the most part. From my .bash_profile:

        ### Color support
        # The Terminal application typically does 'export TERM=xterm-color'.
        # Some terminal types will print black, white & underlined with these settings.
        OS=`uname -s`
        case "$OS" in
          "SunOS" )
            # Solaris 9 ls doesn't allow color, so use special characters instead.
            LS_OPTS='-F'
            ;;
          "Linux" )
            # GNU tools support colors! See dircolors to customize colors.
            export LS_OPTS='--color=auto'
            # Color support using 'less -R'
            alias less='less --RAW-CONTROL-CHARS'
            alias ls='ls ${LS_OPTS}'
            export GREP_OPTIONS="--color=auto"
            ;;
          "Darwin"|"FreeBSD")
            # Most FreeBSD & Apple Darwin ls support colors.
            # LS_OPTS="-G"
            export CLICOLOR=true
            alias less='less --RAW-CONTROL-CHARS'
            export GREP_OPTIONS="--color=auto"
            ;;
        esac
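
    Since the post mentions coloring the Bash prompt but the snippet stops at ls/grep/less, here is a minimal sketch of a colored PS1 using the same ANSI escapes (the color choices are arbitrary):

        # Green user@host, blue working directory. The \[ \] guards tell
        # bash the escapes are zero-width, so line editing stays aligned.
        PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\[\e[0;34m\]\w\[\e[0m\]\$ '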

    Read the article

  • IIS SMTP server (installed on the local server) in parallel with Google Apps

    - by shaharru
    I am currently using the free version of Google Apps for hosting my email. It works great for my official mail; my email on Google is [email protected]. In addition, I'm sending out high-volume mail (registrations, forgotten passwords, newsletters, etc.) from the website (www.mydomain.com) using IIS SMTP installed on my Windows machine. These emails are sent from [email protected]. My problem is that when I send email from the website using IIS SMTP to a mail address [email protected], I don't receive the email in Google Apps. (I only receive these emails if I install a POP service on the server with the [email protected] mailbox.) It seems that IIS SMTP is ignoring the domain's MX records and just delivers these emails to my local server. Here are my DNS records for mydomain.com:

        mydomain.com  A    82.80.200.20  3600s
        mydomain.com  TXT  v=spf1 ip4: 82.80.200.20 a mx ptr include:aspmx.googlemail.com ~all
        mydomain.com  MX   preference: 10  exchange: aspmx2.googlemail.com     3600s
        mydomain.com  MX   preference: 10  exchange: aspmx3.googlemail.com     3600s
        mydomain.com  MX   preference: 10  exchange: aspmx4.googlemail.com     3600s
        mydomain.com  MX   preference: 10  exchange: aspmx5.googlemail.com     3600s
        mydomain.com  MX   preference: 1   exchange: aspmx.l.google.com        3600s
        mydomain.com  MX   preference: 5   exchange: alt1.aspmx.l.google.com   3600s
        mydomain.com  MX   preference: 5   exchange: alt2.aspmx.l.google.com   3600s

    Please help! Thanks.
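
    The usual culprit here is that the IIS SMTP virtual server has mydomain.com configured as a local (default or alias) domain, so it delivers locally and never consults the MX records. A quick, hedged check that the published DNS is fine, from any machine with dig:

        # Should return Google's mail servers; if it does, DNS is fine
        # and the fix lies in the IIS SMTP Domains configuration.
        dig +short MX mydomain.com

        # Windows equivalent:
        # nslookup -type=mx mydomain.com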

    Read the article

  • What determines the BIOS you can use?

    - by andy
    I have a Compaq SR5710F with an MCP61PM-HM (Iris8) motherboard and loaded a GeForce 6100SM-M BIOS onto it. It works fine and opened up some options, but I want to give my computer AM3 socket support. So my question is: if I go looking for a BIOS that will do that for me, what do I need to pay attention to? Is it only the chipset, which is GeForce 6150SE nForce 405, or are there more things I should be looking at? If anyone thinks they might know a BIOS that will help me out, that would be good too. I am looking for a retail BIOS, though; I do not want to load any of the experimental ones you can find on the net. Also, I would need it to support DDR2-800 RAM, and I would like to keep AM2 support too. I know my BIOS is data that is downloaded onto my motherboard, or for lack of a better word, a program. I am currently running a BIOS that is not for my motherboard, so I know it can be done. What I need to know is what to pay attention to when looking for a compatible BIOS that is not for my motherboard.

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows prompts about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs. this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question, instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc.). Or maybe I've just never encountered this before; I'm really not sure. I tried adding "ea support = yes" and "store dos attributes = yes" to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config:

        [global]
        workgroup = WORKGROUP
        server string = %h server
        include = /etc/samba/dhcp.conf
        dns proxy = no
        log level = 2
        syslog = 2
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog only = yes
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = true
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = no
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        guest account = nobody
        load printers = no
        disable spoolss = yes
        printing = bsd
        printcap name = /dev/null
        unix extensions = yes
        wide links = no
        create mask = 0777
        directory mask = 0777
        use sendfile = no
        null passwords = no
        local master = yes
        time server = yes
        wins support = yes
        ea support = yes
        store dos attributes = yes

    Note: I found this related question, but it explains the loss as due to the user transferring from NTFS to FAT32.
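
    A hedged way to narrow this down is to verify that extended attributes work end to end, first on the filesystem and then through the share, since "store dos attributes" writes into a user xattr. A minimal sketch (paths are placeholders; needs the attr package):

        # On the NAS: confirm ext4 accepts user xattrs at all.
        touch /path/to/share/xattr-test
        setfattr -n user.test -v hello /path/to/share/xattr-test
        getfattr -d /path/to/share/xattr-test    # should show user.test="hello"

        # After copying a file from Windows, see what Samba stored
        # (DOS attributes land in user.DOSATTRIB).
        getfattr -d -m - /path/to/share/somefile.jpg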

    Read the article

  • How to verify that the power provided to the processors is clean

    - by GregC
    Once in a blue moon, I see a blue screen of death on a shiny new Dell R7610 with a single 1100-Watt Dell-provided power supply, on a beefy UPS. The bugcheck code is 101 ("A clock interrupt was not received..."), which some say is caused by under-volting a CPU. Naturally, I would have to contact Dell support, and their natural reaction would be to replace a motherboard, a power supply, or a CPU, or a mixture of the above components. In synthetic benchmarks, system memory and CPU, as well as graphics memory and GPU, perform admirably, staying up for hours and days. My questions are: Is the power supply good enough for the application? Does it provide clean enough power to the VRMs on the motherboard? Are the VRMs good enough for dual Xeon E5-2665? Does the C-states logic work correctly? Is sufficient current provided to PCIe peripherals, such as disk controllers?

    P.S. Recently, I went through the same ordeal with HP. They were nice and professional about it, but the root cause was not established, and the HP machine is still less than 100%, giving me a blue screen of death once every couple of months. Here's what quick web searching turns up: http://www.sevenforums.com/bsod-help-support/35427-win-7-clock-interrupt-bsod-101-error.html#post356791 It appears Dell has addressed the above issue by clocking the PCIe bus down to 5 GT/sec in the A03 BIOS. My disk controllers support PCIe 3.0, meaning that I would have to re-validate stability. Early testing shows improvements. Further testing shows a significant decrease in performance on each of the x16 slots with the Dell R7610 on the A03 BIOS, but now it's running stable. The HP machine received a microcode update in the September 2013 SUM (July BIOS) that makes it stable.

    Read the article

  • Is Hyper-V server suitable as a desktop testbench?

    - by Thomas.Winsnes
    At the moment we are running a test bench with several desktop computers that are re-imaged every time we need to test on a different operating system. Also, because different versions of our software are tested on each image, we have to install our software every time we want to test it. The problem we have had with going to a virtualization technology is that our software depends on DirectX/OpenGL and 3D acceleration, and this has not been something that virtual machines have excelled at. With the release of SP1 for Windows 7 and Server 2008 R2, Hyper-V has gotten better 3D acceleration support, so we are looking into virtualizing our test bench using this. Our test scenario would most likely be something close to this: 1. Remote into the Hyper-V server and load the test VM needed for the current tests. 2. Remote into the VM and install the new version of the software. 3. Run the tests. It would be nice, but not essential, if our support team could remote into the VMs to match the user's OS + software combination when doing support. Does anyone have any experience with this kind of setup with Hyper-V?

    Read the article

  • Tomcat vs. full J2EE solutions

    - by jrhickey
    We are getting ready to make a major revision to our web application architecture, which currently runs on JBoss 4.2. At first we were looking at moving from 4.2 to JBoss 6, but after some research Tomcat may be a better solution for us. My first question is: is there anything that JBoss can do that Tomcat cannot, assuming you are using the correct plugins? We do not really use EJBs in our solution, and it would appear there are simple plugins for web services, JMX and other features. Tomcat appears to have much better support, faster upgrade cycles and many, many books. Since there is less to the system, it also seems much easier to support from an admin point of view. What am I missing? The main features we want to enable are better clustering support and session replication/persistence. We will consider other application servers as well, such as Glassfish / Geronimo. A quote from a web article: "Apache Tomcat is the world's most widely used web application server, with over one million downloads per month and over 70% penetration in the enterprise datacenter. Tomcat is used to power everything from simple one-server sites to large enterprise networks."
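
    On the session-replication requirement: Tomcat ships its own clustering, and the minimal form is a one-line addition to server.xml. A hedged sketch of the default setup (all-to-all in-memory replication over multicast membership; each webapp must also be marked <distributable/> in its web.xml):

        <!-- server.xml, nested inside <Engine> or <Host>: enables
             SimpleTcpCluster session replication with default settings. -->
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>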

    Read the article

  • WiFi works with Android and Windows 8 but not Linux and Windows 7

    - by eramm
    Support has told me that our company-wide wifi network is set up to support mobile phones only. However, it doesn't make sense to me that they can identify a mobile device; rather, they have set up the access point to use a protocol that is only supported on Android and Windows phones. Because the access point supports Windows Mobile, this means that laptops running Windows 8 can also connect to the access point (proven). So it stands to reason that, since Android is based on Linux, there must be a way to connect using Linux as well. iwlist shows:

        IEEE 802.11i/WPA2 Version 1
        Group Cipher : TKIP
        Pairwise Ciphers (2) : TKIP CCMP
        Authentication Suites (1) : 802.1x

    Wireshark seems to show that a connection is being made to a website to get a certificate, and that a domain controller is used for authentication. Questions: 1) What protocol could they be using that is supported on Windows Mobile and Android but not on Windows 7 and Linux (Debian)? 2) What tools can I use to help me discover what protocol I need to support? I have used iwlist and Wireshark, but I was not able to glean much useful information from them. I can post the results if needed. 3) Is there an app I can use on my Android phone to help me understand what kind of network it is connecting to? I can provide more information if you tell me how to get it. I just don't know what I am looking for.
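
    The iwlist output above is plain WPA2-Enterprise (802.1x), which Linux handles through wpa_supplicant; Android's built-in PEAP/MSCHAPv2 support is the usual reason phones connect effortlessly while unconfigured Windows 7 and Debian boxes fail. A hedged sketch of a configuration to try -- the SSID, identity and the guess of PEAP/MSCHAPv2 are assumptions to confirm with IT:

        # /etc/wpa_supplicant.conf -- guessing PEAP with MSCHAPv2, the most
        # common corporate setup that Android supports out of the box.
        network={
            ssid="CorpWifi"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="myuser"
            password="mypassword"
            phase2="auth=MSCHAPV2"
        }

    Running it in the foreground with debugging shows how far the handshake gets: wpa_supplicant -i wlan0 -c /etc/wpa_supplicant.conf -d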

    Read the article

  • Partition is missing in /dev

    - by haimg
    I've been having a strange problem since I moved from CentOS 5 to CentOS 6. I have three disks: the first two are used as a RAID 1, and the third is a stand-alone backup disk that is not listed in /etc/fstab (it is mounted when needed and then unmounted). My problem: after a boot, /dev/sdc exists but /dev/sdc1 does not. The links in /dev/disk are also absent for the first partition of sdc. The disk itself is fine, and if I hot-remove it and plug it back in, /dev/sdc1 appears and everything works. My question: what subsystem manages auto-discovery of disks, partitions, etc. during the boot process (e.g. what creates /dev/disk/by-label)? How do I configure it to scan /dev/sdc too and create all the relevant files and links in /dev?

    Edit: Here's the relevant part of the dmesg output (the only place sdc appears). It does list sdc1, but it's not in /dev!

        sd 1:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 3:0:0:0: [sdc] 976773168 512-byte logical blocks: (500 GB/465 GiB)
        sd 1:0:0:0: [sdb] Write Protect is off
        sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
        sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sd 3:0:0:0: [sdc] Write Protect is off
        sd 3:0:0:0: [sdc] Mode Sense: 00 3a 00 00
        sd 3:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sdb:
        sdc:
        sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
        sd 0:0:0:0: [sda] Write Protect is off
        sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
        sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        sda:
        DMAR:[DMA Read] Request device [00:1e.0] fault addr 361bc000
        DMAR:[fault reason 06] PTE Read access is not set
        sdb1 sdb2 sdb3
        sdc1
        sda1
        sd 1:0:0:0: [sdb] Attached SCSI disk
        sd 3:0:0:0: [sdc] Attached SCSI disk
        sda2 sda3
        sd 0:0:0:0: [sda] Attached SCSI disk
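
    Partition nodes are created by the kernel plus udev (udev also maintains the /dev/disk/by-* symlinks). If the node is missing after boot, asking the kernel to re-read the partition table usually recreates it without a hot-plug cycle; a hedged sketch of things worth trying as root:

        # Ask the kernel to re-read sdc's partition table.
        partprobe /dev/sdc

        # Or replay the block-device uevents so udev re-creates the
        # missing nodes and /dev/disk/by-* links.
        udevadm trigger --action=add --subsystem-match=block
        udevadm settle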

    Read the article

  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that ties together a series of applications running on separate servers; many of these applications generate files that I need access to, but these files (for various reasons) reside on their parent servers. If, from the command line, I issue a bit of script such as:

        <?php var_dump(glob("\\\\machine-name\\some\\share\\*"));

    I get the full contents of that directory, proving that there's no problem, programmatically, with PHP reading the contents of a UNC share. However, if I try to execute the same script from the web server, I get an empty array; more specifically, if I use functions designed to explicitly "open" a directory like it was a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct this and get my script running. The links I've checked out have not been a terrible amount of help, perhaps due to my background, or lack thereof, as far as IIS is concerned, coupled with the fact that we are not actually using .NET for this. Relevant stats: Windows Server 2008 Standard SP2, IIS 7.0, PHP 5.2.9. I will be connecting to two types of servers: a few other nearly identical Server 2008 machines, and a machine running embedded XP. Links that have not been particularly helpful, but maybe I am just misreading them:

        http://support.microsoft.com/?id=306158
        http://support.microsoft.com/kb/207671/EN-US/
        http://support.microsoft.com/kb/280383/

    Read the article

  • What characters can safely be used for naming files on Unix/Linux?

    - by Eric DANNIELOU
    Before yesterday, I used only lower-case letters, numbers, dot (.) and underscore (_) for directory and file naming. Today I would like to start using more special characters. Which ones are safe (by safe I mean I will never have any problem)? PS: I can't believe this question hasn't been asked on this site already, but I've searched for the word "naming" and read the canonical questions without success (most are about computer names).

    Edit #1: (BTW, I don't use upper-case letters for file names. I don't remember why. But for a few months I have had production problems with upper-case letters: some OSes do not handle ASCII properly!) Here's what happened yesterday at work. As usual, I had to create a self-signed SSL certificate, and as usual I used the name of the website for the files: www2.example.com.key, www2.example.com.crt, www2.example.com.csr. Then came the problem: generate a wildcard self-signed certificate. I did that and named the files example.com.key, example.com.crt and example.com.csr, which is misleading (it's a certificate for *.example.com). I came back home and started putting some stars in apache configuration file names to see if it works (on a useless home computer, not even staging). Stars in file names really scare me: some coworkers/vendors/... could run some script using rm, find, or xargs that would lead to http://www.ucs.cam.ac.uk/support/unix-support/misc/horror, and already one answer talks about disaster.

    Edit #2: Just figured out that : does not need to be escaped. Does anyone know why it is not used in file names?
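
    A hedged way to probe this empirically: create throwaway files with each candidate character in a scratch directory and see which ones survive your shell and scripts without quoting trouble. A minimal sketch:

        # Only '/' and the NUL byte are forbidden in POSIX file names;
        # everything else is legal, but legal is not the same as
        # hassle-free in scripts (a leading '-' is a classic trap).
        mkdir -p /tmp/name-test && cd /tmp/name-test
        for c in '+' ',' '=' '@' ':' '_' '.'; do
            touch "test${c}file" && echo "created: test${c}file"
        done
        ls -1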

    Read the article
