Search Results



  • CSV engine on MySQL server

    - by Jeff
    I don't think this is a programming question, so I am going to ask it here. Reading the book High Performance MySQL, I came across the CSV engine. The paragraph says:

        The CSV engine can treat comma-separated values (CSV) files as tables, but it does not
        support indexes on them. This engine lets you copy files in and out of the database
        while the server is running. If you export a CSV file from a spreadsheet and save it
        in the MySQL server's data directory, the server can read it immediately. Similarly,
        if you write data to a CSV table, an external program can read it right away. CSV
        tables are especially useful as a data interchange format and for certain kinds of
        logging.

    What I take from this paragraph is that I can copy a .csv file into the data directory of a database, and it should show up as a table that can be read from. However, whenever I copy a test .csv file into the directory, it does not appear as a table and I can't access it. I am using MySQL 5.5. Does anyone know why this is not working, or what I am doing wrong? Thanks
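
    A likely wrinkle here (not stated in the post) is that the server only treats a file as a CSV table once a matching table definition exists; copying a bare .csv in creates no definition. A minimal sketch of the usual workflow, assuming a database named test, a hypothetical page_views table, and the default /var/lib/mysql data directory:

        # create the table definition first, so the server owns a page_views.CSV data file
        # (CSV-engine columns must be declared NOT NULL)
        mysql -u root -p -e "CREATE TABLE test.page_views (
            ts  DATETIME     NOT NULL,
            url VARCHAR(255) NOT NULL
        ) ENGINE=CSV;"

        # now swap in the spreadsheet export and make the server re-open the table
        cp /tmp/export.csv /var/lib/mysql/test/page_views.CSV
        mysql -u root -p -e "FLUSH TABLE test.page_views; SELECT COUNT(*) FROM test.page_views;"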


  • Should windows services be created with custom users, or should I use one of LocalSystem/LocalService?

    - by Justin Dearing
    I'm asking the question in general for the average custom-developed NT service or Unix OSS daemon ported to Windows with SCM support. However, at the moment my immediate concern is for mongodb. From my experience with UNIX, I like all my services to run as different unprivileged users. The way this has translated to Windows is as follows (a worked command-line version follows below):

    1. Create a local (or domain, if it has to talk to SQL Server) Windows user with a long random password (lately an ASCII85-encoded GUID generated on a different machine).
    2. Set the password to never expire and forbid the user from changing it.
    3. Remove that user from the "Users" group.
    4. Grant that user the "Log on as a service" right.
    5. Give it read permission to the folder where the app resides, and write permission to the logs and data files the application uses.
    6. Assign the user to the service.
    7. Troubleshoot until the service starts.

    My feeling is that the unprivileged users are less powerful than the 3 special service users. I also feel that by isolating which users run which services, I would limit collateral damage if a way to compromise one service were found.
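
    For reference, those steps can be scripted; a sketch assuming a hypothetical local account svc_mongod, a service registered as MongoDB, and ntrights.exe from the Windows Server 2003 Resource Kit for the privilege grant:

        net user svc_mongod "Long+Random+Password" /add /passwordchg:no
        wmic useraccount where "name='svc_mongod'" set PasswordExpires=false
        net localgroup Users svc_mongod /delete
        ntrights +r SeServiceLogonRight -u svc_mongod
        rem note: sc.exe requires the space after obj= and password=
        sc config MongoDB obj= ".\svc_mongod" password= "Long+Random+Password"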


  • VMware Data Recovery error -3960 and Event ID 8193 on Windows Server 2003

    - by flooooo
    I've been trying to solve this problem for a few days now without any success. I'm trying to make a backup of a virtual machine running Windows Server 2003 SP2 using VMware Data Recovery 2.0.0.1861. When the backup task starts, it tries to take a snapshot of the virtual machine using VSS, which fails with this error:

        Event Type:     Error
        Event Source:   VSS
        Event Category: None
        Event ID:       8193
        Date:           05.06.2012
        Time:           12:12:01
        User:           N/A
        Computer:       LEGOLAS
        Description:
        Volume Shadow Copy Service error: Unexpected error calling routine RegSaveKeyExW.
        hr = 0x800703f8.
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 2d 20 43 6f 64 65 3a 20   - Code:
        0008: 57 52 54 52 45 47 52 43   WRTREGRC
        0010: 30 30 30 30 30 33 39 36   00000396
        0018: 2d 20 43 61 6c 6c 3a 20   - Call:
        0020: 57 52 54 52 45 47 52 43   WRTREGRC
        0028: 30 30 30 30 30 33 31 38   00000318
        0030: 2d 20 50 49 44 3a 20 20   - PID:
        0038: 30 30 30 30 36 34 38 38   00006488
        0040: 2d 20 54 49 44 3a 20 20   - TID:
        0048: 30 30 30 30 34 33 38 34   00004384
        0050: 2d 20 43 4d 44 3a 20 20   - CMD:
        0058: 43 3a 5c 57 49 4e 44 4f   C:\WINDO
        0060: 57 53 5c 53 79 73 74 65   WS\Syste
        0068: 6d 33 32 5c 76 73 73 76   m32\vssv
        0070: 63 2e 65 78 65 20 20 20   c.exe
        0078: 2d 20 55 73 65 72 3a 20   - User:
        0080: 4e 54 20 41 55 54 48 4f   NT AUTHO
        0088: 52 49 54 59 5c 53 59 53   RITY\SYS
        0090: 54 45 4d 20 20 20 20 20   TEM
        0098: 2d 20 53 69 64 3a 20 20   - Sid:
        00a0: 53 2d 31 2d 35 2d 31 38   S-1-5-18

    This machine was converted P2V. I have no idea where to look for the problem or what to do. Google showed a few results, but none of them were useful to me. Please help me. If you need further information, I'll provide it - just ask!
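
    Not part of the original post, but hr = 0x800703f8 wraps Win32 error 1016 (ERROR_REGISTRY_IO_FAILED), so a common first pass is to check the state of the VSS writers and the registry hive health inside the guest; a sketch:

        vssadmin list writers
        vssadmin list providers
        rem an unrecoverable registry I/O error also warrants a file system check
        chkdsk C: /f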


  • Mac OSX: which folders should ClamXav Sentry watch?

    - by trolle3000
    I'm using ClamXav on my Mac. I've read this, and I am aware of the whole macs-need-no-AV-but-they-do-anyway discussion. I guess that's why I would feel like a real ass if I somehow managed to compromise my system! So ClamXav has been downloaded and ClamXav Sentry set up to start on log-in, but it doesn't really do anything until you tell it to. Specifically, you have to tell it which folders to watch for viruses, so I'm wondering: where are good places to look? Currently it's set up to watch the following places.

    In the home folder:

        ~/Downloads
        ~/Library/Caches
        ~/Library/Contextual Menu Items
        ~/Library/Cookies
        ~/Library/Internet Plug-Ins
        ~/Library/LaunchAgents

    In the system folder:

        /Library/Application Support
        /Library/Caches
        /Library/Contextual Menu Items
        /Library/Cookies
        /Library/Internet Plug-Ins
        /Library/LaunchAgents
        /Library/LaunchDaemons
        /Library/StartupItems

    Basically, this is 100% conjecture. All (most of) these folders have something to do with the internet or with things that start up automatically, so I'm guessing that's where viruses go. But still, the question: which folders should ClamXav Sentry watch, if any? FYI, I'm not using any mail apps, but please include those folders in your answer for anyone who might be interested. Cheers!


  • Plesk wildcard subdomain not working

    - by avdgaag
    I'm trying to set up a wildcard subdomain on my VPS. Ultimately I want to end up with this:

        main site:  my.domain.tld
        subdomain:  sub1.my.domain.tld - should end up serving my.domain.tld/sub1

    I am using Plesk 8.6. I have created a DNS A record pointing at my VPS's IP. I have then restarted the DNS server and waited up to 24 hours. But trying to ping sub1.my.domain.tld results in an unknown host error. I know there's more stuff involved - configuring Apache, etc. - but so far I cannot even get the subdomain resolving at all, let alone serving the right content. I have also tried a CNAME record, to no effect, and I have tried creating a regular subdomain with a fixed name, which also does not work. Pre-configured subdomains DO work, like ftp.my.domain.tld or mail.my.domain.tld. I am clearly missing something here, but my hosting provider charges a small fortune for any support request not involving hardware physically burning down, so I'm hesitant to ask them. Any ideas?
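
    For comparison, this is roughly what a working wildcard looks like at the zone level, plus a way to query the authoritative server directly and bypass any stale caches (the 192.0.2.10 address and the ns1 hostname are placeholders):

        ; zone file fragment: one record covers every otherwise-unmatched subdomain
        *.my.domain.tld.    3600    IN  A   192.0.2.10

        # ask the authoritative server itself what it is handing out
        dig +short sub1.my.domain.tld @ns1.my.domain.tld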


  • Websites down, EC2 inaccessible via SSH, CPU utilisation 100% for the last few hours - what should I do?

    - by fuzzybee
    I have multiple websites hosted on a single EC2 instance. One website, "abc", was down for a few hours - it sometimes threw database connection errors and sometimes just took too long to respond. Another website, "def", was incredibly slow but still up and running. The rest of the websites had the same symptoms as "abc". I can afford 15 minutes or less of downtime for "def". Should I then (in the AWS console):

        - reboot my instance, or
        - create an AMI image from my instance, launch it, and associate my elastic IP with the new instance, or
        - "launch more like this"?

    Background on what may have happened to my EC2 instance: the last time I made changes was 21 hours ago. A cron job that creates snapshots ran around 19 hours ago, and it had been running for a long time. Google Analytics shows traffic to my websites, such as kidlander.sg, has been nothing exceptional. Are there any other actions I should take, or better options I could have? (I have already contacted AWS support, but their turnaround is 12 hours, so I appreciate all the help I can get.)

    Update: I got everything back up and running, with CPU utilisation back to normal, around 30%. There is one difference between "def", "abc", and my other websites: "def"'s database is hosted on RDS, while "abc"'s database is hosted on an EC2 instance (different from my web server instance) configured by myself. Nevertheless, I checked the EC2 instance I'm using as the MySQL server yesterday, and it was absolutely fine during the incident: low CPU utilisation, and I could log in using the Linux command line.
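
    Not from the original post, but with the snapshot cron job as the main suspect, a quick triage once the box is reachable again would be to see what is still eating CPU and whether any backup jobs are wedged; a sketch:

        # top CPU consumers, then any snapshot/backup processes still running
        ps aux --sort=-%cpu | head -15
        ps aux | grep -i [s]napshot
        # a high %st (steal) in top would instead point at contention on the physical host
        top -b -n 1 | head -5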


  • Exchange Online SMTP Not Working With Any Email Client

    - by emre nevayeshirazi
    I am trying to switch our company mail server to Exchange Online. I have successfully added my domain and users, and can send and receive mail through Outlook Web App. I can also send and receive if I configure my Outlook 2013 client using the Exchange protocol. However, some folks in the company are using Thunderbird and some old Outlook clients. For those, I tried to connect to Exchange via IMAP/SMTP. This is what I use:

        Incoming: IMAP, port 993 with SSL, host outlook.office365.com
        Outgoing: SMTP, port 587 with TLS, host smtp.office365.com

    I can receive emails, but I cannot send them. I keep getting:

        An error occurred while sending mail. The mail server responded: 4.3.2 Service not
        active. Please verify that your email address is correct in your Mail preferences
        and try again.

    My username and password are correct; I am using my mail address as the username for my mailbox. I also tried sending mail via a C# application that was working with outlook.com and gmail.com SMTP settings. It also fails to send and returns the same error code. I thought Thunderbird and other old clients such as Office 2003 might not support Exchange Online, so I tried the same settings in Office 2013. It successfully connected to my mailbox when checking the configuration, but failed to send the test message and returned the same error code. The configuration for the incoming and outgoing mailbox is taken from here; it is also given on the Office 365 user page and is the same. What could be the reason for the error?
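
    One way to take every mail client out of the equation is to drive the SMTP session by hand; a sketch using openssl (substitute a real mailbox in the AUTH step):

        # connect with STARTTLS on the submission port
        openssl s_client -starttls smtp -crlf -connect smtp.office365.com:587
        # then, inside the session:
        #   EHLO test
        #   AUTH LOGIN          (paste base64-encoded username, then password)
        #   MAIL FROM:<user@yourdomain.com>
        # seeing "4.3.2 Service not active" here as well would point at mailbox
        # provisioning/licensing on the Office 365 side rather than at the clients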


  • SQLRelay MySQL compatibility layer in php-cgi.

    - by sybreon
    I am investigating the use of SQL Relay as a middle layer between an application that uses MySQL and a PostgreSQL backend. I assume this is something it can do to ease backend migration. For the moment, though, I am just experimenting with a MySQL application accessing a MySQL backend through the SQL Relay layer:

        app => sqlrelay lib => mysql client lib => tcp => mysql server

    I followed the instructions for the MySQL drop-in replacement and it works. I can connect to the backend MySQL server using both sqlrsh and the mysql client application. It will work for most MySQL applications by setting LD_PRELOAD to the compatibility layer library. The instructions recommend re-compiling PHP to support it; I would prefer not to do something so drastic. They also recommend setting LD_PRELOAD for apachectl as a method for the apache/php stack. However, this does not work with lighttpd/php-cgi. I have wrapped php-cgi with a shell script that sets LD_PRELOAD before running the CGI binary:

        LD_PRELOAD=/usr/lib/libmysql50sqlrelay-0.39.4.so.1 /usr/bin/php5-cgi "$@"

    I can see LD_PRELOAD correctly set in phpinfo(), but the CGI scripts all fail and are unable to connect to the database. According to the mysql client, the compatibility library should report itself as a 5.0.0 client, but the phpinfo module reports the actual 5.0.51a client library used. This means the compatibility library was not used. Has anyone had some success doing something similar?
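
    A quick check of whether the preload actually interposes for the CGI binary (rather than merely appearing in the environment) is to compare the client version PHP reports with and without it; a sketch, assuming a one-line test script:

        echo '<?php print mysql_get_client_info()."\n";' > /tmp/client.php
        /usr/bin/php5-cgi -q /tmp/client.php
        LD_PRELOAD=/usr/lib/libmysql50sqlrelay-0.39.4.so.1 /usr/bin/php5-cgi -q /tmp/client.php
        # if both runs print 5.0.51a, the preload is not interposing; one possible
        # culprit worth checking (with objdump -T) is symbol versioning in the
        # distro's libmysqlclient getting in the way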


  • GMail and Yahoo Mail servers not accepting mail from my Slicehost slice

    - by Lakshmanan
    Hi, I have a Rails app on one of my slices at Slicehost. I've set up Postfix (sendmail) to send emails from the app. All emails to a Google Apps domain (a company's Google-hosted, paid email account) are delivered properly, but to the spam folder. All emails to Yahoo, Gmail and Hotmail addresses are not delivered at all. This is the relevant line from my /var/log/mail.log for Yahoo:

        Dec 21 17:33:56 staging postfix/smtp[32295]: 5EB4810545B: to=<[email protected]>,
        relay=j.mx.mail.yahoo.com[66.94.237.64]:25, delay=1.6, delays=0.02/0.01/1.5/0,
        dsn=4.0.0, status=deferred (host j.mx.mail.yahoo.com[66.94.237.64] refused to
        talk to me: 553 Mail from 173.203.201.186 not allowed - 5.7.1 [BL21] Connections
        not accepted from IP addresses on Spamhaus PBL; see
        http://postmaster.yahoo.com/errors/550-bl21.html [550])

    and this is what I got for Gmail:

        Dec 21 17:29:17 staging postfix/smtp[32216]: 0FA3310545B: to=<[email protected]>,
        relay=gmail-smtp-in.l.google.com[74.125.65.27]:25, delay=0.59,
        delays=0.02/0.01/0.09/0.47, dsn=5.7.1, status=bounced (host
        gmail-smtp-in.l.google.com[74.125.65.27] said: 550-5.7.1 [173.203.201.186] The IP
        you're using to send mail is not authorized 550-5.7.1 to send email directly to
        our servers. Please use the SMTP relay at 550-5.7.1 your service provider
        instead. Learn more at 550 5.7.1
        http://mail.google.com/support/bin/answer.py?answer=10336 v49si11176750yhc.16
        (in reply to end of DATA command))

    Please help. I have very little knowledge about setting up DNS, servers and stuff.
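
    Both rejections say the same thing: the slice's IP (173.203.201.186) sits on the Spamhaus PBL, so the large providers want the mail handed to an authorised relay rather than delivered directly. A sketch of pointing Postfix at a smarthost (the relay hostname and credentials are placeholders for whatever your provider offers):

        postconf -e 'relayhost = [smtp.example-relay.com]:587'
        postconf -e 'smtp_sasl_auth_enable = yes'
        postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
        postconf -e 'smtp_sasl_security_options = noanonymous'
        postconf -e 'smtp_use_tls = yes'

        echo '[smtp.example-relay.com]:587 username:password' > /etc/postfix/sasl_passwd
        postmap /etc/postfix/sasl_passwd
        /etc/init.d/postfix restart

    The alternative is to fix direct delivery: set reverse DNS for the IP and request PBL removal via the Spamhaus site, since the PBL is a policy list rather than a record of abuse.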


  • i7 overclocking problem

    - by benwebdev
    Hi everyone, I've got an overclocked system that seems to be misbehaving. When trying to boot from cold, the system just hangs and nothing is output to the screen. The fans are on as they should be, but nothing happens to finish the boot. From there I have to switch it off and on again, after which the boot completes. If I go into the tweak menu in the BIOS, I'm informed that a boot has failed. I've been in touch with Overclockers UK support a bit, and there's not yet been a solution; we've mainly been tweaking the voltage for the CPU. Any suggestions? I'm new to overclocking, which is why I got a bundle with OCUK. With this issue happening on cold boot, it's also tricky to test, as I have to make a change and then wait until the next day. My system is:

        Intel Core i7 930 2.80GHz, overclocked to 4GHz
        Gigabyte X58A-UD3R (BIOS version: FC)
        6GB RAM
        Power supply: CoolerMaster Silent Pro M series 700W

    One suggestion made by OCUK was that maybe it's the power supply, but I'm not sure and don't have a spare - it's brand new and a pretty expensive piece of kit. Any thoughts on this? Other recommendations for power? Thanks, Ben


  • Diagnosing SAN connectivity issues (RHEL5)

    - by Matthew
    We are currently using GFS2 to share a SAN LUN between 3 servers. However, due to a feature problem with the vendor software we are using, we currently have the volume unmounted on two of the boxes and are instead exporting the GFS2 filesystem via NFS from the first one (the software requires some weird locking mechanics that GFS2 doesn't support). As of this morning, NFS was no longer able to read/write to the volume from any of the servers, including the NFS server itself. I then checked the normal mount (the directory that is exported on the NFS server) and received a weird input/output error just trying to cd into it. When I tried running multipath, I got a DM error, although multipath -l worked just fine. I tried unmounting the GFS2 volume, and the CLI hung. I ran init 0, which killed most services, but the shutdown then appeared to hang. I logged in via out-of-band access (HP iLO) and saw that the shutdown was stuck trying to unmount the GFS2 volumes. My main priority was getting the box back online, so after about 5 minutes of waiting I did a hard reset. I am now trying to figure out what went wrong. What are the correct logs to investigate? I've never run into SAN issues like this before. The SAN is connected via 2 fibre connections. Any help would really be appreciated. Everything appears to be up and functional now.
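
    Not from the original post, but for a suspected dropped fibre path on RHEL 5 the usual first stops are the kernel log and the live multipath state; a sketch (the qla/lpfc patterns cover the common QLogic and Emulex HBA drivers):

        # SCSI and HBA errors logged around the time of the incident
        grep -iE 'scsi|i/o error|multipath|qla|lpfc' /var/log/messages | tail -50
        dmesg | grep -iE 'scsi|i/o error'
        # current per-LUN path state (-ll queries the kernel, not just the config)
        multipath -ll
        # cluster/DLM view from each node
        cman_tool status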


  • SharePoint Records Center Submitted E-mail Records not picked up

    - by Kenneth Verburg
    We have set up a new SharePoint 2007 site with a Records Repository. We're using Exchange 2007 Managed Folders to route e-mails to this repository based on the 'label' attached to the e-mail, as set in the Exchange 2007 journaling options. E-mails added to a Managed Folder get sent to SharePoint and end up in the "Submitted E-mail Records" list of the Records Repository. That's according to plan, but the e-mails are not routed to their respective document libraries as defined by the label. Instead, an error appears in the event viewer for every e-mail listed in the Submitted E-mail Records list, on every interval of the records repository schedule (set to every two minutes for testing purposes): "Value cannot be null, parameter name: g". Sending a document from the SharePoint site itself to the Records Repository via the Send To... link works fine, but e-mails get stuck in the list. We have set up document libraries in the repository both with and without content types (with names matching the label and the record routing rule). Any ideas what could be wrong? This is in the event log; every two minutes the following error appears in the Application Log:

        Event Type:     Error
        Event Source:   Office SharePoint Server
        Event Category: Records Center
        Event ID:       4975
        User:           N/A
        Computer:       SPS2007
        Description:
        Value cannot be null.
        Parameter name: g
        For more information, see Help and Support Center at
        http://go.microsoft.com/fwlink/events.asp.


  • ffmpeg - creating DNxHD MFX files with alphas

    - by Hugh
    Hi all, I'm struggling with something in FFmpeg at the moment. I'm trying to make DNxHD 1080p/24, 36 Mb/s MXF files from a sequence of PNG files. My current command line is:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf

    To which ffmpeg gives me the output:

        Input #0, image2, from '/tmp/temp.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mxf, to '/tmp/temp.mxf':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        [mxf @ 0x1005800]unsupported video frame rate
        Could not write header for output file #0 (incorrect codec parameters ?)

    There are a few things in here that concern me:

        - The output stream insists on being yuv422p, which doesn't support alpha.
        - 24 fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing.

    I then tried the same thing, but writing to a QuickTime (still DNxHD, though) with:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov

    This gives me the output:

        Input #0, image2, from '/tmp/1274263259.28098.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mov, to '/tmp/1274263259.28098.mov':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        frame= 39 fps= 9 q=1.0 Lsize= 7177kB time=1.62 bitrate=36180.8kbits/s
        video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636%

    This obviously works, to a certain extent, but still has the issue of being yuv422p and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
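
    Worth noting (not from the post): DNxHD itself is a 4:2:2 codec, so the yuv422p conversion and the lost alpha are inherent to the format rather than an ffmpeg quirk. If the alpha must survive, one option is the QuickTime Animation codec; a sketch, with -r placed before -i so the PNG sequence is read at 24 fps instead of the default 25:

        # qtrle keeps the alpha channel; forcing the input rate may also clear the
        # "unsupported video frame rate" complaint seen with other muxers
        ffmpeg -y -r 24 -f image2 -i /tmp/temp.%04d.png -vcodec qtrle -pix_fmt argb /tmp/temp.mov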


  • Getting 2560x1600 out of an ATI Radeon HD 4670 on Windows 7

    - by Alexey
    Greetings, I've got a Dell Studio XPS 1640 laptop (with an ATI Radeon HD 4670 graphics card) running Windows 7, and I just bought a Dell 3007HPC 30-inch monitor for it. I'm trying to figure out how to get the full 2560x1600 experience out of this setup. Here's what I've done so far:

        - Plugged in using an HDMI cable and an HDMI-to-DVI-D converter on the monitor side.
        - Opened up Screen Resolution: the maximum supported setting is 1920x1080.
        - Tried that (several times) - sometimes it doesn't work at all (blank screen); other times, it only shows the first 1280x800 pixels on the bigger screen.
        - Tried using the Catalyst Control Center - played with various settings there, but couldn't get the screen to show anything interesting.
        - Tried using PowerStrip to set a custom resolution; again, no luck.
        - Spoke to a Dell Preferred Custom Support guy for about an hour before giving up. He remote-accessed my computer and told me that (1) the maximum supported resolution for the XPS 1640 is 1920x1080, and (2) "it seems to be working from where he sees it - must be a connection issue".

    None of this has helped. Does anybody have ideas? Should I be using a different cable setup? Am I using PowerStrip wrong?


  • ADO 2.8, VMWare and SQL Server - sudden bulk dropped connections

    - by CodeByMoonlight
    We have an overworked server currently running a single SQL Server 2000 instance on physical hardware, and about 40 different apps interact with it on a daily basis. Last year, the RAID controller failed and we had no spare, so IT Support hurriedly migrated it overnight to a copy running on a VMware server. While it was on that server, everything ran much quicker due to it being a big improvement in spec. However, the biggest app using it had occasional serious errors which never occurred on physical hardware. Specifically, several times a week it would disconnect batches of users - anywhere from just ten to hundreds at once, and all at the same time. It didn't affect any particular users or PCs or offices - all were affected equally. The only common factor was the app, which is a VB6 app using ADO 2.8 to connect. The other apps connecting to that virtualised instance of SQL Server seemingly had no problems, although they were (and are) responsible for only a tiny fraction of the work involving this server.

    The upshot is that after about two weeks of loving the speed and hating the random mass disconnections (which we were never able to find a cause for), we sadly took the decision to return to physical hardware, and the disconnections vanished. Now we've reached the point where the old server just can't handle all that's being asked of it, and we're intending to migrate everything to 2 or more other servers. The snag is that there's a good chance they'll have to be virtual ones again. Given what happened last time, I'm trying to find out what possible reasons there could be for these mass disconnections. We were running VMware ESX, but the network is Novell-based. Also, the server had a linked server set up to connect to an Informix server using a known-to-be-buggy ODBC driver, and this is used throughout the day. Any ideas on the cause(s)?


  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head over how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and when trying to execute sqlservr.exe -c -f -m):

        Event Type: Error
        Event Source: MSSQLSERVER
        Event ID: 913
        Could not find database ID 3. Database may not be activated yet or may be in
        transition. Reissue the query once the database is available. If you do not think
        this error is due to a database that is transitioning its state and this error
        continues to occur, contact your primary support provider. Please have available
        for review the Microsoft SQL Server error log and any additional information
        relevant to the circumstances when the error occurred.

    and...

        Event Type: Information
        Event Source: MSSQLSERVER
        Event ID: 1814
        Could not create tempdb. You may not have enough disk space available. Free
        additional disk space by deleting other files on the tempdb drive and then
        restart SQL Server. Check for additional errors in the event log that may
        indicate why the tempdb files could not be initialized.

    I have tried renaming tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need a chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.
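
    A detail not spelled out in the post: database ID 3 is model, and SQL Server recreates tempdb as a copy of model on every startup, which would tie the two errors together (a damaged model means tempdb cannot be built). A sketch of getting far enough in to inspect model, using startup flags that skip recovery of everything except master:

        rem single-user, minimal config; trace flag 3608 skips recovery of non-master databases
        net start MSSQLSERVER /m /f /T3608
        sqlcmd -E -S . -Q "SELECT database_id, name, state_desc FROM sys.databases"
        rem if model shows SUSPECT or RECOVERY_PENDING, restore it from backup before a normal restart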


  • Script errors when run by launchd at startup, but not when run in Terminal

    - by Mechcozmo
    Hello. I'm attempting to create a RAM disk that loads its previous contents when the system starts up, and every six hours writes those contents to a disk image. Currently, when you run the script from the Terminal ("sudo bash LogToRAM.sh"), everything works fine. But when it's run from launchd during startup, it doesn't work. Here are the lines from the log; the first line just gives some idea of where we are in the boot process:

        SecurityAgent[202] Showing Login Window
        com.mechcozmo.LogToRAM[51] + /Developer/usr/bin/SetFile -a V /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] ERROR: File Not Found. (-43) on file: /Volumes/LogfileRAMdisk
        com.mechcozmo.LogToRAM[51] + /usr/sbin/asr -source '/Library/Application Support/LogToRAM/RAMdisk_store.dmg' -target /Volumes/LogfileRAMdisk/ -noverify

    Here are the script and plist file in question. Note that 'set -vx' is at the top of the script; it gives a lot of information about what is happening in the script. My current theory is that the /Volumes directory does not exist at this stage of the boot process, but that seems unlikely, to be honest.
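
    Judging by the File Not Found error, the mount point really isn't there when launchd fires the job. Not from the original script, but one way to make the job self-sufficient is to create the RAM disk itself and wait for the mount to appear before restoring; a sketch (the ram://512000 size, roughly 250 MB, is a placeholder):

        # hdiutil prints the BSD device of the new RAM disk, e.g. /dev/disk3
        DEV=$(hdiutil attach -nomount ram://512000)
        diskutil erasevolume HFS+ LogfileRAMdisk $DEV

        # give diskarbitrationd up to 30s to publish the mount point
        for i in $(seq 1 30); do
            [ -d /Volumes/LogfileRAMdisk ] && break
            sleep 1
        done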


  • CMS/Wiki to use for a HTML5 video site

    - by Clinton Blackmore
    Greetings. I want to put up a website with instructive screencasts and allow people to add comments on them. I would like to use the Video for Everybody technique, partly because I dislike Flash and because it helps, in a small way, to move the web forward (while being backwards compatible). I recognise that HTML5 is still in draft and that support for it varies. I do have some hosting space, and can run Perl, PHP, and Ruby on Rails applications with a MySQL backend. I should mention that part of my job involves running some web servers, and that I am a programmer by training (with only a limited familiarity with Perl and PHP, and none with Ruby). I should also mention why I don't particularly want to go with a video hosting site (like YouTube or Vimeo):

        - Flash
        - Video resolution and quality [I'd like to put up 800x600 videos]
        - The videos promote a club that is not strictly non-profit [i.e. they may fall afoul of Terms of Service]
        - I'm already paying for web hosting, and free video hosting comes with time and bandwidth limits
        - I don't want there to be two locations where you can comment on the video

    Now, having said all that, I'd be quite comfortable putting up my own HTML pages, except: that's so web 1.0! :) [i.e. it does not allow for comments]. I also want to do some blogging and possibly put up a wiki; the site will not be entirely screencasts. So, can anyone recommend a CMS (or wiki, or similar application) that I can customise for this purpose?


  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing at 3 internal DNS servers (BIND 9). Our problem comes when one of the internal DNS servers becomes unavailable. At that point, all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue. I can see 3 possible types of solutions:

        1. Use linux-ha/Pacemaker and failover IPs (so the DNS IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience, Pacemaker lowers availability without fencing).
        2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
        3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers.

    Anycasting seems to work only at the IP routing level and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but BIND does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is aimed more at service discovery and auto-configuration than at regular DNS resolving. Am I missing an obvious solution?
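
    For the "tweak /etc/resolv.conf like this" part: the stock resolver does expose per-query failover knobs, though they shorten the pain rather than remove it, since the first query after a failure still eats the timeout. A sketch (server addresses are placeholders):

        # /etc/resolv.conf
        options timeout:1 attempts:2 rotate
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13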


  • Compiling Gearman PHP Library for CentOS 5.8

    - by Andrew Ellis
    I've been trying to get Gearman compiled on CentOS 5.8 all afternoon. Unfortunately, I am restricted to this version of CentOS by my CTO and how he has our entire network configured - I think it's simply that we don't have enough resources to upgrade our network. But anyway, the problem at hand. I have searched through Server Fault, Stack Overflow, and Google, and am unable to locate a working solution. What I have below is pieced together from my searching. Searches said to install the following via yum:

        yum -y install --enablerepo=remi boost141-devel libgearman-devel e2fsprogs-devel e2fsprogs gcc44 gcc-c++

    To get the Boost headers working correctly, I did this:

        cp -f /usr/lib/boost141/* /usr/lib/
        cp -f /usr/lib64/boost141/* /usr/lib64/
        rm -f /usr/include/boost
        ln -s /usr/include/boost141/boost /usr/include/boost

    With all of the dependencies installed and paths set up, I then download and compile gearmand 1.1.2 just fine:

        wget -O /tmp/gearmand-1.1.2.tar.gz https://launchpad.net/gearmand/1.2/1.1.2/+download/gearmand-1.1.2.tar.gz
        cd /tmp && tar zxvf gearmand-1.1.2.tar.gz
        cd gearmand-1.1.2 && ./configure && make -j8 && make install

    That works correctly. So now I need to install the Gearman library for PHP. I have attempted this through PECL and by downloading the source directly; both result in the same error:

        checking whether to enable gearman support... yes, shared not found
        configure: error: Please install libgearman

    What I don't understand is that I installed the libgearman-devel package, which also installed the core libgearman. The installation installed libgearman-devel-0.14-3.el5.x86_64, libgearman-devel-0.14-3.el5.i386, libgearman-0.14-3.el5.x86_64, and libgearman-0.14-3.el5.i386. Is it possible the package version is lower than what is required? I'm still poking around with this, but figured I'd throw this up to see if anyone has a solution while I continue to research a fix. Thanks!
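
    The version question is probably the crux: the PECL extension's configure is finding the distro's libgearman 0.14 headers, which are far older than what recent gearman extensions expect, while the 1.1.2 copy that make install produced lives under /usr/local. A sketch of pointing the extension build at the newer copy explicitly (assuming the default /usr/local prefix and a /etc/php.d config directory):

        pecl download gearman
        tar zxvf gearman-*.tgz && cd gearman-*/
        phpize
        ./configure --with-gearman=/usr/local
        make && make install
        echo 'extension=gearman.so' > /etc/php.d/gearman.ini
        # make sure the runtime linker searches /usr/local/lib as well
        echo '/usr/local/lib' > /etc/ld.so.conf.d/local.conf && ldconfig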


  • Why are group policy preference drive mappings not applied to the domain administrator account?

    - by Saariko
    I have a working policy on my entire domain. I just found out, when logging in with the domain administrator account, that this policy is not applied. (Edit: running gpresult shows that the GPOs are applied - but this GPO is for drive mappings, and the actual mapped drives are NOT shown.) The administrator account does not have any login script on its profile tab. To note: the mappings were previously applied with a GPO login script using the net use command, and all was working perfectly and correctly for the domain administrator user as well - that rules out a sharing or security problem (IMO). My GPOs are mainly small/atomic settings, a single GPO to handle each setting: UAC, firewall, printers. GPO status for the object is enabled. (A screenshot giving an overview of the Drive Maps followed here.) Reading the MS support site, I checked the delegation tab, and it is marked as applying to domain and enterprise admins. Every user gets these policies correctly. The OU targeted is the root of the domain (done for testing purposes, to eliminate hierarchy issues - it did not help). Block Inheritance is disabled (never used it anyway). (Screenshots of the GPO link and GPO Security Filtering followed here.)
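
    Not mentioned in the post, but one classic gotcha matches these symptoms: Group Policy Preference drive maps are created in the user's standard token, and under UAC an administrator's elevated processes use a separate token that cannot see those mappings. Two hedged checks for a Vista-or-later client:

        rem confirm the Drive Maps client-side extension actually ran for this user
        gpresult /h %TEMP%\gp.html & start %TEMP%\gp.html

        rem test-box tweak: let elevated and non-elevated tokens share mapped drives (reboot after)
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLinkedConnections /t REG_DWORD /d 1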


  • OpenSwan (IPSEC) on Fedora 13 with Snow Leopard as a client

    - by sicn
    I recently installed Openswan on my Fedora 13 machine. I want to use it so Mac OS X clients can connect with L2TP over IPsec; unfortunately, I am already stuck on the IPsec negotiation part. My server runs behind a NATted firewall, so my external IP differs from the server's IP. The server has a fixed IP on the network, and the same is almost always true for the clients (they are usually behind a NATted firewall). I installed Openswan on Fedora 13 with the following configuration:

        config setup
            protostack=netkey
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            nhelpers=0

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=my.servers.external.ip
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/0

    IPsec starts fine and listens on UDP 500 and 4500. These two ports are opened in the firewall and are forwarded correctly to the server. In my /etc/ipsec.secrets file I have:

        my.servers.external.ip %any: "LongAndDifficultPassword"

    And finally, in my sysctl.conf (the redirect entries are there because Openswan was protesting strongly about send/accept_redirects being active) I have:

        net.ipv4.ip_forward = 1
        net.ipv4.conf.all.send_redirects = 0
        net.ipv4.conf.all.accept_redirects = 0

    Running "ipsec verify" gives me "all greens" (except Opportunistic Encryption Support, which is DISABLED). However, when trying to connect, my Mac logs the following:

        Nov 1 19:30:28 macbook pppd[4904]: pppd 2.4.2 (Apple version 412.3) started by user, uid 1011
        Nov 1 19:30:28 macbook pppd[4904]: L2TP connecting to server 'my.servers.ip.address' (my.servers.ip.address)...
        Nov 1 19:30:28 macbook pppd[4904]: IPSec connection started
        Nov 1 19:30:28 macbook racoon[4905]: Connecting.
        Nov 1 19:30:28 macbook racoon[4905]: IKE Packet: transmit success. (Initiator, Main-Mode message 1).
        Nov 1 19:30:31 macbook racoon[4905]: IKE Packet: transmit success. (Phase1 Retransmit).
        Nov 1 19:30:38: --- last message repeated 2 times ---
        Nov 1 19:30:38 macbook pppd[4904]: IPSec connection failed

    Any ideas at all?
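
    The Mac's log shows Main Mode message 1 going out and nothing coming back, so the first thing worth establishing is whether IKE traffic reaches Openswan at all and whether it answers. Not from the original post, but a capture on the server while the Mac dials would split the problem in half; a sketch (eth0 is a placeholder interface name):

        # do the Mac's IKE packets arrive, and does anything go back out?
        tcpdump -n -i eth0 udp port 500 or udp port 4500

        # watch Pluto's negotiation log at the same time (Fedora logs it via authpriv)
        tail -f /var/log/secure | grep pluto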


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005, where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be spread across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't really have the same disk-contention issue that you do with a small RAID setup (at least we don't at the moment), and Standard Edition doesn't support partitioning. So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
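
    Purely to illustrate the mechanics under discussion (the database name, paths and sizes are invented), adding equally sized files to a filegroup so SQL Server can stripe across them looks like this:

        -- equal sizes matter: proportional fill favours the file with the most free space
        ALTER DATABASE MyBigDb
        ADD FILE
            (NAME = MyBigDb_data2, FILENAME = 'H:\MSSQL\Data\MyBigDb_data2.ndf', SIZE = 50GB),
            (NAME = MyBigDb_data3, FILENAME = 'I:\MSSQL\Data\MyBigDb_data3.ndf', SIZE = 50GB)
        TO FILEGROUP [PRIMARY];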


  • Mac "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OS X, a self-update progress bar appears instead (at 0 of 30MB or so). The bar does not advance, an error dialog appears:

        Steam needs to be online to update
        Please confirm your network connection and try again.

    and the app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:

        Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
        [...]
        Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
        Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
        Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8

    Requesting the first URL successfully returns some data, including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OS X users. Things that have already been tried, with no effect:

        - Switching between wifi and ethernet.
        - Killing all Steam processes, including ipcserver.
        - Moving the ~/Library/Application Support/Steam/registry.vdf file away.
        - Requesting those URLs with other clients and from other locations.

    Interesting: the first URL returns the same content even without the date parameter (and thus would lead to the same 404s), suggesting that the problem is not necessarily specific to coming from a particular currently-installed version of Steam.
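
    For anyone wanting to reproduce the check without Steam in the loop, the same fetches can be replayed with curl, using the URLs from the log above; the manifest should return data naming the three zips, and the zip fetches are what 404:

        curl -s 'http://store.steampowered.com/public/client/steam_client_osx?date=718277' | head
        curl -sI 'http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac' | head -1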


  • IPSec for LAN traffic: Basic considerations?

    - by chris_l
    This is a follow-up to my "Encrypting absolutely everything..." question. Important: this is not about the more usual IPsec setup, where you want to encrypt traffic between two LANs. My basic goal is to encrypt all traffic within a small company's LAN. One solution could be IPsec. I have just started to learn about it, and before I decide on using it and dive in more deeply, I'd like to get an overview of how this could look:

        - Is there good cross-platform support? It must work on Linux, Mac OS X and Windows clients and Linux servers, and it shouldn't require expensive network hardware.
        - Can I enable IPsec for an entire machine (so there can be no other traffic incoming/outgoing), or for a network interface, or is it determined by firewall settings for individual ports/...?
        - Can I easily ban non-IPsec IP packets? And also "Mallory's evil" IPsec traffic that is signed by some key, but not ours? My ideal conception is to make it impossible to have any such IP traffic on the LAN.
        - For LAN-internal traffic: I would choose "ESP with authentication (no AH)", AES-256, in "transport mode". Is this a reasonable decision?
        - For LAN-to-Internet traffic: how would it work with the internet gateway? Would I use "tunnel mode" to create an IPsec tunnel from each machine to the gateway? Or could I also use "transport mode" to the gateway? The reason I ask is that the gateway would have to be able to decrypt packets coming from the LAN, so it will need the keys to do that. Is that possible if the destination address isn't the gateway's address? Or would I have to use a proxy in this case?
        - Is there anything else I should consider?

    I really just need a quick overview of these things, not very detailed instructions.
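
    On the "ban non-IPsec packets" point (and only as a sketch, with a placeholder 10.0.1.0/24 LAN range): the kernel's security policy database can enforce this by itself, since a "require" policy drops matching traffic that isn't ESP-protected. In setkey syntax from ipsec-tools:

        #!/usr/sbin/setkey -f
        spdflush
        # require transport-mode ESP for all LAN-internal traffic, both directions
        spdadd 10.0.1.0/24 10.0.1.0/24 any -P out ipsec esp/transport//require;
        spdadd 10.0.1.0/24 10.0.1.0/24 any -P in  ipsec esp/transport//require;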

    Read the article
