Search Results

Search found 9894 results on 396 pages for 'primary interop assembly'.


  • How do I get "Back to My Mac" (using MobileMe) from Windows?

    - by benzado
    I have a MobileMe subscription and a Mac at home with "Back to My Mac" enabled. When I'm away from home, this service lets me use another Mac to connect to my Mac back home and access file sharing, screen sharing, etc. As far as I know, the service doesn't use any proprietary protocols, so in theory I should also be able to get "Back to My Mac" from a Windows PC. This MacWorld article explains how it works. Basically, it uses Wide-Area Bonjour to give your Mac a domain name like hostname.username.members.mac.com. Remote computers can find your Mac using that address, then connect to it using a private VPN. The "Wide Area Bonjour" part seems to make it a little more complicated than a regular domain name, though. Note that I'm not interested in the methods described by LifeHacker, which don't use the MobileMe service at all. I don't want to use a totally different dynamic DNS service; I'd like to use the one I'm already paying for, or at least find out why that's not possible from Windows. Also, my primary problem is finding a network route back to my Mac; once I've got that, I know how to enable services so that Windows can talk to it. UPDATE: Based on some additional research, it appears that Apple only assigns IPv6 addresses to the hostname.username.members.mac.com names, so any solution will require enabling IPv6 support on Windows, if possible.
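
    For reference, this is the kind of lookup involved; the hostname and username below are placeholders for my real ones, and the netsh step is an assumption that only applies to XP (Vista and 7 ship with IPv6 enabled):

        rem check whether the Back to My Mac name resolves to an IPv6 address
        nslookup -type=AAAA hostname.username.members.mac.com

        rem XP only: install the IPv6 stack if it isn't already present
        netsh interface ipv6 install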

    Read the article

  • Running DNS locally for home network

    - by Roy Rico
    I have a small home network that just got larger (a new roommate, my existing roommate got a laptop on top of her desktop, friends coming over with laptops, etc.). I'd like to run a local DNS server for lookups of my local network stuff (fileserver.local, windowsTV.local, machineA.local, machineB.local, appletv.local). I used to have a business line with a static IP and ran bind/named internally. However, now I have a normal account. My ISP's DNS servers are constantly changing (for whatever reason my ISP doesn't like to keep the same IP range for long). I need my local DNS to automatically use my ISP's DNS for external traffic, but still maintain an internal DNS server (updating the hosts file is becoming a hassle with every new machine, on top of rebuilding existing machines with Win7 or Ubuntu 9.04). Additionally, my ISP's DNS servers often crash or become unresponsive. Are there any open DNS servers that are reliable (I don't want to reconfigure every day) that I could use as my primary, falling back to my ISP's if those fail? UPDATE: I'm also looking for each workstation to connect via DHCP, but instead of getting the ISP's DNS servers, getting my internal one. Thanks
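
    For what it's worth, one option I'm looking at is dnsmasq, which would cover both halves (local names plus DHCP). A minimal sketch of /etc/dnsmasq.conf, with example names and addresses standing in for my real ones:

        # answer .local names from the entries below, never forward them
        local=/local/
        address=/fileserver.local/192.168.1.10
        address=/appletv.local/192.168.1.11
        # forward everything else to public resolvers, in order
        server=8.8.8.8
        server=208.67.222.222
        # DHCP: hand out this box (192.168.1.2) as the DNS server
        dhcp-range=192.168.1.100,192.168.1.200,12h
        dhcp-option=option:dns-server,192.168.1.2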

    Read the article

  • SPF record for Gmail?

    - by Chris
    I have DNS, with an SPF TXT record, configured for a domain name. The primary user of the domain name now needs to be able to send both from our SMTP servers and also from her GMail account. I've seen all the information about adding "include:_spf.google.com" to the SPF TXT record, but, as I look into it, it appears that record is outdated. In particular, I had the user send me a test message, and note that it was:

        Received: from mail-la0-f50.google.com (mail-la0-f50.google.com [209.85.215.50])

    However, _spf.google.com doesn't list that IP address:

        $ dig +short _spf.google.com txt
        "v=spf1 ip4:216.239.32.0/19 ip4:64.233.160.0/19 ip4:66.249.80.0/20 ip4:72.14.192.0/18 ip4:209.85.128.0/17 ip4:66.102.0.0/20 ip4:74.125.0.0/16 ip4:64.18.0.0/20 ip4:207.126.144.0/20 ip4:173.194.0.0/16 ?all"

    (Note that a 209.85.218.0 network is listed, but not 209.85.215.0.) Is there a better way to enable sending from GMail? This user sends to at least one recipient with a strict SPF policy that bounces mail not from a designated host... Many thanks!
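
    In other words, I'd expect the combined record to look something like this (example.com and the ip4 range stand in for our real domain and SMTP servers):

        ; our own servers plus Google's published include
        example.com.  IN  TXT  "v=spf1 ip4:203.0.113.0/24 include:_spf.google.com ~all"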

    Read the article

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

        * Starting domain name service... bind9    [fail]

    So I checked the output of syslog and this is what I get:

        May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
        May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
        May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
        May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
        May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
        May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
        May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
        May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here are the contents of named.conf:

        // This is the primary configuration file for the BIND DNS server named.
        //
        // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
        // structure of BIND configuration files in Debian, *BEFORE* you customize
        // this configuration file.
        //
        // If you are just adding zones, please do that in /etc/bind/named.conf.local

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";
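
    For anyone else checking their setup: named-checkconf walks the whole include chain and reports the file and line that actually fail to parse, which is more precise than the syslog message:

        # validates named.conf plus every file it includes;
        # -z also loads the zones referenced by named.conf.local
        sudo named-checkconf /etc/bind/named.conf
        sudo named-checkconf -z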

    Read the article

  • How to make Exchange 2003 non-authoritative

    - by Romski
    Background
    We are a small company with an internally hosted Exchange 2003 server. It receives email for two domains (the company was renamed a few years back). For the sake of argument, the domains are: oldname.com and newname.com. We have moved newname.com to a hosted Exchange service, and our DNS record is correctly routing emails. Our internal server still receives email for oldname.com, although we have asked our hosting company to accept emails for that domain.
    Problem
    My problem is that emails generated internally from monitoring software, printers, etc. are being caught by our (defunct) internal server and delivered to the old mailboxes. I believe that what is happening is that our internal Exchange server considers itself to be the authoritative server for newname.com. I think it must be looking in Active Directory for a mailbox and delivering the mail internally without ever going outside.
    Attempt to fix
    I started to follow the article here: http://support.microsoft.com/kb/321721. I removed the SMTP recipient policy for newname.com, added a dummy address, and made it primary. I also answered yes to updating the associated emails. I then restarted the Microsoft Exchange Routing System and SMTP, but emails are still being routed internally. Is there a way to force the Exchange server to route all emails for the domain newname.com to the new hosted service?

    Read the article

  • Linux Mint 13 is not booting on dual boot computer

    - by Brian
    Thanks in advance for your time. I have 2 hard drives in my computer: a 300 GB drive, which is my primary drive for Windows 7, and a 1.5 TB drive that I use for storage. When I got it I partitioned 500 GB for use in Linux. So, I created a bootable USB and clicked the "Install by Current Operating System" option from Mint. It installed to the free 500 GB like I'd hoped it would. Now, though, I can't get it to boot. I've tried using EasyBCD to create the boot entry, and it hangs on a black screen. Thanks. EDIT @ Ryhuk: It presents a menu with two options: 1) Windows and 2) Mint. This was a menu I created with EasyBCD. When I select option 1 it boots to Windows fine. When I select option 2 it hangs on a black screen with just a white bar flashing (can't remember what it's called; it marks the current cursor location in a text field) and won't respond to any key presses except Ctrl+Alt+Del.
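
    One thing I'm considering trying, based on what I've read about EasyBCD chainloading (a sketch, not a confirmed fix; /dev/sdb1 stands in for the real 500 GB Mint partition): from the live USB, reinstall GRUB into the Mint partition's boot sector so EasyBCD has something to chainload:

        # mount the Mint root, then put GRUB in the partition boot sector
        # (--force is needed because GRUB discourages partition installs)
        sudo mount /dev/sdb1 /mnt
        sudo grub-install --boot-directory=/mnt/boot --force /dev/sdb1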

    Read the article

  • Secondary backup server

    - by verdy
    I've been given a task to implement a backup solution for the event our website goes down. It is a dedicated server running CentOS 6. From what I've experienced on our server, it may go down because of a PHP application crash or a hardware failure. I have a couple of questions: In the first case, is it possible to have the server restart PHP automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself. In the second case, can I redirect requests to a secondary server? How can I do that, and what do I need other than another server? For now it is going to be a simple server which shows the user a static landing page, while the system notifies us via email that the primary server went down so that we can restart it manually. Is it possible to set up just a VPS or even a shared server for the secondary server, since it will only serve a static page? Thanks. Any help would be much appreciated.
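
    For the first case, the sort of thing I had in mind is a watchdog cron job; a rough sketch (the URL and service names are placeholders for our CentOS 6 setup):

        #!/bin/sh
        # run from cron every minute: if the site stops answering,
        # restart the web stack and email us
        if ! curl -fsS --max-time 10 http://www.example.com/ >/dev/null; then
            service httpd restart
            echo "restarted at $(date)" | mail -s "site down" ops@example.com
        fi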

    Read the article

  • adding or routing additional domain email addresses

    - by Mustafa Ismail Mustafa
    We have Exchange 2007, and we bought a new domain name while keeping the old one so that we can wean everyone off the old addresses. Now, I'm wondering how to go about this. I need to add the new domain as accepted and authoritative on the Exchange server. Emails to the new domain need to be routed to users' inboxes, and likewise for the old addresses; however, I want the reply-to in the header to be changed to the new email address automatically. I also want to set the new email addresses as the defaults. Ideally, I'd like to add a message at the bottom of every externally outgoing email saying that the new email is [email protected], but this is a nice-to-have, certainly not a must-have. I've added the new domain as authoritative, and managed to change the primary SMTP email addresses to the new one, but sent emails are not being routed to them, and neither are the old email addresses! Now how would I go about fixing all of that? I'm completely stumped! TIA
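
    What I've pieced together so far in the Exchange Management Shell; the names are placeholders, and I'm not sure this is complete, which is partly why I'm asking:

        New-AcceptedDomain -Name "newdomain" -DomainName newdomain.com -DomainType Authoritative
        # uppercase SMTP: marks the primary (reply-to) address;
        # lowercase smtp: keeps the old address deliverable
        New-EmailAddressPolicy -Name "newdomain primary" -IncludedRecipients AllRecipients -EnabledEmailAddressTemplates "SMTP:%m@newdomain.com","smtp:%m@olddomain.com"
        Update-EmailAddressPolicy -Identity "newdomain primary"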

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5, starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so RAID-Z1 looked like a good fit, but evidently I would only be able to expand by adding additional RAID5 (RAID-Z1) sets rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: will ZFS be able to correct, or at least detect, corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
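
    Concretely, the layering I have in mind (device names are examples): a plain pool's checksums should let ZFS detect write-hole corruption, and setting copies=2 would let it repair damaged data blocks from the duplicate, at the cost of half the space:

        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]
        zpool create archive /dev/md0
        zfs set checksum=sha256 archive
        zfs set copies=2 archive   # optional: self-healing for data blocks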

    Read the article

  • Trying to use a SmartHost with my Exchange 2010 server

    - by Pure.Krome
    Hi folks, I'm trying to use a SmartHost with my Exchange 2010 server. SmartHost details:

        Secure SMTPS: securemail.internode.on.net 465 <-- note: that's port 465
        Configure your existing SMTP settings (in your email program) to use authentication (enter your Internode username and password, with your username as [email protected]) and enable SSL for sending email (SMTPS).

    So I've added the smart host details to my Org Config -> Hub Transport. I then used PowerShell to set the port:

        Set-SendConnector "securemail.internode.on.net" -Port 465

    I've then added my username/password (as suggested above) to the SmartHost as Basic Authentication (with no TLS). Then I try sending an email and get the following error message:

        451 4.4.0 Primary target IP address responded with: "421 4.4.2 Connection dropped due to ConnectionReset."

    So I'm not sure how to continue. I also tried ticking the TLS box, but I still get the same error. If I don't use SMTPS (secure SMTP, on port 465) and instead use basic SMTP on port 25 with no authentication, email gets sent. Any ideas? EDIT: BTW, I can telnet to that server on port 465 from my mail server... just to make sure I'm not getting firewalled, etc.
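
    One theory I'm testing: the send connector may only speak STARTTLS rather than the implicit SSL that port 465 expects, in which case the submission port might work (a guess, not a confirmed fix; connector name as above):

        Set-SendConnector "securemail.internode.on.net" -Port 587
        Set-SendConnector "securemail.internode.on.net" -SmartHostAuthMechanism BasicAuthRequireTLS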

    Read the article

  • What is the minimal steps to setup a client-server network using Windows Server 2008 R2 standard?

    - by Motivated Student
    Background
    I have one server with Windows Server 2008 R2 Standard installed, but it has not been configured. This server has 2 LAN adapters: one connected to the ISP and the other connected to a hub/switch. Other computers working as clients are connected to the same hub/switch as the server. IP printers, IP scanners, and IP cameras are also connected to the same hub/switch. Note: I am a newbie. I only know how to plug in RJ-45 connectors and assemble computer peripherals. I have no prior experience with Windows Server at all, so please teach me from a newbie's point of view.
    Objective
    I want to establish the following: Each client can access the internet, printers, and scanners after it has been successfully authenticated by the server. Unauthenticated clients cannot access the internet, printers, etc. The server hosts a local site, and clients can browse internally using a private domain www.company.com. If the same domain name has been used by someone else on the internet, my private domain must override the public one.
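
    From my reading, the roles this design implies could be staged from PowerShell roughly like this (a sketch; I may be missing pieces, and I'm assuming NPAS pulls in the NPS/RRAS bits that handle the authenticated gateway):

        Import-Module ServerManager
        Add-WindowsFeature AD-Domain-Services, DNS, NPAS -IncludeAllSubFeature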

    Read the article

  • LSI MegaRAID LINUX got Optimal after degradation but strange POST message

    - by kesrut
    A Linux server box with an LSI MegaRAID controller got degraded, but after some time the RAID status changed back to Optimal:

        Adapter 0 -- Virtual Drive Information:
        Virtual Drive: 0 (Target Id: 0)
        Name                 :
        RAID Level           : Primary-1, Secondary-0, RAID Level Qualifier-0
        Size                 : 2.727 TB
        Mirror Data          : 2.727 TB
        State                : Optimal
        Strip Size           : 256 KB
        Number Of Drives per span: 2
        Span Depth           : 3
        Default Cache Policy : WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Current Cache Policy : WriteThrough, ReadAdaptive, Cached, No Write Cache if Bad BBU
        Default Access Policy: Read/Write
        Current Access Policy: Read/Write
        Disk Cache Policy    : Disk's Default
        Encryption Type      : None
        Is VD Cached: No

    But now I'm getting this RAID BIOS POST message:

        Your battery is either charging, bad or missing, and you have VDs configured for write-back mode. Because the battery is not currently usable, these VDs will actually run in write-through mode until the battery is fully charged or replaced if it is bad or missing.

    (Image: http://cl.ly/image/1h1O093b1i2d)

    So could a battery issue have caused the problem? Here is the battery information:

        BatteryType: iBBU
        Voltage: 4001 mV
        Current: 0 mA
        Temperature: 22 C
        Battery State : Operational
        BBU Firmware Status:
        Charging Status              : None
        Voltage                      : OK
        Temperature                  : OK
        Learn Cycle Requested        : No
        Learn Cycle Active           : No
        Learn Cycle Status           : OK
        Learn Cycle Timeout          : No
        I2c Errors Detected          : No
        Battery Pack Missing         : No
        Battery Replacement required : No
        Remaining Capacity Low       : No
        Periodic Learn Required      : No
        Transparent Learn            : No
        No space to cache offload    : No
        Pack is about to fail & should be replaced : No
        Cache Offload premium feature required     : No
        Module microcode update required           : No

    Where could the problem be? I've disabled the alarms, but I do get them when they're enabled. I don't know how to find the root of the problem.
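
    The MegaCli queries I've been using to dig into it (the binary is MegaCli64 on this box; install paths vary):

        MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL              # charge state, learn cycle
        MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL # controller event log
        MegaCli64 -AdpBbuCmd -BbuLearn -aALL                  # start a manual learn cycle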

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and logshipping solution to achieve the following:

    15 minutes Recovery Point Objective (no more than 15 minutes of data loss at any time)
    5 minutes Recovery Time Objective (must be able to get the db up and running within 5 minutes)

    I'm considering using logshipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

    Using a 40 Mbit/sec fiber channel between the primary and disaster recovery (DRC) sites; the sites are about 600 km apart.
    At close of business, the amount of data generated is predicted to be about 150 MB/sec.
    Log backup is planned for every 5 min.

    Doing some rough calculation I came up with the following numbers: 40 Mbit/sec = 5 MB/sec at 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to 3 minutes of logshipping time, that equals ~900 MB over 3 minutes at 100% network efficiency, which leaves about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used can restore 900 MB in 1 minute, but assume it can. For the COB scenario: at 150 MB/sec, and considering the 3-minute logshipping window, that equals about 27 GB of data over 3 minutes... I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
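
    A quick sanity check of my arithmetic (units are MB and seconds):

        echo "40*1000/8/1000" | bc   # 40 Mbit/s = 5 MB/s
        echo "5*180" | bc            # 900 MB movable in a 3-minute window
        echo "150*180" | bc          # 27000 MB generated in 3 minutes at COB
        echo "150*180/5/60" | bc     # ~90 minutes to ship that burst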

    Read the article

  • Can't ping my Windows 7 machine from within a Windows XP virtual machine

    - by Jonathan Conway
    I have Windows 7 installed as my primary operating system on a laptop that's on my home (wireless) network. I'm using Microsoft Virtual PC 2007 SP1 to run a virtual machine of Windows XP SP3, in which I want to access the Windows 7 instance, both to browse a shared folder and to access the local Apache server. So far I can ping my Windows 7 IPv4 address and access the Apache server through the web browser over HTTP. However, using my machine name never seems to work: pinging it fails, and I can't access my Apache server with it either. The problem seems to be something to do with my machine's name being registered under IPv6 rather than IPv4. I'm at a loss what to do. Should I try to set up IPv6 on the virtual machine? I'm not sure how to go about that. Or maybe I should somehow get my machine name on Windows 7 to work with IPv4? Although I think it already does, because I can ping it from a separate box (running Ubuntu), which is only registered under IPv6.
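
    The checks I've been running from inside the XP guest, for what they're worth (WIN7-HOST stands in for my real machine name):

        rem force IPv4 when resolving the name
        ping -4 WIN7-HOST
        rem can NetBIOS find the name at all?
        nbtstat -a WIN7-HOST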

    Read the article

  • DNS, subdomain, and IPv6 -- possible to add subdomain.example.com NS record to an IPv6 host?

    - by mpbloch
    example.com is listed with a registrar, specifically answerable.com. I want to host a subdomain in-house, specifically home.example.com. I am using an IPv6 gateway, specifically gogo6, to have a public IPv6 address. The IP address looks like 2001:xxxx:xx47. Then http://[2001:xxxx:xx47] goes to my test site (an instance of IIS7). I can add a quad-A record for my primary site, home.example.com AAAA 2001:xxxx:xx47, and then http://home.example.com loads correctly. Must I add an A or quad-A record for every sub.home.example.com to my answerable.com DNS manager for example.com? Or can I delegate DNS queries for *.home.example.com to the machine at [2001:xxxx:xx47]? I have tried adding an AAAA record for tunnel.example.com pointing to [2001:xxxx:xx47] and then adding an NS entry for home.example.com pointing to tunnel.example.com, but browsing then results in a "DNS lookup error" from my browser. Is this a configurable scenario? Can DNS for a subdomain only be delegated to IPv4 addresses?
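
    What I think the delegation would look like in the example.com zone, if NS records pointing at an IPv6-only host are allowed (2001:db8::47 is a documentation address standing in for my real gogo6 one):

        home.example.com.     IN NS    ns.home.example.com.
        ns.home.example.com.  IN AAAA  2001:db8::47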

    Read the article

  • Increasing Java's heapspace in Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat, and I was told to add this line:

        export CATALINA_OPTS=-Xms16m -Xmx256m;

    into the startup.sh script. I did so (at the beginning) but got the error:

        export: 24: -Xmx256m: bad variable name

    Where am I supposed to add it? Am I doing something else wrong?

        export CATALINA_OPTS=-Xms16m -Xmx256m;

        # Better OS/400 detection: see Bugzilla 31132
        os400=false
        darwin=false
        case "`uname`" in
        CYGWIN*) cygwin=true;;
        OS400*) os400=true;;
        Darwin*) darwin=true;;
        esac

        # resolve links - $0 may be a softlink
        PRG="$0"

        while [ -h "$PRG" ] ; do
          ls=`ls -ld "$PRG"`
          link=`expr "$ls" : '.*-> \(.*\)$'`
          if expr "$link" : '/.*' > /dev/null; then
            PRG="$link"
          else
            PRG=`dirname "$PRG"`/"$link"
          fi
        done

        PRGDIR=`dirname "$PRG"`
        EXECUTABLE=catalina.sh

        # Check that target executable exists
        if $os400; then
          # -x will Only work on the os400 if the files are:
          # 1. owned by the user
          # 2. owned by the PRIMARY group of the user
          # this will not work if the user belongs in secondary groups
          eval
        else
          if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
            echo "Cannot find $PRGDIR/$EXECUTABLE"
            echo "This file is needed to run this program"
            exit 1
          fi
        fi

        exec "$PRGDIR"/"$EXECUTABLE" start "$@"
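
    From the error message, the shell seems to be treating -Xmx256m as a second variable name because the value contains an unquoted space; quoting it should presumably fix that:

        # quoted, so sh exports one variable holding both flags
        export CATALINA_OPTS="-Xms16m -Xmx256m"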

    Read the article

  • Installation of Active Directory on separate VM from DNS does not entirely work - not sure why

    - by René Kåbis
    Not sure what I am doing wrong here. I have a moderately midrange server (16 cores, 2 GHz, 32 GB ECC REG RAM, 6 TB storage, nothing too extreme) running Hyper-V (Server 2012 R2 Enterprise) to provision virtual machines. So why an AD separate from DNS? I want redundancy: I want to be able to move VMs and back them up individually, and not have too many services on any one VM. I have already provisioned a VM with DNS and set it up as follows:

    1. Set up static IPs for everyone involved.
    2. Installed the DNS service on the DNS VM.
    3. Created a forward lookup zone and a reverse lookup zone (primary zone) xyz.ca.
    4. Configured the zones to use nonsecure and secure dynamic updates (I will change this to secure later, after the domain controller is online).
    5. Created an A record for the DC in the forward lookup zone (and a reverse PTR).
    6. Changed the DC's DNS server (network settings) to the new DNS server.
    7. Checked that I can ping the DNS server from the new DC by hostname.

    When I went ahead and ran dcpromo on the DC and unchecked the "install DNS" option, everything seemed to go well (no error messages), but I saw no changes on the DNS server whatsoever (no additional settings). Plus, the DNS server seems to be unable to join the domain, as it claims that the domain is not discoverable. As a final note, I do run Symantec Endpoint Protection, which includes a firewall, with most settings at their defaults. I have not yet tried turning this off, but my experience has been that if a service would open up a port on the Windows firewall, it would do the same through Symantec; there is pretty tight integration these days between corporate-class AV and Windows. I have a template vhdx fully set up (just short of any special roles and features) that I can use to replace the current AD VM, so doing this all over again is not too much skin off of my nose.
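
    The checks I plan to run from the DNS VM (zone name as above; I'm assuming nltest /dsregdns on the DC will re-register its records if the dynamic update never happened):

        dnscmd /zoneinfo xyz.ca
        nslookup -type=SRV _ldap._tcp.dc._msdcs.xyz.ca
        rem and on the DC itself:
        nltest /dsregdns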

    Read the article

  • How do I connect a 2008 server to a 2003 server active directory?

    - by Matt
    Our DC is running Windows Server 2003. I've just set up Windows Server 2008 with Terminal Server running on it. When setting the Terminal Server permissions, it was able to use a group name read from the domain, and in the DC the new terminal server shows up as a computer in the domain. I can also log in as a domain user even though that user doesn't exist locally on the new server. However, when I go to set sharing permissions on the new machine, it doesn't show my domain as a location; instead it is only looking at location "machinename" and not allowing the domain to be seen or added. Is there something I'm missing? OK, lots of errors in the event log. We have this:

        The winlogon notification subscriber is taking long time to handle the notification event (Logon).

    Followed by this:

        The winlogon notification subscriber took 121 second(s) to handle the notification event (Logon).

    Followed by:

        The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has successfully processed. If you do not see a success message for several hours, then contact your administrator.

    I think this might be the same problem I'm having: http://serverfault.com/questions/24420/primary-domain-controller-slow

    Solved. The issue was that I had changed from DHCP to static and entered the wrong DNS server IP, i.e. the firewall's instead of the DC/DNS server's.

    Read the article

  • Setup a new domain controller over a temporary VPN, but now Windows delays startup?

    - by Kris Anderson
    I'm migrating servers from colo locations to Amazon's VPC EC2 instances. If anyone hasn't worked with Amazon VPC before, VPN is a pain in the arse! Anyway, I set up a new server that acts as the domain controller for our Amazon VPC. In order to migrate all the user accounts from our existing domain controllers, I manually connected to our colo VPN using my user account on the new Amazon EC2 machine. I was able to join the domain, and the new Amazon server became another domain controller on our network. So far so good. The problem I'm having is that when booting the EC2 domain controller (which is no longer connected to the VPN, so it can't communicate with the existing controllers), it takes a good 6-8 minutes before I can remote into the server (instead of the 1-2 minutes it should take). Also, during this time most of the services we run (like IIS) give 404 errors, until the 6-8 minutes have passed. It's almost like the domain controller is attempting to reach the other domain controllers first, and after 6-8 minutes it falls back to the one located on the local machine? I don't think that's what's happening, though, because Server 2008 R2 doesn't have primary and backup domain controllers; they're all equal as far as Windows is concerned. For my network adapter I have only one DNS server listed, 127.0.0.1, so it should be looking up the local domain controller and not the other domain controllers it connected to over VPN when VPN was enabled. In the server logs I'm seeing these warnings pop up during a reboot:

        The winlogon notification subscriber is taking long time to handle the notification event (CreateSession).
        The winlogon notification subscriber took 409 second(s) to handle the notification event (CreateSession).

    Any ideas on what's happening here? I would try removing the existing domain controllers from the new Amazon EC2 machine, but I still need to connect over VPN a few times to migrate some data between the servers, and I don't want that change being reflected back to the other domain controllers in our colo locations.
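
    Diagnostics I'm planning to run on the EC2 DC to narrow this down (yourdomain.local is a placeholder):

        dcdiag /test:dns /v
        nltest /dsgetdc:yourdomain.local
        repadmin /replsummary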

    Read the article

  • How do you automatically close 3rd party applications when LiberKey is shut down?

    - by NoCatharsis
    Within LiberKey, I have added my own portable applications that are not included in the LiberKey library. When you go into the Properties menu for an app in the LiberKey UI, the Advanced tab has an option for Autoexecute. This dropdown menu seems to have no visible effect, at least in my current installation. I found that I could right-click within the primary GUI, select "Add software group", add all 3rd party applications, then go to the Advanced tab within THAT Properties screen and select Autoexecute - "Always on startup". This solved the problem of starting the apps when LiberKey starts. However, now I'm having the same issue when closing LiberKey. I have created a new 3rd party app entry that calls the same .exe but passes the parameter "/close". I then went to the Advanced tab and selected Autoexecute - "Always on shutdown". Seems pretty logical, right? But the apps will not close on LiberKey shutdown. I cannot handle the app close-outs the same way I handled the startup issue, with a software group, because that Autoexecute dropdown does not have an "Always on shutdown" option. Unfortunately, many of the Q&A forums on liberkey.com are in French, and I took Spanish in high school, so I've not been able to find a workable answer. Any suggestions?

    Read the article

  • postgresql deleting old tables

    - by BB
    I have a postgresql database which stores my RADIUS connection information. What I want to do is only store a month's worth of logs. How would I craft a SQL statement that I can run from cron to go and delete any rows older than a month? The date is taken from the acctstoptime column, in this format: 2010-01-27 16:02:17-05. Here is the table in question:

        -- Table: radacct
        -- DROP TABLE radacct;
        CREATE TABLE radacct
        (
          radacctid bigserial NOT NULL,
          acctsessionid character varying(32) NOT NULL,
          acctuniqueid character varying(32) NOT NULL,
          username character varying(253),
          groupname character varying(253),
          realm character varying(64),
          nasipaddress inet NOT NULL,
          nasportid character varying(15),
          nasporttype character varying(32),
          acctstarttime timestamp with time zone,
          acctstoptime timestamp with time zone,
          acctsessiontime bigint,
          acctauthentic character varying(32),
          connectinfo_start character varying(50),
          connectinfo_stop character varying(50),
          acctinputoctets bigint,
          acctoutputoctets bigint,
          calledstationid character varying(50),
          callingstationid character varying(50),
          acctterminatecause character varying(32),
          servicetype character varying(32),
          xascendsessionsvrkey character varying(10),
          framedprotocol character varying(32),
          framedipaddress inet,
          acctstartdelay integer,
          acctstopdelay integer,
          freesidestatus character varying(32),
          CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE radacct OWNER TO radius;

        -- Index: freesidestatus
        -- DROP INDEX freesidestatus;
        CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);

        -- Index: radacct_active_user_idx
        -- DROP INDEX radacct_active_user_idx;
        CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL;

        -- Index: radacct_start_user_idx
        -- DROP INDEX radacct_start_user_idx;
        CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
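
    The kind of statement I have in mind, wrapped for cron (connection flags are placeholders for my real ones):

        # crontab: purge at 03:00 daily, keeping one month by acctstoptime
        0 3 * * * psql -U radius -d radius -c "DELETE FROM radacct WHERE acctstoptime < now() - interval '1 month';"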

    Read the article

  • XP Client for NFS failure dialog on startup, but drive mapping works

    - by Matt Bennett
    I'm mounting an NFS share on some Windows machines using the tools that come in the Services for UNIX Administration toolkit. I've set up the User Name Mapping service to use local passwd and group files. I had to manually start the User Name Mapping service, and then created an 'advanced map' from the XP machine's user to a UID that exists on my NFS server, like so:

        Windows User: Matt Bennett
        UNIX Domain: PCNFS
        UNIX User: mattbennett
        UID: 10250
        Primary: *

    I can map a network drive without any issues, and it correctly identifies the UID and GID to use, but when I reboot I get this message:

        "An error occurred while connecting to the NFS server. Make sure that the Client for NFS service has started. If the problem persists make sure Client for NFS service can communicate with User Name Mapping or PCNFS server."

    After dismissing the dialog, the machine finishes booting and the network drive is there in My Computer with the title "Disconnected Network Drive", but I can open it and see the network share without a problem, at which point it drops the 'disconnected' from its title. It seems like the services are starting in the wrong order or something, so the first attempt to connect fails but subsequent ones work as expected. There don't seem to be any symptoms apart from the dialog box, but obviously something's not quite right. What have I done wrong? Thanks, Matt.
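
    One thing I'm tempted to try is making Client for NFS depend on User Name Mapping so it starts after it; I haven't verified the service names, so check them with "sc query" first:

        rem assumed names: NfsClnt (Client for NFS), MapSvc (User Name Mapping)
        sc config NfsClnt depend= MapSvc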

    Read the article

  • MSSQLSERVER Will Not Start - Event ID 913 and 1814

    - by ThaKidd
    Hello ServerFault! I need some serious help. I have a major database server down and am scratching my head at how to fix it. The server was hit by rolling blackouts last week in Dallas, and since then Microsoft SQL 2005 SP2 will not start up. I am getting the following errors (both when starting the service and while trying to execute sqlservr.exe -c -f -m):

        Event Type: Error
        Event Source: MSSQLSERVER
        Event ID: 913
        Could not find database ID 3. Database may not be activated yet or may be in transition. Reissue the query once the database is available. If you do not think this error is due to a database that is transitioning its state and this error continues to occur, contact your primary support provider. Please have available for review the Microsoft SQL Server error log and any additional information relevant to the circumstances when the error occurred.

    and...

        Event Type: Information
        Event Source: MSSQLSERVER
        Event ID: 1814
        Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.

    I have tried renaming tempdb.mdf to tempdb.old with no success. I have checked and have 193 GB of free hard drive space. What else might cause this problem? Could the server need a chkdsk run on it, or do I need to be looking at some other area of the database server? Any help is greatly appreciated. Thank you in advance.
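
    For context, the recovery path I've read about but haven't tried yet: start the instance with recovery limited to master, point tempdb somewhere sane, then restart (the path and the tempdev logical name are defaults I still need to confirm):

        net start MSSQLSERVER /f /T3608
        sqlcmd -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME=tempdev, FILENAME='D:\MSSQL\tempdb.mdf');"
        net stop MSSQLSERVER
        net start MSSQLSERVER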

    Read the article

  • Use Mac OS X Server As Development Environment

    - by macinjosh
    I've installed Mac OS X Server 10.6.3 on my laptop to use as my normal OS. I do a lot of web development and thought it would be handy to run OS X Server so I could more easily manage my local development environment (Apache virtual hosts, hostnames for each local site, etc.). I'm really enjoying the new setup except for one problem: DNS. My ideal situation would be to add a site (some-site.local) in the Web service and then go to the DNS service and add a primary record for the new site. I actually got this working at one point, but after a reboot it stopped working! The records look the same as they did before the reboot, but the site doesn't come up in Safari. Here is a list of my needs:

    Need to be able to add new domains at a whim
    Domains always map to a site on the same box's Web service
    Local and external IPs often change
    It would be nice if it worked on any network (i.e. WiFi at the airport or coffee shop)
    Sites only need to be accessible locally
    Configuration should stay put even after rebooting

    I've done some googling and used this as a bit of a guide. In the past I've used MAMP, and then just a local Apache/PHP/MySQL install with a manually managed hosts file. I'd rather not go back.
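
    As a fallback I've considered skipping the DNS service entirely and running dnsmasq with a wildcard, which would at least survive reboots and network changes (the .dev suffix is an arbitrary choice, and dnsmasq is not part of OS X Server):

        # /etc/dnsmasq.conf: answer *.dev with the local Apache
        address=/.dev/127.0.0.1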

    Read the article

  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal DNS servers (bind 9). Our problem comes when one of the internal DNS servers becomes unavailable: at that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock Linux resolver doesn't really have the concept of "failing over" to a different DNS server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses, our services perform much more slowly if a primary DNS server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handle this issue? I can see 3 possible types of solutions:

    1. Use linux-ha/Pacemaker and failover IPs (so the DNS VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing Pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).
    2. Run a local DNS server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.
    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks DNS servers as up or down, and won't use 'down' DNS servers.

    Anycasting seems to work only at the IP routing level and depends on route updates for server failure. Multicasting seemed like it would be a perfect answer, but bind does not support broadcasting or multicasting, and the docs I could find seem to suggest that multicast DNS is more aimed at service discovery and auto-configuration rather than regular DNS resolving. Am I missing an obvious solution?
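
    For completeness, the resolv.conf tweaks I alluded to; they shorten the stalls, but a fresh query can still burn a timeout on the dead server (addresses are examples):

        options timeout:1 attempts:2 rotate
        nameserver 10.0.0.11
        nameserver 10.0.0.12
        nameserver 10.0.0.13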

    Read the article
