Search Results

Search found 89681 results on 3588 pages for 'cross server'.


  • Unable to communicate with EWS from Exchange Server

    - by kschieck
    We are currently running a two-server Exchange environment with the Edge services on their own server. We are in the process of deploying a piece of software that uses the EWS API, which has brought me to this forum; the software ties into the EWS service and uses it to forward messages, and this forwarding is failing. Using the software's error logs I have found that accessing EWS from the Exchange server itself is not possible.

    From my work machine, and from an external address, I can browse to https://webmail.companyname.com/ews/exchange.asmx and be prompted for a username and password; once I enter credentials I get a screen full of information from services.wsdl. The problem is that when I try the same URL from the Exchange server and get the credentials prompt, I cannot get past it. Even with the same credentials that work externally and from my desk, it just keeps looping back to the prompt. Capture from the software log: (11:41:32.6415 000017e4 System.Net.WebException: The request failed with HTTP status 401: Unauthorized.) I see the same behaviour when trying https://webmail.companyname.com/Autodiscover/Autodiscover.xml.

    Environment information: Server 2008 STD 64-bit, Exchange 2007 SP1, purchased certificate for webmail.companyname.com. I have also confirmed that all services have the proper internal and external URLs. Any help would be appreciated.
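
    A quick way to narrow this down from the server itself is to call EWS outside the browser with explicit NTLM credentials; a minimal sketch (the hostname is the one from the question, the account is a placeholder):

        # force NTLM and supply credentials explicitly; -v shows each 401/authenticate round trip
        curl -v --ntlm -u 'COMPANYNAME\someuser' https://webmail.companyname.com/ews/exchange.asmx
        # if this loops on 401 only when run on the Exchange server itself, the failure is specific
        # to local access (for example a loopback-check restriction on the server's own FQDN)
        # rather than to EWS or the certificate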

    Read the article

  • Internet slowed down because of SQUID Server setup

    - by Ranjith Kumar
    Recently I set up a Squid server for our office. I have a computer (A) with two Ethernet cards, one for the internet and the second one for the local network. It has Ubuntu Server OS with squid-server and dhcp3-server installed, and I have added a few iptables rules so it works like a router and redirects all HTTP traffic to port 3128. This link is my reference. Everything worked fine for 2 days, then all of a sudden the internet speed went down drastically. When I connected the internet cable to my laptop to test the internet speed it was fine. Again, when I reconnected it back to computer A everything was normal. This happened 4 times in a week. Could anyone here please help me understand why the internet speed is going down, and why it becomes normal when I reconnect the cable?

    EDIT: Rebooting the system (computer A) didn't make a difference. I have changed iptables so that HTTP traffic no longer redirects to port 3128, and there is still no change in the internet speed, so I think the problem is not with Squid but with something else. Here are my iptables rules:

        SQUID_SERVER="10.1.1.1"
        INTERNET="eth1"
        LAN_IN="eth0"
        SQUID_PORT="3128"
        PROXYSERVERS=(Atlanta Baltimore Boston Chicago Dallas Denver Houston KansasCity LosAngeles Miami NewYork Philadelphia Phoenix SanAntonio SanDiego SanJose Seattle Washington)
        SERVERLEN=${#PROXYSERVERS[*]}
        I=0
        iptables -F
        iptables -X
        iptables -t nat -F
        iptables -t nat -X
        iptables -t mangle -F
        iptables -t mangle -X
        modprobe ip_conntrack
        modprobe ip_conntrack_ftp
        echo 1 > /proc/sys/net/ipv4/ip_forward
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT
        iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
        iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
        iptables -A INPUT -i $LAN_IN -j ACCEPT
        iptables -A OUTPUT -o $LAN_IN -j ACCEPT
        while [ $I -lt $SERVERLEN ]; do
            iptables -t nat -A PREROUTING -i $LAN_IN -p tcp -d ${PROXYSERVERS[$I]}.wonderproxy.com --dport 80 -j ACCEPT
            let I++
        done
        iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
        iptables -A INPUT --protocol tcp --dport 80 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 443 -j ACCEPT
        iptables -A INPUT --protocol tcp --dport 22 -j ACCEPT
        iptables -A INPUT -j LOG
        iptables -A INPUT -j DROP
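
    One thing worth checking when a NAT/redirect box slows down intermittently while the line itself tests fine is the connection-tracking table filling up; a small diagnostic sketch (the sysctl key names vary a little between kernel versions):

        # how full is the conntrack table? (older kernels use the net.ipv4.netfilter.ip_conntrack_* keys)
        sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max 2>/dev/null \
          || sysctl net.ipv4.netfilter.ip_conntrack_count net.ipv4.netfilter.ip_conntrack_max
        # the kernel logs drops explicitly when the table overflows
        dmesg | grep -i 'table full'
        # and Squid's own logs show whether requests reach it slowly or not at all
        tail -f /var/log/squid/access.log /var/log/squid/cache.log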

    Read the article

  • Accessing SQL Server using an IP Address and Port Number ... Help!

    - by Mike
    I need to access a SQL Server that is on a machine behind a firewall, and you reach this machine using an IP address like 95.95.95.33:6930 (not the real IP address). In other words, by accessing 95.95.95.33 on port 6930, the firewall routes the request to that particular machine. My question is: how do you construct a connection string to reach the machine at 95.95.95.33:6930 and then further access the SQL Server on port 1433, or maybe a different port like 8484? Thanks, Mike
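
    For what it's worth, SQL Server client tools take the port as part of the server name, separated by a comma rather than a colon, and the client only ever talks to the port the firewall exposes; the forwarding to 1433 (or 8484) happens on the firewall, not in the connection string. A minimal sketch with the placeholder address from the question and a made-up login:

        # sqlcmd: server,port (comma, not colon); the firewall forwards 6930 to SQL Server's real port
        sqlcmd -S tcp:95.95.95.33,6930 -U some_login -P 'some_password' -Q "SELECT @@VERSION"
        # the equivalent ADO.NET-style data source would look like:
        #   Data Source=95.95.95.33,6930;Initial Catalog=SomeDb;User ID=some_login;Password=some_password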

    Read the article

  • Error 18456. State 6 "Attempting to use an NT account name with SQL Server Authentication."

    - by Aragorn
    2010-05-06 17:21:22.30 Logon Error: 18456, Severity: 14, State: 6.
    2010-05-06 17:21:22.30 Logon Login failed for user . Reason: Attempting to use an NT account name with SQL Server Authentication. [CLIENT: ]

    The authentication mode is "Mixed", and it's MS SQL Server 2008. What might be the issue? Do you think the user name was not configured properly? Is there any link available for granting the right privileges and configuring the user account, so that I can check the rights and privileges for the account I am using? Thanks
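
    State 6 specifically means a Windows-style name (DOMAIN\user) was supplied while the client asked for SQL Server authentication, so the usual fix is either switching the client to Windows authentication or giving it a plain SQL login. A minimal sketch of the two styles, with hypothetical server and account names:

        # Windows authentication: the DOMAIN\user identity comes from the session, not from -U
        sqlcmd -S myserver -E -Q "SELECT SUSER_SNAME()"
        # SQL Server authentication: the login must be a SQL login, not DOMAIN\user
        sqlcmd -S myserver -U app_login -P 'app_password' -Q "SELECT SUSER_SNAME()"
        # creating such a login (run as an administrator) looks roughly like:
        #   CREATE LOGIN app_login WITH PASSWORD = '...';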

    Read the article

  • Suggestions for accessing SQL Server from internet

    - by Ian Boyd
    I need to be able to access a customer's SQL Server, and ideally their entire LAN, remotely. They have a firewall/router, but the guy responsible for it is unwilling to open ports for SQL Server, and is unable to support PPTP forwarding. The admin did open VNC on a non-standard port, but since they have a dynamic IP it is difficult to find them all the time.

    In the past I have created a VPN connection that connects back to our network. But that didn't work so well, since when I need access I have to ask the computer-phobic users to double-click the icon and press Connect. I did try creating a scheduled task that attempts to keep the VPN connection back to our office up at all times by running:

        >rasdial "vpn to me"

    But after a few months the VPN connection went insane, and thought it was both, and neither, connected and disconnected; and the VPN connection wouldn't work again until the server was rebooted.

    Can anyone think of a way I can access the customer's LAN that doesn't involve:
    - opening ports on the router
    - needing to know their external IP
    - customer interaction of any kind

    Pre-empting the usual replies ("blah blah blah use a VPN", "the VNC protocol has known weaknesses", "you are unwise to lower your defenses", "it's not wise to expose SQL Server directly to the internet", "you stole that line from Empire"): the customer doesn't care about any of that. The customer wants things to work.

    Read the article

  • Multi-IP address zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet. (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the main host.domain.com of the machine pointing to .226. Our host has ended up being listed on the SORBS DUHL list (even though it's in a server farm). According to them you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to their A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail being rejected from other servers because when Postfix (under Zimbra control) sends out mail, it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.
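
    A quick way to check whether the records now line up the way SORBS and receiving servers expect is to compare the PTR of the sending address, the A record of the HELO name, and the MX target; a sketch, with an address from the documentation range standing in for the real .226:

        # the PTR of the address mail actually leaves from should name the HELO host...
        dig +short -x 203.0.113.226
        # ...and that name's A record should include the same address
        dig +short host.domain.com A
        # the MX target should resolve consistently as well
        dig +short domain.com MX
        dig +short host.domain.com MX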

    Read the article

  • Use SharePoint Search to crawl Project Server project metadata?

    - by Kit Menke
    Our environment consists of Project Server 2007 and MOSS 2007. We have around 750 projects and lots of "Enterprise Custom Fields" set up to track all of the metadata associated with a project. Our main requirement is to be able to search/filter/group/sort all of these projects by metadata in SharePoint. Our current process involves syncing this custom metadata into a SharePoint list (which requires a LOT of maintenance). Question: Is it possible to leverage SharePoint search to crawl/index these metadata fields in Project Server? How would I go about setting this up?

    Read the article

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    OK, so here's our setup: we have two Windows 2003 domain controllers, and I am trying to replace them with Windows 2008 R2. The Win2k3 servers are DC01 and DC02; the Win2k8 servers are DC1 and DC2.

    I prepared the Windows Server 2003 forest schema for a domain controller that runs Windows Server 2008 or Windows Server 2008 R2. Then, with both of the new servers up as member servers, I dcpromo'd DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC, RID pool manager and Infrastructure master FSMO roles to the new DC (DC1). The Schema master and Domain naming master are still on the old DC (DC01).

    The first issue I'm encountering is when I dcpromo the second DC (DC2) and select "Replicate data over the network from an existing domain controller". I select the new DC to replicate from (DC1) and get the following error:

        "Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. The server is unwilling to process the request."

    Is this because the Schema master and Domain naming master roles are still on the old DC (DC01)? And if so, if I transfer the Schema master and Domain naming master roles to DC1, what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent; ANY downtime or interruption will result in me getting a verbal ass kicking from my I.T. Director. Both of the new servers' DNS points to the old DNS servers (DC01 and DC02), not to themselves, by the way. Thanks in advance -Chris

    Read the article

  • How to make a working TFTP server on CentOS 6.2

    - by Dima
    I'm trying to set up a TFTP server on CentOS 6.2. The /etc/xinetd.d/tftp configuration file is the following:

        service tftp
        {
            disable     = no
            socket_type = dgram
            protocol    = udp
            wait        = yes
            user        = root
            server      = /usr/sbin/in.tftpd
            server_args = -s /tftpboot -vvv
            per_source  = 11
            cps         = 100 2
            flags       = IPv4
        }

    SELinux and the firewall are disabled, and the /etc/hosts.allow and /etc/hosts.deny files are empty. When I try to get a file from the TFTP server, the transfer always fails and I see the following errors in /var/log/messages:

        Jul 11 03:16:53 localhost xinetd[4155]: xinetd Version 2.3.14 started with libwrap loadavg labeled-networking options compiled in.
        Jul 11 03:16:53 localhost xinetd[4155]: Started working: 1 available service
        Jul 11 03:17:00 localhost xinetd[4155]: START: tftp pid=4157 from=192.168.10.3
        Jul 11 03:17:00 localhost in.tftpd[4158]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:00 localhost in.tftpd[4158]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:01 localhost in.tftpd[4159]: RRQ from 192.168.10.3 filename 1
        Jul 11 03:17:01 localhost in.tftpd[4159]: sending NAK (0, Permission denied) to 192.168.10.3
        Jul 11 03:17:03 localhost in.tftpd[4160]: RRQ from 192.168.10.3 filename 1

    The tftpboot directory permissions are (output of ls -l):

        drw-rw-rw-. 3 root root 4096 Jul 11 03:32 tftpboot

    I also see that the tftpboot directory is shown by ls -l with a green background, unlike other files/directories (why? as far as I know the green background is for the sticky bit only). What did I do wrong? How can I make the TFTP server work?
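
    Going by the listing above, the directory mode is drw-rw-rw-: the execute (search) bit is missing, so in.tftpd cannot enter /tftpboot, which matches the "Permission denied" NAKs. A minimal sketch of the usual fix (the green background in ls output normally flags an other-writable directory, not just the sticky bit):

        chmod 755 /tftpboot          # readable and searchable; use 1777 instead if clients must also upload
        ls -ld /tftpboot             # expect something like drwxr-xr-x
        # if SELinux is ever re-enabled, the content also needs the right context:
        # restorecon -R /tftpboot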

    Read the article

  • Local DNS and Apache Server Configuration Interfering - example.com / www.example.com

    - by nicorellius
    I have a domain for my site: example.com. I am also running local DNS with these lines:

        www  IN  CNAME  server.<host_provider>.com.
        dev  IN  CNAME  server.<host_provider>.com.

    So www.example.com and dev.example.com go to the production and development sites, respectively, which are hosted by a hosting company. In my Apache configuration for the main site, I'm running a rewrite rule like this:

        RewriteEngine ON
        RewriteCond %{HTTP_HOST} ^example\.com$|!dev\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www\.%{HTTP_HOST}/$1 [R=302,L,NE]

    This rule seems to work: when you are off the network and go to example.com in the browser, you get redirected to www.example.com. The problem is that when I'm on the network and go to example.com, I get an error page saying the page can't be found. No server errors; just a page-can't-be-found, as if the local DNS causes the lookup to stop at that point. I'm also using Nettica for DNS service and have this A record in place:

        example.com    Host (A)    Default    xxx.xx.xxx.xx

    This handles the external DNS, but my problem seems to be related to my internal DNS. For example, inside my network I can go to servers on the network with addresses like this:

        server.example.com
        server1.example.com
        server2.example.com

    These are configured in my local DNS. I'm just not sure how to get past the "empty" subdomain and go to example.com. Adding to this since it might not be clear: if I'm outside the example.com network, on another network like example123.com, then when I go to example.com I'm redirected to www.example.com as expected, e.g. the Apache rewrite rule is working. Thanks in advance for any information.
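
    A small check that makes the gap visible from inside the network, plus the record that is probably missing; a sketch only, following the zone syntax of the www/dev lines quoted above:

        # from an internal machine, compare what the two names resolve to
        dig +short www.example.com A     # follows the CNAME to the hosting provider, so it works
        dig +short example.com A         # if this returns nothing, the internal zone has no apex record
        # likely fix: add an apex A record to the internal zone (the apex cannot be a CNAME), e.g.
        #   @   IN   A   xxx.xx.xxx.xx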

    Read the article

  • IIS6 Virtual SMTP server isn't coming back up automatically after a system restart

    - by Julian James
    I've got a virtual server running Win2008 RC2. I've set up IIS6 with a virtual SMTP server on it to be the mail provider for the websites I'm hosting there. It all works great, but if for some reason the server reboots (auto updates are still enabled - I'm trying to make this as little work as possible, as we've got a lot of clients), IIS6 doesn't restart the SMTP server. The failure causes 500 errors on the current setup, so I'm spending half the day apologising. Any ideas? In Services I've set everything to come back up automatically, but still no dice. As soon as I restart the SMTP, no problems, all the mail gets sent; it's working perfectly, it just won't restart on its own. I'd really rather not turn auto updates off, as we're such a small company I just can't spare the time to be manually updating 15 copies of Windows every time MS decide there's a security patch. All advice appreciated! BTW, I am a complete newb to these forums. I searched but couldn't find an answer, so please be nice. But firm. I've got to learn here.

    Read the article

  • How to set CA cert file for LDAP backend server in smbpasswd configuration

    - by hayalci
    I am having a problem with smbpasswd, an LDAP backend server and SSL/TLS certificates. The client machine that I run smbpasswd on is a Debian Etch machine, and the LDAP server is Sun DS running on Solaris. All of the following occurs on the client.

    When I disable SSL, by setting "ldap ssl = no" in smb.conf, the smbpasswd program works without errors. When I set "ldap ssl = start tls", the following messages are printed by smbpasswd and there is a long timeout period before it asks for any password:

        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        ..... long delay .....
        New SMB password:
        Retype new SMB password:
        Failed to issue the StartTLS instruction: Connect error
        Connection to LDAP server failed for the 1 try!
        smbpasswd: /tmp/buildd/openldap2-2.1.30/libraries/liblber/io.c:702: ber_get_next: Assertion `0' failed.
        Aborted

    I conducted some tests with "ldapsearch -ZZ". It was not working at first, but after I added the TLS_CACERT line to /etc/ldap/ldap.conf, /etc/libnss-ldap.conf and /etc/pam_ldap.conf, it started working. The relevant TLS sections in all those files are:

        ssl start_tls
        tls_checkpeer no
        tls_cacertfile /path/to/ca-root.pem
        TLS_CACERT /path/to/ca-root.pem

    But the smbpasswd program continued giving the error. I also tried creating an /etc/smbldap-tools/smbldap.conf file with the following content (after consulting the Debian docs for the smbldap-tools package):

        verify="optional"
        cafile="/path/to/ca-root.pem"

    But as far as I can see, smbpasswd comes with the samba-common package and does not use the configuration for the smbldap-tools utilities.

    My question is: how can I set which SSL CA certificate is used by the smbpasswd program?
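
    One way to separate "the CA file is wrong" from "smbpasswd is not reading it" is to push the CA file through the OpenLDAP client library's environment override, which tools linked against libldap generally honour; a sketch with a hypothetical directory-server hostname:

        # confirm the CA file actually validates the Sun DS certificate chain over StartTLS
        LDAPTLS_CACERT=/path/to/ca-root.pem ldapsearch -ZZ -x -H ldap://ldap.example.com -b '' -s base
        # then try smbpasswd with the same override in its environment
        LDAPTLS_CACERT=/path/to/ca-root.pem smbpasswd someuser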

    Read the article

  • Download JDK onto a remote server

    - by itsadok
    I want to get the latest JDK onto a server in a remote location. Downloading the JDK from Sun's website requires jumping through all kinds of hoops before you actually get the file. I'm not sure exactly whether they use cookies or my IP address, but simply copying the file URL and trying wget on the server doesn't work. Googling for mirrors of the JDK, I could only find old versions. Right now I'm left with the option of downloading it to my computer, then uploading it to the server. This feels slow and stupid. Anyone got a better idea?

    EDIT: Thanks for all the replies. Just to clarify, as I'm writing this I'm rsyncing the 78MB file to my server. It should be done in about an hour, so it's not such a big deal. However, since this is not the first time I'm doing this, I was hoping for a better solution for next time.

    Solution: what I ended up doing was

        sudo aptitude install lynx-cur
        www-browser http://java.sun.com/javase/downloads/

    From there it's mostly using the arrow and enter keys, and answering "Yes" to a lot of lynx security questions (about cookies and certificates). Thanks to resonator.
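
    Another approach that sometimes works for click-through downloads is to do the click-through in a local browser, then replay the final download URL together with its session cookie on the server; everything below is a placeholder sketch, not Sun's actual URL or cookie format:

        # copy the final redirect URL and the session cookie from the local browser, then on the server:
        wget -c --no-cookies \
             --header 'Cookie: <cookie string copied from the browser session>' \
             'http://download.example.com/jdk-6-linux-x64.bin'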

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together.

    Hardware Configuration(s)
    - Server room configuration
    - Server room temperature
    - Firmware Updates and Scheduling

    Storage Configuration(s)
    - Selecting a NAS box
    - Linux: Dealing with /tmp
    - Linux: Install apps in /var or /opt?

    Network Configuration(s)
    - Checking DNS health and compliance

    Security Practice(s)
    - Password (General) Best Practices
    - Password sharing methods
    - Windows Update
    - Updating Windows Servers that are hosts for VMs

    Network Service(s)

    User Service(s)
    - User Naming & Deletion

    Upgrade Process(es)

    Disaster Recovery
    - Checking Backups
    - Documenting an outage for a post-mortem review

    Last Edit: 2010-02-17

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveller software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • Thin server : `start_tcp_server': no acceptor (port is in use or requires root privileges) (RuntimeError)

    - by Rubytastic
    My Thin web server fails to start with an error message. I can hardly find any information or leads on how to fix this - does anyone have an idea? Thanks.

        Thin web server (v1.5.0 codename Knife)
        Maximum connections set to 1024
        Listening on 0.0.0.0:9292, CTRL+C to stop
        /srv/gamers/shared/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:526:in `start_tcp_server': no acceptor (port is in use or requires root privileges) (RuntimeError)
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:526:in `start_server'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/thin-1.5.0/lib/thin/backends/tcp_server.rb:16:in `connect'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/thin-1.5.0/lib/thin/backends/base.rb:55:in `block in start'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `call'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run_machine'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/eventmachine-1.0.0/lib/eventmachine.rb:187:in `run'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/thin-1.5.0/lib/thin/backends/base.rb:63:in `start'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/thin-1.5.0/lib/thin/server.rb:159:in `start'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/handler/thin.rb:13:in `run'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/server.rb:265:in `start'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/server.rb:137:in `start'
          from /srv/gamers/shared/bundle/ruby/1.9.1/gems/rack-1.4.1/bin/rackup:4:in `<top (required)>'
          from /srv/gamers/shared/bundle/ruby/1.9.1/bin/rackup:19:in `load'
          from /srv/gamers/shared/bundle/ruby/1.9.1/bin/rackup:19:in `'
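
    A minimal way to see what already owns the port from the log above (9292) before digging further; nothing here is specific to Thin:

        # which process is already listening on 9292?
        sudo lsof -iTCP:9292 -sTCP:LISTEN
        # or, if lsof is not installed:
        sudo netstat -tlnp | grep ':9292'
        # a stale thin left over from an earlier deploy is a common culprit; if so, its pid shows up above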

    Read the article

  • MySQL Server hitting 100% unexpectedly (Amazon AWS RDS)

    - by Luc
    Please help! We've been struggling with this one for months. This week we upped our RDS instance to the highest-performing instance, and although the occurrences have reduced, we're still having our DB all of a sudden hit 100%. It comes out of nowhere - sometimes 2am, sometimes midday. I've ruled out a DoS - our page access logs show normal traffic. I've ruled out memcached suddenly dying (hits and misses continue as normal). SHOW PROCESSLIST while we are having issues reports about 500 queries in the queue. If I kill them off or restart the server, they just keep coming back, and then eventually, out of nowhere, our server resumes back to normal - sometimes up to 3 hours later. Our badly performing queries take .02 seconds to execute when the server eventually returns to normal, but while we're in this 100% CPU psycho phase, those queries never finish executing. Please help!!!!! Anybody know anything about MySQL query optimization? Could it be the server deciding to use different indexes all of a sudden, which puts it into a spiral?
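
    When the next spike hits, it helps to capture what the server is doing and what plans it has picked, since a changed index choice (the theory above) shows up in EXPLAIN; a sketch with a hypothetical RDS endpoint and admin account:

        # snapshot the running statements and InnoDB state during the spike
        mysql -h mydb.abcdefg.us-east-1.rds.amazonaws.com -u admin -p \
              -e "SHOW FULL PROCESSLIST; SHOW ENGINE INNODB STATUS\G" > spike-$(date +%s).log
        # then run EXPLAIN on one of the stuck queries from that snapshot, and again once the server
        # has calmed down, to see whether the chosen index actually differs between the two states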

    Read the article

  • What tools are people using to measure SQL Server database performance?

    - by Paul McLoughlin
    I've experimented with a number of techniques for monitoring the health of our SQL Servers, ranging from using the Management Data Warehouse functionality built into SQL Server 2008, through other commercial products such as Confio Ignite 8 and also of course rolling my own solution using perfmon, performance counters and collecting of various information from the dynamic management views and functions. What I am finding is that whilst each of these approaches has its own associated strengths, they all have associated weaknesses too. I feel that to actually get people within the organisation to take the monitoring of SQL Server performance seriously whatever solution we roll out has to be very simple and quick to use, must provide some form of a dashboard, and the act of monitoring must have minimal impact on the production databases (and perhaps even more importantly, it must be possible to prove that this is the case). So I'm interested to hear what others are using for this task? Any recommendations?
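
    For comparison with the heavier products mentioned, the roll-your-own end of the spectrum can be as small as polling a DMV on a schedule and pushing the output into a dashboard; a sketch against a hypothetical instance name:

        # top waits since the last restart - cheap to run and easy to graph over time
        sqlcmd -S myserver -E -Q "SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type NOT LIKE 'SLEEP%' ORDER BY wait_time_ms DESC"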

    Read the article

  • Is there a SQL Server error numbers C# wrapper anyone knows of?

    - by Mr Grok
    I really want to do something useful when a PK violation occurs but I hate trapping error numbers... they just don't read right without comments (they're certainly not self documenting). I know I can find all the potential error numbers at SQL Server books online but I really want to be able to pass the error number to some helper class or look it up against a Dictionary of some sort rather than have non-descript err numbers everywhere. Has anyone got / seen any code anywhere that encapsulates the SQL Server Error numbers in this way as I don't want to re-invent the wheel (or I'm lazy maybe).

    Read the article

  • How does SQL Server treat statements inside stored procedures with respect to transactions?

    - by Sleepless
    Hi all! Say I have a stored procedure consisting of several separate SELECT, INSERT, UPDATE and DELETE statements. There is no explicit BEGIN TRANS / COMMIT TRANS / ROLLBACK TRANS logic. How will SQL Server handle this stored procedure transaction-wise? Will there be an implicit transaction for each statement? Or will there be one transaction for the whole stored procedure? Also, how could I have found this out on my own using T-SQL and/or SQL Server Management Studio? Thanks!
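
    For the "find out on my own" part, @@TRANCOUNT answers it directly: under the default autocommit mode each individual statement is its own transaction, so with no explicit BEGIN TRAN the count stays at 0, and nothing wraps the procedure as a whole. A minimal probe against a hypothetical instance:

        # expect 0 for the first SELECT (autocommit, each statement commits on its own)
        # and 1 for the second (an explicit transaction is now open)
        sqlcmd -S myserver -E -Q "SELECT autocommit_trancount = @@TRANCOUNT; BEGIN TRAN; SELECT explicit_trancount = @@TRANCOUNT; ROLLBACK TRAN;"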

    Read the article

  • erlyvideo server doesn't start automatically after reboot

    - by electroid
    I have installed the erlyvideo server on Ubuntu 9.10 Karmic Koala. Everything works fine, but after a server reboot I have to start erlyvideo manually with /etc/init.d/erlyvideo start. I have already tried update-rc.d, and I think erlyvideo should start automatically by default. Any help will be appreciated. Here is the erlyvideo startup script located in /etc/init.d/erlyvideo:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          erlyvideo
        # Required-Start:    $local_fs $network
        # Required-Stop:     $local_fs $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts the erlyvideo streaming server
        # Description:       starts the erlyvideo using erlang system
        ### END INIT INFO

        case "$1" in
          start)        cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          stop)         cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          restart)      cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          soft-restart) cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          upgrade)      cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          reconfigure)  cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          reboot)       cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          ping)         cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          console)      cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          attach)       cd /opt/erlyvideo && ./bin/erlyvideo "$1" ;;
          attach-erl)   cd /opt/erlyvideo && ./erts-5.8.4/bin/erl -name [email protected] -remsh [email protected] ;;
          *)            echo $"Usage: $0 {start|stop|restart|soft-restart|upgrade|reboot|ping|console|attach}"
                        exit 1
        esac
        exit 0

    And I have found S91erlyvideo in /etc/rc2.d, next to S91apache2, which starts just fine on every reboot.
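
    Given that the S91erlyvideo link already exists, a few quick checks on the script itself and on what actually happens at boot may narrow it down; a sketch using the standard sysv-rc tooling on that release:

        # the init script must be executable for the rc link to do anything
        ls -l /etc/init.d/erlyvideo && sudo chmod +x /etc/init.d/erlyvideo
        # re-register the links from scratch at the same priority
        sudo update-rc.d -f erlyvideo remove
        sudo update-rc.d erlyvideo defaults 91
        # after the next reboot, see whether the start attempt left anything in the logs
        grep -i erlyvideo /var/log/boot /var/log/syslog 2>/dev/null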

    Read the article

  • Why is sql server giving a conversion error when submitting date.today to a datetime column?

    - by kpierce8
    I am getting a conversion error every time I try to submit a date value to SQL Server. The column in SQL Server is a datetime and in VB I'm using Date.Today to pass to my parameterized query. I keep getting a SQL exception: Conversion failed when converting datetime from character string. Here's the code:

        Public Sub ResetOrder(ByVal connectionString As String)
            Dim strSQL As String
            Dim cn As New SqlConnection(connectionString)
            cn.Open()

            strSQL = "DELETE Tasks WHERE ProjID = @ProjectID"
            Dim cmd As New SqlCommand(strSQL, cn)
            cmd.Parameters.AddWithValue("ProjectID", 5)
            cmd.ExecuteNonQuery()

            strSQL = "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES " & _
                     " (@ProjID, @TaskName, @DueDate)"
            Dim cmd2 As New SqlCommand(strSQL, cn)
            cmd2.CommandText = strSQL
            cmd2.Parameters.AddWithValue("ProjID", 5)
            cmd2.Parameters.AddWithValue("DueDate", Date.Today)
            cmd2.Parameters.AddWithValue("TaskName", "bob")
            cmd2.ExecuteNonQuery()
            cn.Close()

            DataGridView1.DataSource = ds.Projects
            DataGridView2.DataSource = ds.Tasks
        End Sub

    Any thoughts would be greatly appreciated.
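
    For what it's worth, note that the column list in the INSERT above is (ProjID, DueDate, TaskName) while the VALUES list is (@ProjID, @TaskName, @DueDate), so the string "bob" lines up with the datetime column. Reproducing that mismatch outside the application shows the same error; a sketch against a hypothetical server and database, with literal stand-ins for the parameters:

        sqlcmd -S myserver -E -d MyDb -Q "DECLARE @t TABLE (ProjID int, DueDate datetime, TaskName varchar(50)); INSERT INTO @t (ProjID, DueDate, TaskName) VALUES (5, 'bob', '2010-05-06');"
        # -> Conversion failed when converting datetime from character string.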

    Read the article
