Search Results

Search found 87589 results on 3504 pages for 'veritas cluster server'.


  • Problem inserting in two different tables [closed]

    - by imvarunkmr
    I have written an insert statement which inserts a record into Table1. Table1 has a column "ID" which is an auto_increment (Identity) primary key. How can I fetch the newly generated "ID", since I need to insert this value as a foreign key in Table2? Note: the INSERT statement is in a stored procedure, and I am calling this procedure using C#. Alternative suggestions to link both tables are also welcomed :)
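    A minimal sketch of one common approach, assuming SQL Server (the question mentions Identity and C#): capture the new key with SCOPE_IDENTITY() inside the procedure and reuse it for the child insert. Table and column names here are illustrative, not taken from the question.

        -- Hypothetical procedure: inserts the parent row, then the child row
        CREATE PROCEDURE dbo.InsertParentAndChild
            @Name   NVARCHAR(100),
            @Detail NVARCHAR(100)
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @NewId INT;

            INSERT INTO Table1 (Name) VALUES (@Name);
            SET @NewId = SCOPE_IDENTITY();   -- identity generated by the INSERT above, in this scope

            INSERT INTO Table2 (Table1ID, Detail) VALUES (@NewId, @Detail);
        END
        -- On MySQL the equivalent call would be LAST_INSERT_ID().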

    Read the article

  • [Japanese post: Oracle OTN seminar; original title mis-encoded]

    - by Yusuke.Yamamoto
    [The body of this post was mis-encoded and could not be recovered. From the surviving fragments it was a Japanese OTN seminar announcement dated 2010/12/01 covering Oracle Enterprise Manager 11g and Oracle VM management.] Slides: http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/20101201_EM_VM_Managementv2.pdf

    Read the article

  • Using unixODBC to connect to Oracle server

    - by Paul
    I am trying to configure our web server (RHEL 5.4 x86) to connect to an Oracle database using unixODBC. I have installed unixODBC-2.2.11-7.1.1, which yum tells me is the latest version. I have also installed the Oracle InstantClient 11.2 and the Oracle InstantClient ODBC library. I have symlinked all the .so files in /usr/lib/oracle/11.2/client/lib to /usr/lib. I have set $LD_LIBRARY_PATH to /usr/lib/, $ORACLE_HOME to /usr/lib/oracle and $TNS_ADMIN to the directory containing my (valid) tnsnames.ora file. Here are the contents of my /etc/odbcinst.ini file: [Oracle] Description = Oracle ODBC Connection Driver = /usr/lib/libsqora.so.11.1 Setup = FileUsage = and my /etc/odbc.ini file: [Oracle] Application Attributes = T Attributes = W BatchAutocommitMode = IfAllSuccessful CloseCursor = F DisableDPM = F DisableMTS = T Driver = Oracle EXECSchemaOpt = EXECSyntax = T Failover = T FailoverDelay = 10 FailoverRetryCount = 10 FetchBufferSize = 64000 ForceWCHAR = F Lobs = T Longs = T MetadataIdDefault = F QueryTimeout = T ResultSets = T ServerName = //<host>:<port>/<db> SQLGetData extensions = F Translation DLL = Translation Option = 0 UserID = (ServerName has been edited...host, port, and db are actually there, and correct) When I run isql I get $ isql -v Oracle isql: symbol lookup error: /usr/lib/libsqora.so.11.1: undefined symbol: SQLGetPrivateProfileStringW And running dltest gives me $ dltest Oracle SQLConnect [dltest] ERROR dlopen: Oracle: cannot open shared object file: No such file or directory If anyone has any insights I would be grateful, I've been trying to get this to connect for about 5 hours now... I am going home for the night, but will gladly provide more details, if necessary, tomorrow morning, to anyone willing to help...
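    A few diagnostic commands that may help narrow this down (a sketch, not a verified fix): SQLGetPrivateProfileStringW normally lives in unixODBC's libodbcinst, so the first question is whether the Oracle driver can actually resolve that library at load time.

        # Does the Oracle ODBC driver resolve all of its shared-library dependencies?
        ldd /usr/lib/libsqora.so.11.1 | grep "not found"

        # Is libodbcinst present where the dynamic linker can see it?
        ls -l /usr/lib/libodbcinst.so*

        # If only a versioned name exists, a symlink sometimes helps (path is an assumption):
        # ln -s /usr/lib/libodbcinst.so.1 /usr/lib/libodbcinst.so

        # Re-run the test with tracing to see which files isql actually tries to open
        strace -e open isql -v Oracle 2>&1 | grep -E "odbc|sqora"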

    Read the article

  • How to troubleshoot web server lock-up (Debian Squeeze)

    - by Ryan
    Every once in a while, my web server slows so significantly, it seems locked up. Can't SSH in, no sites being served. It's a VPS that started out as Debian 5 which I upgraded to testing (squeeze). It's a typical LAMP set-up with the sole purpose of running a couple of wordpress sites. One time when it locked up, I got to one of the sites, but it was wordpress complaining it couldn't establish a database connection. So it seemed as if something was really chewing up the CPU and mysqld either timed out, or possibly failed and couldn't restart. But since I couldn't SSH in I feel more inclined to attribute it to CPU. But the only processes running now, aside from OS and kernel stuff: apache mysqld python (for fail2ban) sshd exim4 It has 512M of RAM and 1.5 GB of swap. Every time I check on it, it has plenty of free memory and is using virtually no swap (usually 2-3M). And since I am running fail2ban I don't think I'm getting ddosed. I did find this in my logwatch email this morning (it locked up late last night, when there would have been very little traffic): 6 Time(s): [<ffffffff810a0ebc>] ? oom_kill_process+0x7e/0x23d 6 Time(s): [<ffffffff810a1505>] ? __out_of_memory+0x12a/0x141 6 Time(s): [<ffffffff810a1586>] ? out_of_memory+0x6a/0x94 I didn't find anything else suspicious. It can't be my provider's host because I can SSH in and restart the VM, and everything seems fine. Anybody know which logs I should start poring through to find the core of my problem? Thanks guys.
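    One hedged starting point, given the oom_kill_process traces in logwatch: confirm the OOM killer is firing and see which process it targets before the lock-up. These are generic Debian log locations; adjust paths as needed.

        # Kernel messages around the lock-up usually name the killed process and its memory use
        grep -iE "out of memory|oom-killer|killed process" /var/log/kern.log /var/log/syslog

        # MySQL's own log shows whether it was killed or crashed on its own
        grep -iE "signal|shutdown" /var/log/mysql/error.log

        # Install sysstat (sar) or run a lightweight sampler so the next event is captured:
        while true; do date; ps aux --sort=-%mem | head -n 5; sleep 60; done >> /var/log/mem-sampler.log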

    Read the article

  • Problems migrating software RAID 5 to new server (linux)

    - by leleu
    I have a CentOS setup with sw RAID5 that holds my data. Well, the server died, so I bought another box to migrate my drives to. Only thing is, I cannot get the RAID array rebuilt (not even sure it needs rebuilding, might just need the /dev/md0 mapping created... but I don't even know how to determine what I need!) Some details: RAID5 software (used mdadm) 4x 250GB drives (2 are SATA, 2 are EIDE -- would this matter? It worked fine in the other box...) latest CentOS distro built using mdadm I've got a decent amount of experience with standard linux stuff, but the hardware-level stuff runs me in circles. I've spent some time googling and elsewhere here on SF, so please be kind for my newbie questions :). My question is this: how can I diagnose the problem? For all I know, I'm using the wrong device blocks when I try to rebuild the array, but I can't find the command to display only the devices that have some physical attachment. Is there some simple way for me to run mdadm, having it scan over all my physical drives, and say "hey, drives 2,5,6,7 are a software array, want me to mount it?" I basically just took the drives from my old box and put them into my new one. They show up in the BIOS. What steps do I need to take in order to get the array up, running, and mounted? Thanks in advance!
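    A minimal sketch of the usual discovery and assembly sequence with mdadm (assuming the array members are intact; nothing here writes to the disks except the assemble step). Device names are assumptions.

        # List the block devices the kernel sees
        cat /proc/partitions

        # Ask mdadm which devices carry RAID superblocks and which array they belong to
        mdadm --examine --scan
        mdadm --examine /dev/sd[abcd]1

        # Assemble the array from whatever members are found, then check its state
        mdadm --assemble --scan
        cat /proc/mdstat

        # If assembly succeeds, mount read-only first
        mount -o ro /dev/md0 /mnt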

    Read the article

  • How do I make my internal dns forward requests to a given server

    - by ankimal
    We have a DNS server internally that looks up IP addresses for all internal hosts and connects to root DNS servers for all other domains (the rest of the internet). Here is my config: options { listen-on port 53 { 127.0.0.1;any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query {192.168.1.0/24; 127.0.0.1; }; recursion yes; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; view “internal” { // What the home network will see match-clients { 127.0.0.1;any; }; match-destinations { 127.0.0.1;any; }; recursion yes; zone "." IN { type hint; file "named.ca"; }; include "internal_zones.conf"; }; We need to tweak this to go to our ISP's DNS, x.y.z.w, instead of the root DNS servers if the host cannot be resolved internally. Config: Fedora 10/Bind 9.5.2
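    A hedged sketch of the usual way to do this in BIND: declare the ISP's resolver as a forwarder and choose between "forward first" (try the forwarder, fall back to iteration) and "forward only" (never contact the roots). The address below is the placeholder from the question; when views are in use the forwarders can also be set per view.

        options {
            ...
            forwarders { x.y.z.w; };   // ISP resolver
            forward first;             // or "forward only" to never consult the root hints
        };

    With "forward only" the "." hint zone in the internal view is effectively unused; with "forward first" it remains the fallback if the ISP resolver fails.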

    Read the article

  • Per client DNS server assignment using Pfsense

    - by Trix
    I have a network where pfsense is the gateway. There are two sets of clients that I want: one where there will be some restrictions on the network (for example, IM being blocked) and one where there are no restrictions. One easy way I thought about doing this was assigning the two sets of clients different DNS servers. One set could use OpenDNS, the other could use Google's Public DNS. The set with OpenDNS would have the filter options on (using OpenDNS' dashboard, I can check block IM .... so I do not need to manually block login.oscar.aol.com, meebo.com, gmail chat ....etc). So the problem is the DHCP server looks like it will only assign a single set of DNS servers to clients. Is there a way to set a per-client assignment? Is there a better way to achieve what I want? This is just a small home network. I do not need anything fancy, but I do need this functionality in one way or another.
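    For illustration only: in raw ISC dhcpd terms (which is the concept underneath per-host DHCP options; whether your pfsense version exposes per-static-mapping DNS servers in the GUI is an assumption to verify), the idea looks like this:

        # Unrestricted clients get Google Public DNS by default
        option domain-name-servers 8.8.8.8, 8.8.4.4;

        # A restricted client, pinned by MAC address, gets OpenDNS instead
        host kids-laptop {
            hardware ethernet 00:11:22:33:44:55;      # placeholder MAC
            option domain-name-servers 208.67.222.222, 208.67.220.220;
        }

    Note that nothing stops a client from manually overriding its DNS servers, so this is usually paired with a firewall rule forcing outbound port 53 to the assigned resolver.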

    Read the article

  • Exim log and send all mail for a given domain through another server

    - by Josh
    I administer a handful of shared web hosting servers. Recently, Yahoo has been deprioritizing/greylisting all email sent from these servers. I am getting the dreaded 421 4.7.0 [TS02] Messages from my.ip.address temporarily deferred message from Yahoo and their postmaster has been unresponsive. I am unable to find any way to set up a feedback loop like AOL has for my IP address -- I did find a way to set up a feedback loop for a given domain, but we host hundreds of domains, and don't have the time to set up that many feedback loops. So what I'd like to do is twofold: Configure Exim to send all email destined to an @yahoo.com address to a relay, a new server which has an IP that yahoo is not blocking. Configure Exim (or maybe the relay) to log all emails sent to @yahoo.com, so I can review them and, in case one of my users is violating ToS and sending SPAM to yahoo users, take the appropriate action. How could I accomplish these? Or, does anyone have any other advice for how to get mail to flow through to Yahoo and ensure that any email generating complaints is brought to my attention? (For what it's worth, these servers are not listed on any major blacklists)
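    A hedged sketch of the routing half in Exim (names are illustrative; the router needs to sit before the normal dnslookup router so it wins for yahoo.com):

        # routers section: send yahoo.com mail through the clean relay
        yahoo_relay:
          driver = manualroute
          domains = yahoo.com : *.yahoo.com
          route_list = * relay.example.com        # hypothetical relay host
          transport = remote_smtp
          no_more

    For the logging half, Exim's log_selector (for example "+subject +sender_on_delivery") enriches the main log, which can then be grepped for deliveries to @yahoo.com; capturing full message bodies would need something heavier such as an "unseen" BCC router, so treat that part as a design sketch rather than a recipe.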

    Read the article

  • How to get html/css/jpg pages served by both apache & tomcat with mod_jk

    - by user53864
    I have apache2 and tomcat6 both running on port 80 with mod_jk set up on Ubuntu servers. I had to set up an error document (ErrorDocument 503 /maintenance.html) in the apache configuration and somehow I managed to get it to work, and the error page is served by apache when tomcat is stopped. Developers created a good-looking error page (an html page which calls css and jpg) and I'm asked to get this page served by apache when tomcat is down. When I tried with JkUnMount /*.css in the virtual hosting, the actual tomcat jsp pages didn't work properly (they lost their formatting), as the tomcat applications use jsp, css, js, jpg and so on. I'm trying to see if it is possible to get .css and .jpg served by both apache and tomcat, so that when tomcat is down I'll get css and jpg served by apache and the proper error document is served. Does anyone have a technique? Here is my apache2 configuration: vim /etc/apache2/apache2.conf Alias / /var/www/ ErrorDocument 503 /maintenance.html ErrorDocument 404 /maintenance.html JkMount / myworker JkMount /* myworker JkMount /*.jsp myworker JkUnMount /*.html myworker <VirtualHost *:80> ServerName station1.mydomain.com DocumentRoot /usr/share/tomcat/webapps/myapps1 JkMount /* myworker JkUnMount /*.html myworker </VirtualHost> <VirtualHost *:80> ServerName station2.mydomain.com DocumentRoot /usr/share/tomcat/webapps/myapps2 JkMount /* myworker JkMount /*.html myworker </VirtualHost>
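    One hedged way around the conflict: keep all application assets going through mod_jk, and give the maintenance page its own directory that is never mounted on the worker, so apache can always serve it. Paths below are assumptions.

        # /var/www/maintenance/ holds maintenance.html plus its css/jpg
        Alias /maintenance /var/www/maintenance
        ErrorDocument 503 /maintenance/maintenance.html

        <VirtualHost *:80>
            ServerName station1.mydomain.com
            DocumentRoot /usr/share/tomcat/webapps/myapps1
            JkMount /* myworker
            JkUnMount /maintenance/* myworker     # apache, not tomcat, serves these
        </VirtualHost>

    The application's own css/js/jpg still go to tomcat, so the JSP pages keep their styling, while the error page's assets live outside tomcat entirely.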

    Read the article

  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed a new application and put it on an existing production system, but it would appear to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it there's a mirror of the SAN being taken usually constantly at the block level. However, there seem to be so many new writes to the disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which exists on a drive that contributes the largest portion of the problem! In fact I think tempdb has been contributing the largest portion of the issues all along, regardless of my application! My question therefore is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already? I'm wondering whether it's a best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question - is it better to rely on SQL Server's built-in database backup tools (DB in full-recovery mode with full/differential and transaction log backups) or, as is the case with our application, to leave SQL Server in simple recovery mode and never backed up, since the SAN is mirrored and backed up? Many thanks
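    If the eventual decision is to keep tempdb off the mirrored LUN, the mechanics are roughly the sketch below (the drive letter is an assumption; the move takes effect after a SQL Server restart, and tempdb is recreated empty at every startup anyway, which is why backing it up buys nothing):

        -- Point tempdb's files at a non-replicated volume
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');

        -- Verify after restarting the instance
        SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('tempdb');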

    Read the article

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular time drift of several minutes a day forward (approx 7). Linux Kernel: 2.6.18-164.11.1.el5 GNU/Linux Distro: CentOS release 5.4 (Final) We reached out to our hosting provider and their support advised us " This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at: /boot/grub/menu.lst The line you need to add is: noapic nolapic divider=10 nolapic_timer This should correct this issue. You will need to restart after this is added in. " Because I am wary of manipulating grub (mostly I'm terrified that our server may fail to restart), I ask you guys, the pro *nix admins - where exactly in this file does the recommended insertion below: # line from 1&1 for time syncing issue (Case 5163) noapic nolapic divider=10 nolapic_timer go? Please specify where exactly, and whether the order of commands is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works or point me to a resource that's easy to follow, that's what I'm looking for immediately, a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, that's great; well-recommended in-depth resources are also very welcome. This is my current /boot/grub/menu.lst # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file #boot=/dev/sda # serial --unit=0 --speed=57600 terminal --timeout=5 serial console timeout=5 title CentOS (2.6.18-164.11.1.el5) root (hd0,0) kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty initrd /boot/initrd-2.6.18-164.11.1.el5.img MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line so I can confidently restart my VPS after manipulating GRUB config
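    For what it's worth, kernel boot parameters like these go at the end of the existing "kernel" line of the boot entry, not on a line of their own; the indentation under "title" is purely cosmetic, and the relative order of the appended flags does not normally matter. A sketch of the edited stanza (same file, only the kernel line changes):

        title CentOS (2.6.18-164.11.1.el5)
                root (hd0,0)
                # line from 1&1 for time syncing issue (Case 5163)
                kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
                initrd /boot/initrd-2.6.18-164.11.1.el5.img

    Comments in menu.lst start with "#", so the provider's comment can be kept as shown. Whether these particular flags actually cure the drift is the provider's claim, not something verified here.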

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to export data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has been run at other times in the day since, to monitor memory usage and see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise. No noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near constant 25% to 98% and very quickly kills mysql to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of the day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
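    A hedged way to catch the culprit in the act, since the spike is predictable (~9:20am): sample the top memory consumers around that window and keep the kernel's own verdict afterwards. Paths and times are assumptions.

        # /etc/cron.d/mem-watch: every minute during the 9 o'clock hour, log the top memory users
        # (note the escaped % signs, which are special characters in crontab lines)
        * 9 * * * root (date; ps aux --sort=-\%mem | head -n 15) >> /var/log/mem-watch.log 2>&1

        # After the next incident, the kernel log shows what the OOM killer chose and why
        grep -iE "oom|out of memory|killed process" /var/log/syslog /var/log/kern.log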

    Read the article

  • DNS Server on Fedora 11

    - by Funky Si
    I recently upgraded my Fedora 10 server to Fedora 11 and am getting the following error from named. named[27685]: not insecure resolving 'fedoraproject.org/A/IN: 212.104.130.65#53 This only shows for certain addresses; some are resolved fine and I can ping and browse to them, while others produce the error above. This is my named.conf file: acl trusted-servers { 192.168.1.10; }; options { directory "/var/named"; forwarders {212.104.130.9 ; 212.104.130.65; }; forward only; allow-transfer { 127.0.0.1; }; # dnssec-enable yes; # dnssec-validation yes; # dnssec-lookaside . trust-anchor dlv.isc.org.; }; # Forward Zone for hughes.lan domain zone "funkygoth" IN { type master; file "funkygoth.zone"; allow-transfer { trusted-servers; }; }; # Reverse Zone for hughes.lan domain zone "1.168.192.in-addr.arpa" IN { type master; file "1.168.192.zone"; }; include "/etc/named.dnssec.keys"; include "/etc/pki/dnssec-keys/dlv/dlv.isc.org.conf"; include "/etc/pki/dnssec-keys//named.dnssec.keys"; include "/etc/pki/dnssec-keys//dlv/dlv.isc.org.conf"; Does anyone know what I have set wrong here?
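    A hedged angle to confirm: the "not insecure resolving" message is associated with DNSSEC/DLV validation, and even with the dnssec-* options commented out the active include lines still pull in DNSSEC key material, so the validator may still be in play on the Fedora 11 BIND (an assumption worth verifying). Explicitly turning validation off, temporarily, tells you whether that is really the cause; re-enable it once the key/lookaside configuration is made consistent.

        options {
            ...
            forwarders { 212.104.130.9; 212.104.130.65; };
            forward only;
            dnssec-enable yes;
            dnssec-validation no;   // diagnostic only: if the errors stop, validation was the issue
        };
        // and comment out the dnssec-keys / dlv.isc.org includes while testing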

    Read the article

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
    Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email. This version of Rails or later does require using tls for sending emails. The message in the production log file says: hostname was not match with the server certificate I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com. If I run the command 'hostname' at the CL I get: ohlalaweb.com Postfix seems to work fine. I can send emails from the CL to my gmail, yahoo, and google apps gmail accounts with no problems. Here is the result of cat /etc/postfix/main.cf # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem smtpd_use_tls=yes # SA created next line to force postfix to use self create certificate smtpd_tls_auth_only=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = ohlalaweb.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases mydestination = localhost.localdomain, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all I have regenerated the ssl keys with the ohlalaweb.com host name. Any ideas or suggestions?
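    Since the "hostname was not match with the server certificate" error is raised by the Ruby TLS handshake rather than by postfix itself, one hedged angle is to make sure ActionMailer connects using exactly the name in the certificate. The settings below are a sketch for Rails 2.2/2.3-era ActionMailer; the key names should be checked against the app's Rails version.

        # config/environments/production.rb (illustrative)
        ActionMailer::Base.smtp_settings = {
          :address              => "ohlalaweb.com",  # must match the certificate's CN
          :port                 => 25,
          :domain               => "ohlalaweb.com",
          :enable_starttls_auto => true
        }

    The other half of the check is the certificate itself: "openssl s_client -starttls smtp -connect ohlalaweb.com:25" shows the CN/subject the server actually presents during STARTTLS.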

    Read the article

  • Unable to connect remotely to Vsftpd server set up on CentOS VirtualBox

    - by ryekayo
    I have set up a Vsftp server using the following instructions provided Here and even went as far as following the commentary at the bottom. But I am unable to connect remotely. When I attempt to use FileZilla or my Ubuntu terminal, I always get: ryan@ryan-Galago-UltraPro:~$ ftp 10.0.x.xx ftp: connect: Connection timed out ftp> I have checked and re-checked iptables conf file and made sure that Port 21 is being Accepted and it is. I have looked this up on the web and decided to try nmap to port scan it and this is what I get for a result: ryan@ryan-Galago-UltraPro:~$ nmap -PN 10.0.xx.xx Starting Nmap 6.40 ( http://nmap.org ) at 2014-08-19 15:01 EDT Nmap scan report for 10.0.xx.xx Host is up. All 1000 scanned ports on 10.0.xx.xx are filtered Nmap done: 1 IP address (1 host up) scanned in 201.38 seconds Is there anything else that I should do or check for? UPDATE: I have tried to ping from the virtual machine to my IP address on Ubuntu and have been successfully able to. I cannot ping to my virtual machine from Ubuntu. I have narrowed this down to possibly being a firewall related issue on Ubuntu's side, but why would I be unable to connect from FileZilla?
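    Given that every port shows "filtered" and even ping fails from the other side, one hedged line of investigation is the VirtualBox network mode rather than vsftpd itself: a NAT adapter is reachable outward only, so inbound FTP needs either bridged networking or explicit port forwarding. Commands are a sketch (the VM name and host interface are placeholders), run with the VM powered off:

        # Option A: put the guest on the LAN directly
        VBoxManage modifyvm "CentOS-FTP" --nic1 bridged --bridgeadapter1 eth0

        # Option B: stay on NAT but forward the FTP control port to the guest
        VBoxManage modifyvm "CentOS-FTP" --natpf1 "ftp,tcp,,21,,21"

    FTP's separate data connections mean passive mode plus a pasv_min_port/pasv_max_port range opened in the guest's iptables is usually also needed; that detail is worth revisiting once port 21 answers at all.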

    Read the article

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one. I used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local and it works; I can see the web client. The problem is, I can't SSH in to scp the files I need, mount a USB thumbdrive (usbfs is unsupported), or get samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, nat and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me samba access. Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install, so is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks,
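    A hedged first check from the VM's own console (the appliance may simply not ship an SSH or Samba daemon, which would explain why only HTTP answers); package names assume a Debian/Ubuntu-based appliance:

        # Is anything listening besides the web server?
        sudo netstat -tlnp

        # Install and start the missing services
        sudo apt-get install openssh-server samba
        sudo /etc/init.d/ssh start
        sudo /etc/init.d/samba start

    If apt-get itself fails, the exact error it prints is worth capturing first, since a broken package source on the appliance would block all of the above.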

    Read the article

  • Package upgrade on Ubuntu raid server and grub setup issue

    - by RecNes
    I have a remote Ubuntu 10.10 server running on a RAID system. I did a package upgrade last night for security reasons. During the upgrade, the grub installation screen appeared and asked me which partition I wanted to install grub on. The options were sda, sdb, md1 and md2. I decided to install it on both sda and sdb. I am wondering, did I make the right decision? If the machine gets rebooted, will it boot up safely? You can find fdisk output and fstab mount points below: Fstab: proc /proc proc defaults 0 0 none /dev/pts devpts gid=5,mode=620 0 0 /dev/md0 none swap sw 0 0 /dev/md1 /boot ext3 defaults 0 0 /dev/md2 / ext3 defaults 0 0 Fdisk: Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00029bb5 Device Boot Start End Blocks Id System /dev/sda1 1 262 2102562 fd Linux raid autodetect /dev/sda2 263 295 265072+ fd Linux raid autodetect /dev/sda3 296 91201 730202445 fd Linux raid autodetect Disk /dev/md0: 2152 MB, 2152923136 bytes 2 heads, 4 sectors/track, 525616 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Disk /dev/md1: 271 MB, 271319040 bytes 2 heads, 4 sectors/track, 66240 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md1 doesn't contain a valid partition table Disk /dev/md2: 747.7 GB, 747727224832 bytes 2 heads, 4 sectors/track, 182550592 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md2 doesn't contain a valid partition table Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00088969 Device Boot Start End Blocks Id System /dev/sdb1 1 262 2102562 fd Linux raid autodetect /dev/sdb2 263 295 265072+ fd Linux raid autodetect /dev/sdb3 296 91201 730202445 fd Linux raid autodetect
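    For reassurance rather than as a verified answer: with /boot on an md device mirrored across both disks (which the matching sda2/sdb2 partition sizes suggest), putting GRUB in the MBR of both underlying drives is the usual choice, because either disk can then boot if the other fails. A sketch of how to re-run and verify it from a shell, using the device names from the fdisk output above:

        # Install the boot loader to both members' MBRs
        grub-install /dev/sda
        grub-install /dev/sdb

        # Confirm the arrays are clean before any reboot
        cat /proc/mdstat
        mdadm --detail /dev/md1
        mdadm --detail /dev/md2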

    Read the article

  • Server hang - data loss on reboot, post mortem analysis

    - by rovangju
    A development server I'm responsible for (ext3 on raid 5 w/Debian Squeeze) froze up over the weekend and I was forced to reset it, as in unresponsive from KVM/physical keyboard access, no eth devices responding, etc. Not even the backup process ran (Figures, the one time I don't check for confirmation) So after the reset, it turns out that every trace of disk IO activity that should have happened for a period of ~24H is completely gone. The log files have a big gap in the dates and times. As if the writes were never committed to disk, no processes seemed to have run. Luckily it was a weekend and nothing of value would have been lost and I don't suspect a hack. What can I do in post mortem to this event - to prevent it from ever happening again? I've seen this happen before on a completely different machine running FreeBSD. I am rounding up the disk checking tools right now - but there must be more going on! Mount options: /dev/sda1 on / type ext3 (rw,errors=remount-ro) Kernel: Linux dev 2.6.32-5-686-bigmem Disk/Inodes: 13%/3%

    Read the article

  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way. I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form -- it's repeated hundreds of times today alone in my log files: [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks that exist that reference here, hence why googlebot is so interested in crawling them. How do I fix this gracefully? I still would like Google to index the site. I just want to tell it somehow not to look for these files anymore.
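    Two lightweight ways to tell crawlers these URLs are gone for good, either of which should quiet the log noise over time (the path comes from the log excerpt; the two snippets live in different files):

        # robots.txt: ask crawlers not to request the old script at all
        User-agent: *
        Disallow: /calendar.php

        # .htaccess (mod_alias): answer with 410 Gone so Google drops the URL from its index
        Redirect gone /calendar.php

    A 410 is generally the stronger signal that the page was intentionally removed, while robots.txt merely stops the fetch attempts without telling Google the backlinked URLs are dead.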

    Read the article

  • Samba between Ubuntu server 10.10 and Windows Vista, Windows 7

    - by chepukha
    Hi all, I have a Linux box running Ubuntu Server 10.10. I have installed Samba on this Linux box and want to share files with my laptops, which run Windows Vista Home and Windows 7 Home. I have been struggling with the setup for almost a month but couldn't get it right. If I try to access a shared folder from Windows Vista, I get the message "Windows cannot access \\server_ip_address". Error code: 0x80070035. The network path was not found. If I access from Windows 7, then after entering the password to log in I can see the list of shared folders on the Linux box. But if I click on a shared folder, I get the same error message as above. Tailing /var/log/samba/log.windows7-pc, I got the following message: [2011/03/16 00:17:41.427238, 0] smbd/service.c:988(make_connection_snum) canonicalize_connect_path failed for service sharemedia, path /root/sharemedia Here are my settings in smb.conf: [global] share modes = yes netbios name = Samba workgroup = WORKGROUP wins support = yes encrypt passwords = true [sharemedia] comment = Tesing sharing using Samba path=/root/sharemedia/ public = yes valid users = samba_usr_name ; make sure all files are sensible permissions create mask = 0660 force create mask = 0660 directory mask = 2770 force directory mask = 2770 directory security mask = 0000 ; Normal share parameters read only = no browseable = yes writable = yes guest ok = no
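    The log line is the biggest clue: canonicalize_connect_path fails for /root/sharemedia, which smbd (acting as the connecting user) normally cannot traverse because /root is mode 700. A hedged fix is to move the share somewhere world-traversable; the paths below are assumptions.

        sudo mkdir -p /srv/sharemedia
        sudo cp -a /root/sharemedia/. /srv/sharemedia/
        sudo chown -R samba_usr_name:samba_usr_name /srv/sharemedia
        sudo chmod 2770 /srv/sharemedia

        # smb.conf: point the share at the new location, then validate and restart
        #   path = /srv/sharemedia
        sudo testparm
        sudo /etc/init.d/samba restart    # or "service smbd restart", depending on the packaging

    The 0x80070035 from Vista can also be plain name resolution; testing with \\192.168.x.x\sharemedia instead of the NetBIOS name separates that from the permission problem.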

    Read the article

  • rsync over ssh backup failing after relocation of server

    - by OlduvaiHand
    I've got two FreeBSD machines set up; one serves video data and the other is the backup for the first. At this point I've got around 4TB of data. I add files to the video server a few at a time, and was planning to use rsync over ssh to keep the backup machine up to date. I did the initial, large backup with both machines hooked up to the same subnet at the lab with no problems using rsync. Then, when I moved the backup machine off-site (but still on the university network), I attempted a sync without changing anything other than the IP (as the machine is now on a different subnet) and got the following error: 2010/03/22 15:55:21 [1260] rsync: connection unexpectedly closed (6340840244 bytes received so far) [receiver] 2010/03/22 15:55:21 [1260] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [receiver=3.0.7] 2010/03/22 15:55:21 [1258] rsync: connection unexpectedly closed (60 bytes received so far) [generator] 2010/03/22 15:55:21 [1258] rsync error: unexplained error (code 255) at io.c(601) [generator=3.0.7] The script that handles the backup hasn't been changed, nor has the crontab that invokes it. Does anyone have any ideas about what might be causing the hiccup? I was under the impression that it might have something to do with the ssh connection timing out or something along those lines, but am not entirely clear on how to diagnose the cause of the problem.
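    Since nothing changed except the network path, a hedged first step is to rule out a long-lived connection being dropped somewhere between the subnets, and to make the transfer resumable so a drop costs less. Host and paths below are placeholders, not taken from the script.

        rsync -av --partial --timeout=300 \
              -e "ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=6" \
              /data/video/ backup-host:/data/video/

    --partial keeps partially transferred files across retries, --timeout makes rsync fail fast instead of hanging, and the ServerAlive options keep the ssh session from being silently reaped by a stateful firewall between the two subnets.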

    Read the article

  • Rails 3 passenger nginx application spawner server error on Synology NAS

    - by peresleguine
    Question updated, please read UPD2. I'm trying to deploy an app through the passenger nginx module on a DS710+ (ruby 1.9.2p0 installed). There is a syntax error related to the has_and_belongs_to_many_association.rb file. Please look at the screenshot (deleted, question updated). I'm pretty sure the problem isn't in the library file. The app runs fine via WEBrick. Could you please advise what to look for? UPD1 ruby -v ruby 1.9.2p0 (2010-08-18 revision 29036) [i686-linux] gem list -d passenger *** LOCAL GEMS *** passenger (3.0.6) Author: Phusion - http://www.phusion.nl/ Rubyforge: http://rubyforge.org/projects/passenger Homepage: http://www.modrails.com/ Installed at: /usr/lib/ruby/gems/1.9.1 Easy and robust Ruby web application deployment UPD2 I've decided to reinstall everything. It solved the previous problem but caused another one. The error is: The application spawner server exited unexpectedly: Unexpected end-of-file detected. Here is a screenshot. New output: ruby -v ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux] gem list -d passenger *** LOCAL GEMS *** passenger (3.0.7) Author: Phusion - http://www.phusion.nl/ Rubyforge: http://rubyforge.org/projects/passenger Homepage: http://www.modrails.com/ Installed at: /usr/lib/ruby/gems/1.9.1 Nginx error.log: [ pid=5653 thr=32771 file=ext/common/Watchdog.cpp:128 time=2011-04-20 14:08:34.505 ]: waitpid() on Phusion Passenger helper agent return -1 with errno = ECHILD, falling back to kill polling [ pid=5654 thr=49156 file=ext/common/Watchdog.cpp:128 time=2011-04-20 14:08:34.506 ]: waitpid() on Phusion Passenger logging agent return -1 with errno = ECHILD, falling back to kill polling 2011/04/20 14:12:33 [notice] 7614#0: signal process started
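    One hedged thing to verify after reinstalling Ruby and Passenger: the nginx binary is built against a specific Passenger version, so it generally needs to be rebuilt (passenger-install-nginx-module) after the upgrade, and the directives in nginx.conf must point at the new installation. Paths are illustrative, not taken from the question:

        http {
            passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7;
            passenger_ruby /usr/bin/ruby;
            ...
        }

    A mismatch between the old ruby the spawner expects and the newly installed one (note the jump from i686 1.9.2p0 to x86_64 1.9.2p180 above) is a plausible cause of the spawner exiting with an unexpected end-of-file, though that is an assumption to confirm from the full Passenger log.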

    Read the article

  • Managing a test iSCSI target server

    - by dyasny
    Hi all, I am using a RHEL server with a few hard drives, and tgtd as the iscsi target software. I am looking for a way to allocate and deallocate space, and targets with that space, without restarting my system or harming other LUNs. Currently, all my HDDs are PVs in a single VG, and I lvcreate/lvremove as required, and then export the allocated LVs using a tgt script: usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2001-04.com.lab.gss:300gb /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_300Gb /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=2 --targetname iqn.2001-04.com.lab.gss:200gb /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_200Gb /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=3 --targetname iqn.2001-04.com.lab.gss:100gb /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_100Gb /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL tgtadm --mode target --op show So in order to remove a LUN, I stop the tgtd service, lvremove the LV, and remove the entry from the iscsi target script. When I add a LUN, I run lvcreate, and then add an entry to the script and run it. This is not quite optimal, since restarting the service is a bad idea while other LUNs are busy, so I am looking for a more scalable and safer way. Thanks
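    tgtadm can tear LUNs and targets down at runtime as well as create them, so restarting tgtd should not be necessary. A hedged sketch of the removal side, the mirror image of the creation commands already in the script (tid and LV name are examples from above):

        # Stop new initiator logins to target 2, then drop its LUN and the target itself
        tgtadm --lld iscsi --op unbind --mode target --tid 2 -I ALL
        tgtadm --lld iscsi --op delete --mode logicalunit --tid 2 --lun 1
        tgtadm --lld iscsi --op delete --mode target --tid 2

        # Only then release the backing store
        lvremove /dev/mapper/iscsi_vg-iscsi_200Gb

    Deletion fails while an initiator still has a session open, which acts as a safety net; "tgtadm --mode target --op show" confirms the state before and after.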

    Read the article

  • Issue in setting up VPN connection (IKEv1) using android (ICS vpn client) with Strongswan 4.5.0 server

    - by Kushagra Bhatnagar
    I am facing issues in setting up VPN connection(IKEv1) using android (ICS vpn client) and Strongswan 4.5.0 server. Below is the set up: Strongswan server is running on ubuntu linux machine which is connected to some wifi hotspot. Using the steps in this guide link, I generated CA, server and client certificate. Once certificates are generated, following (clientCert.p12 and caCert.pem) are sent to mobile via mail and installed on android device. Below are the ip addresses assigned to various interfaces Linux server wlan0 interface ip where server is running: 192.168.43.212, android device eth0 interface ip address: 192.168.43.62; Android device is also attached with the same wifi hotspot. On the Android device, I uses IPsec Xauth RSA option for setting up VPN authentication configuration. I am using the following ipsec.conf configuration: # basic configuration config setup plutodebug=all # crlcheckinterval=600 # strictcrlpolicy=yes # cachecrls=yes nat_traversal=yes # charonstart=yes plutostart=yes # Add connections here. # Sample VPN connections conn ios1 keyexchange=ikev1 authby=xauthrsasig xauth=server left=%defaultroute leftsubnet=0.0.0.0/0 leftfirewall=yes leftcert=serverCert.pem right=192.168.43.62 rightsubnet=10.0.0.0/24 rightsourceip=10.0.0.2 rightcert=clientCert.pem pfs=no auto=add      With the above configurations when I enable VPN on android device, VPN connection is not successful and it gets timed out in Authentication phase. I ran wireshark on both the android device and strongswan server, from the tcpdump below are the observations. Initially Identity Protection (Main mode) exchanges happens between device and server and all are successful. After all successful Identity Protection (Main mode) exchanges server is sending Transaction (Config mode) to device. In reply android device is sending Informational message instead of Transaction (Config mode) message. Further server is keep on sending Transaction (Config mode) message and device is again sending Identity Protection (Main mode) messages. Finally timeout happens and connection fails. I also capture Strongswan server logs and below are the snippets from the server logs which also verifies the same(described above). Apr 27 21:09:40 Linux pluto[12105]: | **parse ISAKMP Message: Apr 27 21:09:40 Linux pluto[12105]: | initiator cookie: Apr 27 21:09:40 Linux pluto[12105]: | 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | responder cookie: Apr 27 21:09:40 Linux pluto[12105]: | 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | next payload type: ISAKMP_NEXT_HASH Apr 27 21:09:40 Linux pluto[12105]: | ISAKMP version: ISAKMP Version 1.0 Apr 27 21:09:40 Linux pluto[12105]: | exchange type: ISAKMP_XCHG_INFO Apr 27 21:09:40 Linux pluto[12105]: | flags: ISAKMP_FLAG_ENCRYPTION Apr 27 21:09:40 Linux pluto[12105]: | message ID: a2 80 ad 82 Apr 27 21:09:40 Linux pluto[12105]: | length: 92 Apr 27 21:09:40 Linux pluto[12105]: | ICOOKIE: 06 fd 61 b8 86 82 df ed Apr 27 21:09:40 Linux pluto[12105]: | RCOOKIE: 73 7a af 76 74 f0 39 8b Apr 27 21:09:40 Linux pluto[12105]: | peer: c0 a8 2b 3e Apr 27 21:09:40 Linux pluto[12105]: | state hash entry 25 Apr 27 21:09:40 Linux pluto[12105]: | state object not found Apr 27 21:09:40 Linux pluto[12105]: packet from 192.168.43.62:500: Informational Exchange is for an unknown (expired?) SA Apr 27 21:09:40 Linux pluto[12105]: | next event EVENT_RETRANSMIT in 10 seconds for #9 Can anyone please provide update on this issue. 
    Why does the VPN connection time out, and why are the ISAKMP exchanges between Android and the strongswan server not completing properly?
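    A hedged observation on the exchange described above: with xauth=server, the Transaction (Config mode) packets the server keeps sending are its XAUTH username/password request, and an Informational reply from the phone is typically the client aborting that step. The server side therefore needs XAUTH credentials defined, for example in ipsec.secrets; the entry format below is standard strongswan/pluto syntax, while the username, passphrase and key file name are placeholders that must match what is typed on the phone and what the certificate guide generated.

        # /etc/ipsec.secrets
        : RSA serverKey.pem
        vpnuser : XAUTH "a-strong-passphrase"

    After editing, "ipsec rereadsecrets" (or a restart) makes pluto pick the entry up.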

    Read the article

  • Windows service runs file locally but not on server

    - by Ben
    I created a simple Windows service in .NET which runs a file. When I run the service locally I see the file running in the task manager just fine. However, when I run the service on the server it won't run the file. I've checked the path to the file, which is fine. I also checked the permissions on the folder and file, and they are fine as well. Also there are no exceptions happening. Below is the code used to launch the process which runs the file. I posted this first on stack overflow, and some people were thinking this is a config issue, so I moved it here. Any ideas? try { // TODO: Add code here to start your service. eventLog1.WriteEntry("VirtualCameraService started"); // Create an instance of the Process class responsible for starting the new process. System.Diagnostics.Process process1 = new System.Diagnostics.Process(); // Set the directory where the file resides process1.StartInfo.WorkingDirectory = "C:\\VirtualCameraServiceSetup\\"; // Set the name of the file to be opened process1.StartInfo.FileName = "VirtualCameraServiceProject.avc"; // Start the process process1.Start(); } catch (Exception ex) { eventLog1.WriteEntry("VirtualCameraService exception - " + ex.InnerException); }
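    A hedged sketch of one likely culprit: an .avc file needs an associated application to open it, and a service runs in an isolated session (Session 0 on Vista/2008 and later) under an account that may have no such file association and no interactive desktop, so ShellExecute-style launching can silently go nowhere. Starting the handling executable explicitly, and logging the outcome, at least makes the failure visible; the player path below is a hypothetical placeholder.

        var psi = new System.Diagnostics.ProcessStartInfo
        {
            FileName = @"C:\Program Files\SomeVendor\AvcPlayer.exe",   // hypothetical handler for .avc files
            Arguments = "\"C:\\VirtualCameraServiceSetup\\VirtualCameraServiceProject.avc\"",
            UseShellExecute = false,        // don't rely on per-user file associations
            WorkingDirectory = @"C:\VirtualCameraServiceSetup\"
        };
        var process1 = System.Diagnostics.Process.Start(psi);
        eventLog1.WriteEntry(process1 == null
            ? "Process.Start returned null"
            : "Started PID " + process1.Id);

    Even then, anything with a UI will not be visible to logged-on users from a service; if the goal is a visible player, a scheduled task or a per-user auto-start application is usually the better fit.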

    Read the article
