Search Results

Search found 7592 results on 304 pages for 'x dev'.

  • Translate parse_git_branch function to zsh from bash (for prompt)

    - by yar
    I am using this function in Bash function parse_git_branch { git_status="$(git status 2> /dev/null)" pattern="^# On branch ([^${IFS}]*)" if [[ ! ${git_status}} =~ "working directory clean" ]]; then state="*" fi # add an else if or two here if you want to get more specific if [[ ${git_status} =~ ${pattern} ]]; then branch=${BASH_REMATCH[1]} echo "(${branch}${state})" fi } but I'm determined to use zsh. While I can use this perfectly as a shell script (even without a shebang) in my .zshrc the error is a parse error on this line if [[ ! ${git_status}}... What do I need to do to get it ready for zshell? Edit: The "actual error" I'm getting is " parse error near } and it refers to the line with the strange double }}, which works on Bash. Edit: Here's the final code, just for fun: parse_git_branch() { git_status="$(git status 2> /dev/null)" pattern="^# On branch ([^[:space:]]*)" if [[ ! ${git_status} =~ "working directory clean" ]]; then state="*" fi if [[ ${git_status} =~ ${pattern} ]]; then branch=${match[1]} echo "(${branch}${state})" fi } setopt PROMPT_SUBST PROMPT='$PR_GREEN%n@$PR_GREEN%m%u$PR_NO_COLOR:$PR_BLUE%2c$PR_NO_COLOR%(!.#.$)' RPROMPT='$PR_GREEN$(parse_git_branch)$PR_NO_COLOR' Thanks to everybody for your patience and help. Edit: The best answer has schooled us all: git status is porcelain (UI). Good scripting goes against GIT plumbing. Here's the final function: parse_git_branch() { in_wd="$(git rev-parse --is-inside-work-tree 2>/dev/null)" || return test "$in_wd" = true || return state='' git diff-index HEAD --quiet 2>/dev/null || state='*' branch="$(git symbolic-ref HEAD 2>/dev/null)" test -z "$branch" && branch='<detached-HEAD>' echo "(${branch#refs/heads/}${state})" } PROMPT='$PR_GREEN%n@$PR_GREEN%m%u$PR_NO_COLOR:$PR_BLUE%2c$PR_NO_COLOR%(!.#.$)' RPROMPT='$PR_GREEN$(parse_git_branch)$PR_NO_COLOR' Note that only the prompt is zsh-specific. In Bash it would be your prompt plus "\$(parse_git_branch)". This might be slower (more calls to GIT, but that's an empirical question) but it won't be broken by changes in GIT (they don't change the plumbing). And that is very important for a good script moving forward. Days Later: Ugh, it turns out that diff-index HEAD is NOT the same as checking status against working directory clean. So will this mean another plumbing call? I surely don't have time/expertise to write my own porcelain....
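    For reference, a minimal zsh-only sketch of the plumbing-based idea from the final answer (assumptions: git is on PATH, and git diff-index only reflects changes to tracked files, not untracked ones):

        # sketch: plumbing-based branch/state helper for a zsh prompt
        setopt PROMPT_SUBST                  # allow $(...) expansion inside the prompt
        parse_git_branch() {
            local branch state
            branch="$(git symbolic-ref --short HEAD 2>/dev/null)" || return
            git diff-index --quiet HEAD 2>/dev/null || state='*'
            print "(${branch}${state})"
        }
        RPROMPT='$(parse_git_branch)'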

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH # Multicast routing #options MROUTING #options PIM # Debug & DTrace options KDB # Kernel debugger related code options KDB_TRACE # Print a stack trace for a panic options KDTRACE_FRAME # amd64-only(?) options KDTRACE_HOOKS # all architectures - enable general DTrace hooks #options DDB #options DDB_CTF # all architectures - kernel ELF linker loads CTF data # Adaptive spining in lockmgr (8.x+) # See http://www.mail-archive.com/[email protected]/msg10782.html options ADAPTIVE_LOCKMGRS # UTF-8 in console (8.x+) #options TEKEN_UTF8 # FreeBSD 8.1+ # Deadlock resolver thread # For additional information see http://www.mail-archive.com/[email protected]/msg18124.html # (FYI: "resolution" is panic so use with caution) #options DEADLKRES # Increase maximum size of Raw I/O and sendfile(2) readahead #options MAXPHYS=(1024*1024) #options MAXBSIZE=(1024*1024) # For scheduler debug enable following option. # Debug will be available via `kern.sched.stats` sysctl # For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup #options SCHED_STATS If you are tuning network for maximum performance you may wish to play with ifconfig options like: # You can list all capabilities via `ifconfig -m` ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu In case you've enabled DDB in kernel config, you should edit your /etc/ddb.conf and add something like this to enable automatic reboot (and textdump as bonus): script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset And do not forget to add ddb_enable="YES" to /etc/rc.conf Since FreeBSD 9 you can select to enable/disable flowcontrol on your NIC: # See http://en.wikipedia.org/wiki/Ethernet_flow_control and # http://www.mail-archive.com/[email protected]/msg07927.html for additional info ifconfig bge0 media auto mediaopt flowcontrol PS. Also most of FreeBSD's limits can be monitored by # vmstat -z and # limits PPS. variety of network counters can be monitored via # netstat -s In FreeBSD-9 netstat's -Q option appeared, try following command to display netisr stats # netstat -Q PPPS. also see # man 7 tuning PPPPS. I wanted to thank FreeBSD community, especially author of nginx - Igor Sysoev, nginx-ru@ and FreeBSD-performance@ mailing lists for providing useful information about FreeBSD tuning. FreeBSD WIP * Whats cooking for FreeBSD 7? * Whats cooking for FreeBSD 8? * Whats cooking for FreeBSD 9? So here is the question: What tunings are you using on yours FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc with description of its' meaning (do not copy-paste from sysctl -d). Don't forget to specify server type (web, smb, gateway, etc) Let's share experience!

    Read the article

  • RHEL - NFS4: Mounted/Exported as rw, user write permission denied

    - by brendanmac
    Hello, I have nfs4 configured between a RHEL 5.3 server (charlie) and a RHEL 5.4 client (simcom1). The machines are configured to authenticate users via kerberos by a Windows Server 2008 active directory machine called "alpha." Alpha also serves as a dns and dhcp machine for the local network. I notice that when a user logs in to a RHEL machine for the first time they are issued a unique uid to that machine; The first user to log on gets 10001. So, what I see is that users between simcom1 and charlie have different UIDs. When a user does an 'ls -la' command from within an nfs4 mount I would have thought that the usernames in the owner column would indicate 'nobody' or at least the wrong user name - since UIDs are different between the machines for each user, and not all users have logged into each machine. However, the simcom1 is able to resolve usernames in an 'ls -la' executed on files residing on charlie via nfs4 correctly. Most troubling is that users are unable to write to files across the nfs mount. The server, charlie, has the root directory exported as rw. The client, simcom1, mounts the export as rw. My configurations are shown below. My question is, how do I configure the RHEL machines to allow users to write files across nfs4 that is already mounted as read/write? [root@charlie ~]# more /etc/exports / 10.100.0.0/16(rw,no_root_squash,fsid=0) [root@charlie ~]#cat /etc/sysconfig/nfs # # Define which protocol versions mountd # will advertise. The values are "no" or "yes" # with yes being the default #MOUNTD_NFS_V1="no" #MOUNTD_NFS_V2="no" #MOUNTD_NFS_V3="no" # # # Path to remote quota server. See rquotad(8) #RQUOTAD="/usr/sbin/rpc.rquotad" # Port rquotad should listen on. #RQUOTAD_PORT=875 # Optinal options passed to rquotad #RPCRQUOTADOPTS="" # # # TCP port rpc.lockd should listen on. #LOCKD_TCPPORT=32803 # UDP port rpc.lockd should listen on. #LOCKD_UDPPORT=32769 # # # Optional arguments passed to rpc.nfsd. See rpc.nfsd(8) # Turn off v2 and v3 protocol support #RPCNFSDARGS="-N 2 -N 3" # Turn off v4 protocol support #RPCNFSDARGS="-N 4" # Number of nfs server processes to be started. # The default is 8. RPCNFSDCOUNT=8 # Stop the nfsd module from being pre-loaded #NFSD_MODULE="noload" # # # Optional arguments passed to rpc.mountd. See rpc.mountd(8) #STATDARG="" #RPCMOUNTDOPTS="" # Port rpc.mountd should listen on. #MOUNTD_PORT=892 # # # Optional arguments passed to rpc.statd. See rpc.statd(8) #RPCIDMAPDARGS="" # # Set to turn on Secure NFS mounts. SECURE_NFS="no" # Optional arguments passed to rpc.gssd. See rpc.gssd(8) #RPCGSSDARGS="-vvv" # Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8) #RPCSVCGSSDARGS="-vvv" # Don't load security modules in to the kernel #SECURE_NFS_MODS="noload" # # Don't load sunrpc module. #RPCMTAB="noload" # [root@simcom1 ~]# cat /etc/fstab --start snip-- charlie:/home /usr/local/dev/charlie nfs4 rw,nosuid, 0 0 --end snip-- [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# touch file touch: cannot touch 'file': Permission denied [brendanmac@simcom1 /usr/local/dev/charlie/brendanmac]# su Password: [root@simcom1 /usr/local/dev/charlie/brendanmac]# touch file [root@simcom1 /usr/local/dev/charlie/brendanmac]# ls -la file -rw------- 1 root root 0 May 26 10:43 file Thank you for your assistance, Brendan
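    Because NFSv4 maps file owners by name through rpc.idmapd rather than by raw UID, a plausible first check (illustrative commands using standard RHEL 5 paths) is that both machines agree on the idmap domain and that the writing user resolves on the server:

        grep -i '^Domain' /etc/idmapd.conf    # must be identical on charlie and simcom1
        service rpcidmapd status              # idmapd must be running on both ends
        id brendanmac                         # compare the UID/GID mapping on each machine
        exportfs -v                           # on charlie: confirm the export options include rw
        showmount -e charlie                  # on simcom1: see what the server actually exports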

    Read the article

  • Simple Cisco ASA 5505 config issue

    - by Ben Sebborn
    I have a Cisco ASA setup with two interfaces: inside: 192.168.2.254 / 255.255.255.0 SecLevel:100 outside: 192.168.3.250 / 255.255.255.0 SecLevel: 0 I have a static route setup to allow PCs on the inside network to access the internet via a gateway on the outside interface (3.254): outside 0.0.0.0 0.0.0.0 192.168.3.254 This all works fine. I now need to be able to access a PC on the outside interface (3.253) from a PC on the inside interface on port 35300. I understand I should be able to do this with no problems, as I'm going from a higher security level to a lower one. However I can't get any connection. Do I need to set up a seperate static route? Perhaps the route above is overriding what I need to be able to do (is it routing ALL traffic through the gateway?) Any advice on how to do this would be apprecaited. I am configuring this via ASDM but the config can be seen as below: Result of the command: "show running-config" : Saved : ASA Version 8.2(5) ! hostname ciscoasa domain-name xxx.internal names name 192.168.2.201 dev.xxx.internal description Internal Dev server name 192.168.2.200 Newserver ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 shutdown ! interface Ethernet0/4 shutdown ! interface Ethernet0/5 shutdown ! interface Ethernet0/6 shutdown ! interface Ethernet0/7 shutdown ! interface Vlan1 nameif inside security-level 100 ip address 192.168.2.254 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address 192.168.3.250 255.255.255.0 ! ! time-range Workingtime periodic weekdays 9:00 to 18:00 ! ftp mode passive clock timezone GMT/BST 0 clock summer-time GMT/BDT recurring last Sun Mar 1:00 last Sun Oct 2:00 dns domain-lookup inside dns server-group DefaultDNS name-server Newserver domain-name xxx.internal same-security-traffic permit inter-interface object-group service Mysql tcp port-object eq 3306 object-group protocol TCPUDP protocol-object udp protocol-object tcp access-list inside_access_in extended permit ip any any access-list outside_access_in remark ENABLES OUTSDIE ACCESS TO DEV SERVER! 
access-list outside_access_in extended permit tcp any interface outside eq www time-range Workingtime inactive access-list outside_access_in extended permit tcp host www-1.xxx.com interface outside eq ssh access-list inside_access_in_1 extended permit tcp any any eq www access-list inside_access_in_1 extended permit tcp any any eq https access-list inside_access_in_1 remark Connect to SSH services access-list inside_access_in_1 extended permit tcp any any eq ssh access-list inside_access_in_1 remark Connect to mysql server access-list inside_access_in_1 extended permit tcp any host mysql.xxx.com object-group Mysql access-list inside_access_in_1 extended permit tcp any host mysql.xxx.com eq 3312 access-list inside_access_in_1 extended permit object-group TCPUDP host Newserver any eq domain access-list inside_access_in_1 extended permit icmp any any access-list inside_access_in_1 remark Draytek Admin access-list inside_access_in_1 extended permit tcp any 192.168.3.0 255.255.255.0 eq 4433 access-list inside_access_in_1 remark Phone System access-list inside_access_in_1 extended permit tcp any 192.168.3.0 255.255.255.0 eq 35300 log disable pager lines 24 logging enable logging asdm warnings logging from-address [email protected] logging recipient-address [email protected] level errors mtu inside 1500 mtu outside 1500 ip verify reverse-path interface inside ip verify reverse-path interface outside ipv6 access-list inside_access_ipv6_in permit tcp any any eq www ipv6 access-list inside_access_ipv6_in permit tcp any any eq https ipv6 access-list inside_access_ipv6_in permit tcp any any eq ssh ipv6 access-list inside_access_ipv6_in permit icmp6 any any icmp unreachable rate-limit 1 burst-size 1 icmp permit any outside no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface www dev.xxx.internal www netmask 255.255.255.255 static (inside,outside) tcp interface ssh dev.xxx.internal ssh netmask 255.255.255.255 access-group inside_access_in in interface inside control-plane access-group inside_access_in_1 in interface inside access-group inside_access_ipv6_in in interface inside access-group outside_access_in in interface outside route outside 0.0.0.0 0.0.0.0 192.168.3.254 10 route outside 192.168.3.252 255.255.255.255 192.168.3.252 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy aaa authentication telnet console LOCAL aaa authentication enable console LOCAL
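    One way to see exactly where the connection dies is the ASA's packet-tracer command, which simulates a flow through the ACLs, NAT and routing (the inside host 192.168.2.10 and source port 12345 below are placeholders; the destination and port are the ones from the question):

        packet-tracer input inside tcp 192.168.2.10 12345 192.168.3.253 35300 detailed
        show access-list inside_access_in_1 | include 35300
        show nat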

    Read the article

  • vconfig-created virtual interface and trunking - is the interface untagged or tagged for that VLAN ID?

    - by kce
    I am trying to setup an additional VLAN on our Debian-based router/firewall (which exists as a virtual machine on Hyper-V), our core switch (an HP Procurve 5406) and a remote HP ProCurve 2610 that is connected via a WAN Transparent Lan Service (TLS) link. Let's work backwards from the network edge: The Debian server has an external connection attached to eth0. The internal interface is eth1, which is connected directly from our Hyper-V host to the 5406. The port that eth1 is attached to is setup as Trk12. The 2610 is attached to Trk9 (which trunks a whole slew of VLANs - Trk9 is our TLS head). I can successfully ping the management IP addresses for my VLAN from both switches but I cannot ping, from either switch, the virtual interface for my new VLAN on the Debian-base router and firewall. The existing VLAN works fine. What gives? The port eth1 is attached to is a trunk, the existing VLAN (ID 98) is untagged on the trunk, the new VLAN (ID 198) is tagged. VLAN 198 is tagged on Trk9 on the 5406 and on the 2610. I can ping the other switch's management IP (10.100.198.2 and 10.100.198.3) from the other respective switch. That leg of the VLAN works - however I cannot communicate with eth1.198's 10.100.198.1. I feel like I'm missing something elementary but what it is remains illusive to me. I suspect the issue is with the vconfig created eth1.198. It should pass the tagged VLAN 198 packets correct? But they cannot seem to get any further than the 5406. Communication on the existing VLAN 98 works fine. From the Debian box: eth1: eth1 Link encap:Ethernet HWaddr 00:15:5d:34:5e:03 inet addr:10.100.0.1 Bcast:10.100.255.255 Mask:255.255.0.0 inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:12179786 errors:0 dropped:0 overruns:0 frame:0 TX packets:20210532 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1586498028 (1.4 GiB) TX bytes:26154226278 (24.3 GiB) Interrupt:9 Base address:0xec00 eth1.198: eth1.198 Link encap:Ethernet HWaddr 00:15:5d:34:5e:03 inet addr:10.100.198.1 Bcast:10.100.198.255 Mask:255.255.255.0 inet6 addr: fe80::215:5dff:fe34:5e03/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1496 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:72 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:3528 (3.4 KiB) # cat /proc/net/vlan/eth1.198: eth1.198 VID: 198 REORDER_HDR: 0 dev->priv_flags: 1 total frames received 0 total bytes received 0 Broadcast/Multicast Rcvd 0 total frames transmitted 72 total bytes transmitted 3528 total headroom inc 0 total encap on xmit 39 Device: eth1 INGRESS priority mappings: 0:0 1:0 2:0 3:0 4:0 5:0 6:0 7:0 EGRESS priority mappings: # ip route 10.100.198.0/24 dev eth1.198 proto kernel scope link src 10.100.198.1 206.174.64.0/20 dev eth0 proto kernel scope link src 206.174.66.14 10.100.0.0/16 dev eth1 proto kernel scope link src 10.100.0.1 default via 206.174.64.1 dev eth0 # iptables -L -v Chain INPUT (policy DROP 6875 packets, 637K bytes) pkts bytes target prot opt in out source destination 41 4320 ACCEPT all -- lo any anywhere anywhere 11481 1560K ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED 107 8058 ACCEPT icmp -- any any anywhere anywhere 0 0 ACCEPT tcp -- eth1 any 10.100.0.0/24 anywhere tcp dpt:ssh 701 317K ACCEPT udp -- eth1 any anywhere anywhere udp dpts:bootps:bootpc Chain FORWARD (policy DROP 1 packets, 40 bytes) pkts bytes target prot opt in out source destination 156K 25M ACCEPT 
all -- eth1 any anywhere anywhere 215K 248M ACCEPT all -- eth0 eth1 anywhere anywhere state RELATED,ESTABLISHED 0 0 ACCEPT all -- eth1.198 any anywhere anywhere 0 0 ACCEPT all -- eth0 eth1.198 anywhere anywhere state RELATED,ESTABLISHED Chain OUTPUT (policy ACCEPT 13048 packets, 1640K bytes) pkts bytes target prot opt in out source destination From the 5406: # show vlan ports trk12 detail Status and Counters - VLAN Information - for ports Trk12 VLAN ID Name | Status Voice Jumbo Mode ------- -------------------- + ---------- ----- ----- -------- 98 WIFI | Port-based No No Untagged 198 VLAN198 | Port-based No No Tagged
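    A quick way to confirm whether tagged VLAN 198 frames ever reach the VM at all - a common failure point here is the Hyper-V virtual switch dropping or stripping unfamiliar 802.1Q tags before eth1 sees them. A diagnostic sketch, using the interface names from the question:

        tcpdump -n -e -i eth1 vlan 198        # do any 802.1Q frames tagged 198 arrive on the parent?
        tcpdump -n -e -i eth1.198 icmp        # does the subinterface see the switches' pings?
        ip -d link show eth1.198              # confirm the subinterface really carries VID 198
        ping -I 10.100.198.1 10.100.198.2     # source the test from the VLAN address itself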

    Read the article

  • Rsyslog is not working properly, it does not log anything

    - by Victor Henriquez
    I'm running a Debian server and a couple of days ago my rsyslog started to behave very weird, the daemon is running but it doesn't seem to do anything. Many people use the system but I'm the only one with (legal) root access. I'm using the default rsyslogd configuration (if you think is relevant I'll attach it, but it's the one that comes with the package). After I rotated all the log files, they have remained empty: # ls -l /var/log/*.log -rw-r--r-- 1 root root 0 Jun 27 00:25 /var/log/alternatives.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/auth.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/daemon.log -rw-r--r-- 1 root root 0 Jun 27 00:25 /var/log/dpkg.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/kern.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/lpr.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/mail.log -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/user.log Any try to force a log writing does not have any effect: # logger hey # ls -l /var/log/messages -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/messages Lsof shows that rsyslogd does not have any log files opened: # lsof -p 1855 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME rsyslogd 1855 root cwd DIR 202,0 4096 2 / rsyslogd 1855 root rtd DIR 202,0 4096 2 / rsyslogd 1855 root txt REG 202,0 342076 21649 /usr/sbin/rsyslogd rsyslogd 1855 root mem REG 202,0 38556 32153 /lib/i386-linux-gnu/i686/cmov/libnss_nis-2.13.so rsyslogd 1855 root mem REG 202,0 79728 32165 /lib/i386-linux-gnu/i686/cmov/libnsl-2.13.so rsyslogd 1855 root mem REG 202,0 26456 32163 /lib/i386-linux-gnu/i686/cmov/libnss_compat-2.13.so rsyslogd 1855 root mem REG 202,0 297500 1061058 /usr/lib/rsyslog/imuxsock.so rsyslogd 1855 root mem REG 202,0 42628 32170 /lib/i386-linux-gnu/i686/cmov/libnss_files-2.13.so rsyslogd 1855 root mem REG 202,0 22784 1061106 /usr/lib/rsyslog/imklog.so rsyslogd 1855 root mem REG 202,0 1401000 32169 /lib/i386-linux-gnu/i686/cmov/libc-2.13.so rsyslogd 1855 root mem REG 202,0 30684 32175 /lib/i386-linux-gnu/i686/cmov/librt-2.13.so rsyslogd 1855 root mem REG 202,0 9844 32157 /lib/i386-linux-gnu/i686/cmov/libdl-2.13.so rsyslogd 1855 root mem REG 202,0 117009 32154 /lib/i386-linux-gnu/i686/cmov/libpthread-2.13.so rsyslogd 1855 root mem REG 202,0 79980 17746 /usr/lib/libz.so.1.2.3.4 rsyslogd 1855 root mem REG 202,0 18836 1061094 /usr/lib/rsyslog/lmnet.so rsyslogd 1855 root mem REG 202,0 117960 31845 /lib/i386-linux-gnu/ld-2.13.so rsyslogd 1855 root 0u unix 0xebe8e800 0t0 640 /dev/log rsyslogd 1855 root 3u FIFO 0,5 0t0 2474 /dev/xconsole rsyslogd 1855 root 4u unix 0xebe8e400 0t0 645 /var/spool/postfix/dev/log rsyslogd 1855 root 5r REG 0,3 0 4026532176 /proc/kmsg I was so frustrated that even reinstall the rsyslog package, but it still refuses to log anything: # apt-get remove --purge rsyslog # apt-get install rsyslog I thought someone had hacked the system, so run rkhunter, chkrootkit, unhide in an attempt to find hide processes / ports and nmap in a remote host to compare with the ports shown by netstat. And I know this doesn't mean anything, but all looks ok. The system also have an iptables firewall that is very restrictive with incoming / outgoing connections. This is driving me crazy, any idea what is going on here? [EDIT - disk space info] # df -h Filesystem Size Used Avail Use% Mounted on rootfs 24G 22G 629M 98% / /dev/root 24G 22G 629M 98% / devtmpfs 10M 112K 9.9M 2% /dev tmpfs 76M 48K 76M 1% /run tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 151M 40K 151M 1% /tmp tmpfs 151M 0 151M 0% /run/shm
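    A debugging sketch that often narrows this down quickly: run rsyslogd in the foreground with debug output, push a test message through it, and rule out the nearly full root filesystem shown in df -h (commands assume the stock Debian package layout):

        /etc/init.d/rsyslog stop
        rsyslogd -n -d 2>&1 | tee /tmp/rsyslog-debug.log &   # -n: don't fork, -d: debug output
        logger "rsyslog debug test"
        grep -iE 'error|permission' /tmp/rsyslog-debug.log
        df -h / && df -i /                                   # a full disk or exhausted inodes also silences logging
        ls -l /dev/log /var/spool/postfix/dev/log            # the sockets lsof shows rsyslogd holding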

    Read the article

  • Problem installing build-essential and upgrading g++ on Ubuntu 8.04

    - by ehsanul
    I'm having some trouble with dependencies it seems, but myself don't really know how to resolve the issue. Here's the output: ~:) sudo apt-get install build-essential Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: build-essential: Depends: g++ (>= 4:4.3.1) but 4:4.2.3-1ubuntu6 is to be installed E: Broken packages ~:) sudo apt-get install g++ Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed Depends: g++-4.3 (>= 4.3.1-1) but it is not going to be installed Depends: gcc-4.3 (>= 4.3.1-1) but it is not installable E: Broken packages ~:) Edit: I just tried aptitude instead of apt-get, as suggested. Doesn't work, had other problems: ~:) sudo aptitude install build-essential [sudo] password for ehsanul: Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Building tag database... Done The following packages are BROKEN: g++ g++-4.3 libstdc++6-4.3-dev The following packages have been automatically kept back: dpkg-dev fakeroot libdns35 libisc35 linux-libc-dev patch The following NEW packages will be automatically installed: libgmp3c2 libmpfr1ldbl The following packages have been kept back: adobe-flashplugin bind9-host dnsutils gvfs gvfs-backends gvfs-fuse libatm1 libbind9-30 libgvfscommon0 libisccc30 libisccfg30 liblwres30 libnautilus-extension1 linux-headers-2.6.24-24 linux-headers-2.6.24-24-generic linux-image-2.6.24-24-generic nautilus nautilus-data The following NEW packages will be installed: libgmp3c2 libmpfr1ldbl The following packages will be upgraded: build-essential The following partially installed packages will be configured: timidity 2 packages upgraded, 4 newly installed, 0 to remove and 24 not upgraded. Need to get 775kB/6265kB of archives. After unpacking 20.3MB will be used. The following packages have unmet dependencies: libstdc++6-4.3-dev: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package. Depends: libstdc++6 (>= 4.3.2-1ubuntu11) but 4.2.4-1ubuntu4 is installed. g++-4.3: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package. Depends: gcc-4.3 (= 4.3.2-1ubuntu11) which is a virtual package. Depends: libc6 (>= 2.8~20080505) but 2.7-10ubuntu4 is installed. 
g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed. Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed. Depends: gcc-4.3 (>= 4.3.1-1) which is a virtual package. Resolving dependencies... The following actions will resolve these dependencies: Keep the following packages at their current version: build-essential [11.3ubuntu1 (hardy, now)] g++ [4:4.2.3-1ubuntu6 (hardy-updates, now)] g++-4.3 [Not Installed] libstdc++6-4.3-dev [Not Installed] Score is -9852 Accept this solution? [Y/n/q/?]
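    The versions being demanded (gcc/g++ 4.3, libc6 2.8) belong to a release newer than Hardy 8.04, which usually points at mixed APT sources or a partially applied upgrade. A reasonable first pass (nothing Hardy-specific assumed beyond standard APT tools):

        lsb_release -a                                   # confirm which release is actually installed
        grep -rv '^[[:space:]]*#' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null
        apt-cache policy build-essential g++ gcc cpp     # see which repository offers which version
        sudo apt-get update
        sudo apt-get -f install                          # let APT attempt to repair broken packages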

    Read the article

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some docker containers, configured 6in4 tunnel server and IPv6 forwarding on the Linode: # uname -a Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default link/sit 106.185.41.115 peer 1.2.3.4 inet6 fd00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::6ab9:2973/64 scope link valid_lft forever preferred_lft forever 13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fc00::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever // Docker containers are bridged to docker0 On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the docker container: # uname -a Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux # ip addr .. snipped .. 16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default link/sit 0.0.0.0 peer 106.185.41.115 inet6 fd00::2/64 scope global valid_lft forever preferred_lft forever inet6 fe80::a97:302/64 scope link valid_lft forever preferred_lft forever inet6 fe80::ac19:1/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1f0/64 scope link valid_lft forever preferred_lft forever inet6 fe80::c0a8:1fa/64 scope link valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether *** brd ff:ff:ff:ff:ff:ff .. snipped .. inet6 fd00:0:1::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::2e0:6fff:fe0e:365e/64 scope link valid_lft forever preferred_lft forever # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1 # Add route to docker container # ip -6 route .. snipped unrelated routes fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472 fd00::/64 dev sit-argo proto kernel metric 256 fd00:0:1::/64 dev eth0 proto kernel metric 256 fe80::/64 dev sit-argo proto kernel metric 256 (Note that tunnel MTU on my local box is different from the server, this is intentional for testing) After adding the host route to the docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problem until the route expires. After that I couldn't get reply any more. If I run ip -6 route in a few seconds after expiration, expiration time of the host route will be a negative number: fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec And output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which fails to route it correctly of course, since the address is not globally routable). After some time, the host route disappears without a trace. This problem won't happen if I do either one of the following things: Set MTU of tunnel on my local box to be the same as the server (1472). The route won't have expiration time in both ip -6 route and ip route get in this case. Instead of adding a host route, add a route with network mask (even /127 works). 
In this case ip -6 route shows the route without an expiration time, while ip route get shows an expiration time, but it is correctly refreshed after expiration. Although this problem can be easily resolved, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
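    As a workaround sketch (addresses and interface names from the question): pinning the route's MTU, or simply making both tunnel endpoints agree, removes the need for the kernel to hold a PMTU exception that expires together with the cached host route:

        ip -6 route replace fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo mtu 1472
        ip -6 route get fc00::1875:8606:d8c1:8a9d    # should now resolve without an expiry countdown
        ip link set dev sit-argo mtu 1472            # alternative: match the Linode side's tunnel MTU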

    Read the article

  • Converting an Oracle VM VirtualBox VM into an Oracle VM Server image

    - by wim.coekaerts
    As we are working on tighter seemless moving of VM's between the 2 products, here are a few simple steps to convert an existing Oracle VM VirtualBox image over. Steps involved to make it easy/straightforward : (1) When creating a VM in Virtualbox, using Oracle Linux as an example, make sure that /etc/fstab only uses labels. Do not use hardcoded device names. instead of an entry /dev/sda1 /u01 ext3 defaults 1 1 use LABEL=foo /u01 ext3 defaults 1 1 for more info on labels : man e2label or use a logical volume /dev/VolGroup00/LVfoo /u01 ext3 defaults 1 1 Doing so will make it easier to have an OS boot up on a different hypervisor with potentially different device names. For instance, the VirtualBox VM might expose a scsi driver while in Oracle VM Server you might end up with an ide disk, this then changes /dev/sda to /dev/hda. (2) If you have a VM created that you want to convert, then shut down the VM in VirtualBox and convert the image files : go the the directory that contains your HardDisk image files (.VirtualBox/HardDisks/* as an example) for each of the virtual disks run the following command : VBoxManage clonehd virtualdiskfilename.vdi system.img --format raw where virtualdiskfilename.vdi is the original VBox VM file (this can also be a vmdk file) and system.img is the name of the virtualdisk for Oracle VM. this can be any filename as well, I typically use system.img to specify the boot disk (as is common for Oracle VM template creation) (3) create a vm.cfg To run a VM converted from VirtualBox, you have to create a vm.cfg for Oracle VM server that creates an HVM guest. The easiest is to use a simple hvm vm.cfg and change it for your vm. I have an example here : acpi = 1 apic = 1 builder = 'hvm' device_model = '/usr/lib/xen/bin/qemu-dm' disk = ['file:system.img,hda,w', 'file:oracle.img,hdb,w',',hdc:cdrom,r',] kernel = '/usr/lib/xen/boot/hvmloader' memory = '1024' name = 'vmname' on_crash = 'restart' on_reboot = 'restart' pae = 1 serial = 'pty' timer_mode = '0' usbdevice = 'tablet' vcpus = 1 vif = ['bridge=xenbr0,type=ioemu'] vif_other_config = [] vnc = 1 vncconsole = 1 vnclisten = '0.0.0.0' vncpasswd = '' vncunused = 1 If you take the above vm.cfg, all you need to do - modify disk = (add your virtual disks in there) - modify memory = (amount of memory your VM needs) - modify name = (enter a name for your VM here) - modify vif = (might want to replace bridge=xenbr0 to the bridge you want to use) if you want more than 1 vcpu or other changes of course you have to make those as well. (4) copy this set of files onto your Oracle VM server or onto a webserver in a subdirectory and import the template through Oracle VM Manager. You can also just start the vm using xm create vm.cfg if you like. And that's it. As I said, we are working on automation around all this but it is relatively trivial to convert VM's over as long as you take the basic issues into account. Primarily the set up of the filesystems and the use of labels in /etc/fstab. There are other potential things to look at, such as network config. If you want to make that part clean then prior to shutting down the VM change /etc/modprobe.conf and/or add the mac address of the VM into the vm.cfg in the vifs line. The good thing, at least with Linux, is that even tho the virtual hardware changes, Linux will deal with it just fine (e1000 vs 8139 realtek, ide vs scsi etc). hope this helps.
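    Step (2) can be scripted when a VM has several disks; a small sketch (the directory is the default VirtualBox location mentioned above, adjust as needed):

        cd ~/.VirtualBox/HardDisks
        for vdi in *.vdi; do
            VBoxManage clonehd "$vdi" "${vdi%.vdi}.img" --format raw   # same conversion as above, per disk
        done
        ls -lh *.img    # raw images are fully allocated, so verify the resulting sizes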

    Read the article

  • ASP.NET Routing not working on IIS 7.0

    - by Rick Strahl
    I ran into a nasty little problem today when deploying an application using ASP.NET 4.0 Routing to my live server. The application and its Routing were working just fine on my dev machine (Windows 7 and IIS 7.5), but when I deployed (Windows 2008 R1 and IIS 7.0) Routing would just not work. Every time I hit a routed url IIS would just throw up a 404 error: This is an IIS error, not an ASP.NET error so this doesn’t actually come from ASP.NET’s routing engine but from IIS’s handling of expressionless URLs. Note that it’s clearly falling through all the way to the StaticFile handler which is the last handler to fire in the typical IIS handler list. In other words IIS is trying to parse the extension less URL and not firing it into ASP.NET but failing. As I mentioned on my local machine this all worked fine and to make sure local and live setups match I re-copied my Web.config, double checked handler mappings in IIS and re-copied the actual application assemblies to the server. It all looked exactly matched. However no workey on the server with IIS 7.0!!! Finally, totally by chance, I remembered the runAllManagedModulesForAllRequests attribute flag on the modules key in web.config and set it to true: <system.webServer> <modules runAllManagedModulesForAllRequests="true"> <add name="ScriptCompressionModule" type="Westwind.Web.ScriptCompressionModule,Westwind.Web" /> </modules> </system.webServer> And lo and behold, Routing started working on the live server and IIS 7.0! This seems really obvious now of course, but the really tricky thing about this is that on IIS 7.5 this key is not necessary. So on my Windows 7 machine ASP.NET Routing was working just fine without the key set. However on IIS 7.0 on my live server the same missing setting was not working. On IIS 7.0 this key must be present or Routing will not work. Oddly on IIS 7.5 it appears that you can’t even turn off the behavior – setting runtAllManagedModuleForAllRequests="false" had no effect at all and Routing continued to work just fine even with the flag set to false, which is NOT what I would have expected. Kind of disappointing too that Windows Server 2008 (R1) can’t be upgraded to IIS 7.5. It sure seems like that should have been possible since the OS server core changes in R2 are pretty minor. For the future I really hope Microsoft will allow updating IIS versions without tying them explicitly to the OS. It looks like that with the release of IIS Express Microsoft has taken some steps to untie some of those tight OS links from IIS. Let’s hope that’s the case for the future – it sure is nice to run the same IIS version on dev and live boxes, but upgrading live servers is too big a deal to do just because an updated OS release came out. Moral of the story – never assume that your dev setup will work as is on the live setup. It took me forever to figure this out because I assumed that because my web.config on the local machine was fine and working and I copied all relevant web.config data to the server it can’t be the configuration settings. I was looking everywhere but in the .config file forever before getting desperate and remembering the flag when I accidentally checked the intellisense settings in the modules key. Never assume anything. The other moral is: Try to keep your dev machine and server OS’s in sync whenever possible. Maybe it’s time to upgrade to Windows Server 2008 R2 after all. More info on Extensionless URLs in IIS Want to find out more exactly on how extensionless Urls work on IIS 7? 
Then check out How ASP.NET MVC Routing Works and its Impact on the Performance of Static Requests, which goes into great detail on the complexities of the process. Thanks to Jeff Graves for pointing me at this article – a great linked reference for this topic! © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, Windows

    Read the article

  • Converting a PV vm back into an HVM vm

    - by wim.coekaerts
    I have been doing some Oracle VM benchmark stuff in the last week or 2 in my off hours and yesterday I wanted to convert one of my VMs that was based on a paravirt kernel into a vm that just boots as a regular hardware virt VM with a standard x86-64 kernel. It took me a little while to figure out the fastest way so now that I have it pretty much down I wanted to share the steps. A PV kernel uses pygrub and a paravirt kernel image that lives on the vm image virtual disk. since this disk image does not have to be bootable it doesn't contain a boot sector and if you just restart the VM in hvm mode the virtual bios will just not do much as it can't start the boot process from disk The first thing I do is make a backup of my vm.cfg file :-) and then edit it as follows : the original file contains : bootloader = '/usr/bin/pygrub' I replace that with : acpi = 1 apic = 1 builder = 'hvm' device_model = '/usr/lib/xen/bin/qemu-dm' kernel = '/usr/lib/xen/boot/hvmloader' then changing the disk files. I change my xvd disks to hd disks and I copy over the iso image of my instal lDVD. In the case of my VM template it was based on OL5U4 So I downloaded Enterprise-R5-U4-Server-x86_64-dvd.iso and added it as a cd device. disk = ['file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/System.img,xvda,w', 'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Oracle11202RAC_x86_64-xvdb.img,xvdb,w', ] to disk = ['file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/System.img,hda,w', 'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Oracle11202RAC_x86_64-xvdb.img,hdb,w', 'file:/ovs/OVM_EL5U4_X86_64_11202RAC_PVM/Enterprise-R5-U4-Server-x86_64-dvd.iso, hdc:cdrom,r', ] boot='d' for the network devices (vifs) I change : vif = ['bridge=xenbr2,type=netfront'] to vif = ['bridge=xenbr2,type=ioemu'] That should do it. Next, inside the VM, I copy over the regular kernel rpm that I want to end up running in hvm mode. In this example case it was : kernel-2.6.18-164.0.0.0.1.el5.x8664.rpm. I will use that later on in the process. I put this kernel simply in /root At this point I just start the vm with xm create vm.cfg and start my vnc console to the vm console. Oracle Linux will boot from the iso image, I just go through the install steps and click on UPgrade existing (not re-install). Because the VM is the same as the ISO the install won't actually do anything and it will run through instantly. When the "Reboot" button pops up, don't reboot. Switch to the command prompt console. hi alt-f2 to go to the shell prompt. Now it's easy : umount /mnt/sysimage/boot cd /mnt/sysimage chroot . mount /dev/hda1 (if that was your /boot partition) export PATH=/sbin:$PATH (just to clean that up) edit /etc/modprobe.conf and comment out the xen modules (just put a # in front) Install grub. if your /boot is hda1 then that is (hd0,0) $ grub root (hd0,0) setup (hd0) exit grub now you have a good bootsector, grub installed and you have your grub.conf file Install the new kernel cd root (this is your old /root in your pv image) rpm -ivh remove (or comment out) boot='d' in your vm.cfg restart the VM and you should be good to go, regular grub should start and load your environment. Caveats : this assumes you used labels for your filesystems. if /etc/fstab were to have devices listed then you would have to rename these device before rebooting as well. If you had a /dev/xvda disk then this would be /dev/hda or /dev/sda. All in all it is a relatively short and simple process.
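    The caveat about labels can be checked from inside the still-running PV guest before converting; a rough pre-flight sketch using standard e2fsprogs/util-linux tools:

        grep -E '^[[:space:]]*/dev/(xvd|sd|hd)' /etc/fstab && echo "WARNING: hardcoded device names in fstab"
        blkid                      # list labels/UUIDs for every filesystem
        e2label /dev/xvda1         # show (or set) the label on the boot filesystem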

    Read the article

  • Configuring DBFS on Exadata

    - by Liu Maclean(???)
    ?Exadata???DBFS ??????? 1. ??fuse RPM  [root@dm01db01 ~]# yum install fuse Loaded plugins: rhnplugin, security This system is not registered with ULN. ULN support will be disabled. Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package fuse.x86_64 0:2.7.4-8.0.1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================  Package                            Arch                                 Version                                         Repository                                Size ======================================================================================================================================================================== Installing:  fuse                               x86_64                               2.7.4-8.0.1.el5                                 el5_latest                                85 k Transaction Summary ======================================================================================================================================================================== Install       1 Package(s) Upgrade       0 Package(s) Total download size: 85 k Is this ok [y/N]: y Downloading Packages: fuse-2.7.4-8.0.1.el5.x86_64.rpm                                                                                                                  |  85 kB     00:00      Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : fuse                                                                                                                                             1/1  Installed:   fuse.x86_64 0:2.7.4-8.0.1.el5                                                                                                                                          [root@dm01db01 ~]# yum install fuse-libs Loaded plugins: rhnplugin, security This system is not registered with ULN. ULN support will be disabled. 
Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package fuse-libs.i386 0:2.7.4-8.0.1.el5 set to be updated ---> Package fuse-libs.x86_64 0:2.7.4-8.0.1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================  Package                                Arch                                Version                                       Repository                               Size ======================================================================================================================================================================== Installing:  fuse-libs                              i386                                2.7.4-8.0.1.el5                               el5_latest                               71 k  fuse-libs                              x86_64                              2.7.4-8.0.1.el5                               el5_latest                               70 k Transaction Summary ======================================================================================================================================================================== Install       2 Package(s) Upgrade       0 Package(s) Total download size: 141 k Is this ok [y/N]: y Downloading Packages: (1/2): fuse-libs-2.7.4-8.0.1.el5.x86_64.rpm                                                                                                      |  70 kB     00:00      (2/2): fuse-libs-2.7.4-8.0.1.el5.i386.rpm                                                                                                        |  71 kB     00:00      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total                                                                                                                                    71 kB/s | 141 kB     00:01      Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : fuse-libs                                                                                                                                        1/2    Installing     : fuse-libs                                                                                                                                        2/2  Installed:   fuse-libs.i386 0:2.7.4-8.0.1.el5                                                  fuse-libs.x86_64 0:2.7.4-8.0.1.el5                                                  Complete! [root@dm01db01 ~]# yum install fuse-devel Loaded plugins: rhnplugin, security This system is not registered with ULN. ULN support will be disabled. 
Setting up Install Process Resolving Dependencies --> Running transaction check ---> Package fuse-devel.i386 0:2.7.4-8.0.1.el5 set to be updated ---> Package fuse-devel.x86_64 0:2.7.4-8.0.1.el5 set to be updated --> Finished Dependency Resolution Dependencies Resolved ========================================================================================================================================================================  Package                                 Arch                                Version                                      Repository                               Size ======================================================================================================================================================================== Installing:  fuse-devel                              i386                                2.7.4-8.0.1.el5                              el5_latest                               28 k  fuse-devel                              x86_64                              2.7.4-8.0.1.el5                              el5_latest                               28 k Transaction Summary ======================================================================================================================================================================== Install       2 Package(s) Upgrade       0 Package(s) Total download size: 57 k Is this ok [y/N]: y Downloading Packages: (1/2): fuse-devel-2.7.4-8.0.1.el5.x86_64.rpm                                                                                                     |  28 kB     00:00      (2/2): fuse-devel-2.7.4-8.0.1.el5.i386.rpm                                                                                                       |  28 kB     00:00      ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ Total                                                                                                                                    21 kB/s |  57 kB     00:02      Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Test Succeeded Running Transaction   Installing     : fuse-devel                                                                                                                                       1/2    Installing     : fuse-devel                                                                                                                                       2/2  Installed:   fuse-devel.i386 0:2.7.4-8.0.1.el5                                                 fuse-devel.x86_64 0:2.7.4-8.0.1.el5                                                 Complete! 2. ?? DBFS??? ?????? cd $ORACLE_HOME/rdbms/admin sqlplus / as sysdba Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options SQL> @prvtfspi.plb Package body created. No errors. Package body created. No errors. ?????dbms_dbfs_sfs package  SQL> create tablespace dbfstbs datafile size 20g; Tablespace created. SQL> create user maclean_dbfs identified by oracle; User created. SQL> grant dba to maclean_dbfs; Grant succeeded. @@!!! SQL> grant  dbfs_role to maclean_dbfs; Grant succeeded. 3. ??DBFS SQL> conn maclean_dbfs/oracle Connected. SQL> @?/rdbms/admin/dbfs_create_filesystem.sql  dbfstbs mac_dbfs   No errors. 
-------- CREATE STORE: begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_MAC_DBFS', tbl_name => 'T_MAC_DBFS', tbl_tbs => 'dbfstbs', lob_tbs => 'dbfstbs', do_partition => false, partition_key => 1, do_compress => false, compression => '', do_dedup => false, do_encrypt => false); end; -------- REGISTER STORE: begin dbms_dbfs_content.registerStore(store_name=> 'FS_MAC_DBFS', provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end; -------- MOUNT STORE: begin dbms_dbfs_content.mountStore(store_name=>'FS_MAC_DBFS', store_mount=>'mac_dbfs'); end; -------- CHMOD STORE: declare m integer; begin m := dbms_fuse.fs_chmod('/mac_dbfs', 16895); end; No errors. 4.  ??mount point  [root@dm01db01 ~]# mkdir /dbfs [root@dm01db01 ~]# chown oracle:oinstall /dbfs 5. ??library path ?OS  # echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf 6. ?????? export ORACLE_HOME=/s01/orabase/product/11.2.0/dbhome_1 [root@dm01db01 ~]# ln -s $ORACLE_HOME/lib/libclntsh.so.11.1 /usr/local/lib/libclntsh.so.11.1 [root@dm01db01 ~]#  ln -s $ORACLE_HOME/lib/libnnz11.so /usr/local/lib/libnnz11.so [root@dm01db01 ~]#  ln -s /lib64/libfuse.so.2 /usr/local/lib/libfuse.so.2 7. ??ldconfig  [root@dm01db01 ~]# ldconfig [root@dm01db01 ~]#  8. ??fusermount??????? [root@dm01db01 ~]#  chmod +x /usr/bin/fusermount [root@dm01db01 ~]#  ls -l /usr/bin/fusermount lrwxrwxrwx 1 root root 15 Sep  7 03:06 /usr/bin/fusermount -> /bin/fusermount [root@dm01db01 ~]#  ls -l /bin/fusermount -rwsr-x--x 1 root fuse 27072 Oct 17  2011 /bin/fusermount 9. ???????OS  dbfs_client maclean_dbfs@dm01db01:1521/orcl  /dbfs 10. ????nohup + &?????mount DBFS,???????????? [oracle@dm01db01 ~]$ echo "oracle"  >> dbfs_pw [oracle@dm01db01 ~]$ nohup dbfs_client maclean_dbfs@dm01db01:1521/orcl /dbfs < dbfs_pw & [oracle@dm01db01 ~]$ df -h Filesystem            Size  Used Avail Use% Mounted on /dev/mapper/VGExaDb-LVDbSys1                        30G   15G   14G  53% / /dev/sda1             502M   30M  447M   7% /boot /dev/mapper/VGExaDb-LVDbOra1                        99G   20G   75G  21% /u01 tmpfs                  81G     0   81G   0% /dev/shm dbfs-maclean_dbfs@orcl:/                        20G  120K   20G   1% /dbfs [oracle@dm01db01 ~]$ mount /dev/mapper/VGExaDb-LVDbSys1 on / type ext3 (rw) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) /dev/sda1 on /boot type ext3 (rw,nodev) /dev/mapper/VGExaDb-LVDbOra1 on /u01 type ext3 (rw,nodev) tmpfs on /dev/shm type tmpfs (rw,size=82052m) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) dbfs-maclean_dbfs@orcl:/ on /dbfs type fuse (rw,nosuid,nodev,max_read=1048576,default_permissions,user=oracle) [oracle@dm01db01 ~]$ ls -l /dbfs/ total 0 drwxrwxrwx 3 root root 0 Sep 14 05:11 mac_dbfs [oracle@nas ~]$ dbfs_client  --------MOUNT mode: usage: dbfs_client <db_user>@<db_server> [options] <mountpoint>   db_user:              Name of Database user that owns DBFS content repository filesystem(s)   db_server:            A valid connect string for Oracle database server                         (for example, hrdb_host:1521/hrservice)   mountpoint:           Path to mount Database File System(s)                         All the file systems owned by the database user will be seen at the mountpoint. DBFS options:   -o direct_io          Bypass the Linux page cache. Gives much better performance for large files.                         Programs in the file system cannot be executed with this option.                         
This option is recommended when DBFS is used as an ETL staging area.   -o wallet             Run dbfs_client in background.                         Wallet must be configured to get credentials.   -o failover           dbfs_client fails over to surviving database instance with no data loss.                         Some performance cost on writes, especially for small files.   -o allow_root         Allows root access to the filesystem.                         This option requires setting 'user_allow_other' parameter in '/etc/fuse.conf'.   -o allow_other        Allows other users access to the file system.                         This option requires setting 'user_allow_other' parameter in '/etc/fuse.conf'.   -o rw                 Mount the filesystem read-write. [Default]   -o ro                 Mount the filesystem read-only. Files cannot be modified.   -o trace_file=STR     Tracing <filename> | 'syslog'   -o trace_level=N      Trace Level: 1->DEBUG, 2->INFO, 3->WARNING, 4->ERROR, 5->CRITICAL [Default: 4]   -h                    help   -V                    version --------COMMAND mode: Usage:     dbfs_client <db_user>@<db_server> --command command [switches] [arguments]             command:          Command to be executed, e.g., ls, cp, mkdir, rm            switches:         Switches are described below for each command.            arguments:        File names or directory names NOTE:      All database pathnames must be absolute and preceded by dbfs:/ Commands   ls            dbfs_client <db_user>@<db_server> --command ls [switches] target      Switches:              -a         Show all files including those starting with '.'            -l         Use a long listing format. In addition to the name of each file                       print the file type, permissions, size, user and group information            -R         List subdirectories recursively cp                     dbfs_client <db_user>@<db_server> --command cp [switches] source destination      Switches:              -r, -R      Copy a directory and its contents recursively into the destination directory rm                     dbfs_client <db_user>@<db_server> --command rm [switches] target      Switches:              -r, -R      Removes a directory and its contents recursively mkdir                  dbfs_client <db_user>@<db_server> --command mkdir directory_name Examples                     dbfs_client ETLUser@DBConnectString --command ls -l -a dbfs:/staging_area/directory1            dbfs_client ETLUser@DBConnectString --command cp -R  /tmp/1-Jan-2009-dump dbfs:/staging_area            dbfs_client ETLUser@DBConnectString --command rm dbfs:/staging_area/hello.txt            dbfs_client ETLUser@DBConnectString --command mkdir dbfs:/staging_area/directory2 [oracle@dm01db01 ~]$ ls -lh /tmp/largefile -rw-r--r-- 1 oracle oinstall 2.0G Sep 14 08:50 /tmp/largefile [oracle@dm01db01 ~]$ time dbfs_client  maclean_dbfs@dm01db01:1521/orcl --command cp /tmp/largefile dbfs:/mac_dbfs Password: /tmp/largefile -> dbfs:/mac_dbfs/largefile real    0m11.802s user    0m0.580s sys     0m2.375s ?Exadata?????2G?????? DBFS???11s => 200MB/s 

    Read the article

  • Development Environment in a VM against an isolated development/test network

    - by bart
    I currently work in an organization that forces all software development to be done inside a VM. This is for a variety of risk/governance/security/compliance reasons. The standard setup is something like: VMWare image given to devs with tools installed VM is customized to suit project/stream needs VM sits in a network & domain that is isolated from the live/production network SCM connectivity is only possible through dev/test network Email and office tools need to be on live network so this means having two separate desktops going at once Heavyweight dev tools in use on VMs so they are very resource hungry Some problems that people complain about are: Development environment runs slower than normal (host OS is windows XP so memory is limited) Switching between DEV machine and Email/Office machine is a pain, simple things like cut and paste are made harder. This is less efficient from a usability perspective. Mouse in particular doesn't seem to work properly using VMWare player or RDP. Need a separate login to Dev/Test network/domain Has anyone seen or worked in other (hopefully better) setups to this that have similar constraints (as mentioned at the top)? In particular are there viable options that would remove the need for running stuff in a VM altogether?

    Read the article

  • Problem calling Request using RequestBuilder

    - by Tushar Ahirrao
    Hi, my code is: String url = "http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity"; RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url)); try { Request request = builder.sendRequest(null, new RequestCallback() { public void onError(Request request, Throwable exception) { // Couldn't connect to server (could be timeout, SOP violation, etc.) } public void onResponseReceived(Request request, Response response) { System.out.println(response.getText() + "Response"); if (200 == response.getStatusCode()) { Window.alert(response.getText()); } else { Window.alert(response.getText()); } } }); } catch (RequestException e) { e.printStackTrace(); } I receive the following error: com.google.gwt.http.client.RequestPermissionException: The URL http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity is invalid or violates the same-origin security restriction at com.google.gwt.http.client.RequestBuilder.doSend(RequestBuilder.java:378) at com.google.gwt.http.client.RequestBuilder.sendRequest(RequestBuilder.java:254) at com.ip.client.IpAddressTest.onModuleLoad(IpAddressTest.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369) at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185) at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380) at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222) at java.lang.Thread.run(Thread.java:619) Caused by: com.google.gwt.http.client.RequestException: (NS_ERROR_DOM_BAD_URI): Access to restricted URI denied
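
    Side note on the error itself: RequestBuilder runs inside the browser, so it is bound by the same-origin policy. The page is served from the GWT dev-mode host while the request targets gd.geobytes.com, which is exactly what the RequestPermissionException complains about. The usual ways around it are JSONP (if the remote service supports a callback parameter) or routing the call through your own server so the browser only ever talks to one origin. Below is a minimal same-origin proxy sketch in Python; the /geobytes path prefix, the port and the forwarding scheme are illustrative assumptions, not something taken from the question.

        # Minimal same-origin proxy sketch (Python 3, standard library only).
        # The port and the "/geobytes" path prefix are illustrative assumptions.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import urlopen

        UPSTREAM = "http://gd.geobytes.com"

        class ProxyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                if not self.path.startswith("/geobytes/"):
                    self.send_error(404)
                    return
                # Forward /geobytes/gd?... to http://gd.geobytes.com/gd?...
                upstream_url = UPSTREAM + self.path[len("/geobytes"):]
                with urlopen(upstream_url) as resp:
                    body = resp.read()
                    content_type = resp.headers.get("Content-Type", "text/plain")
                self.send_response(200)
                self.send_header("Content-Type", content_type)
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), ProxyHandler).serve_forever()

    The GWT code would then point RequestBuilder at a relative URL such as /geobytes/gd?after=-1&variables=... so the request stays on the page's own origin.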

    Read the article

  • Substitute values (for specific dates) from a second data frame to the first data frame

    - by user1665355
    I have two time series data frames: The first one: head(df1) : GMT MSCI ACWI DJGlbl Russell 1000 Russell Dev S&P GSCI Industrial S&P GSCI Precious 1999-03-01 -0.7000000 0.2000000 -0.1000000 -1.5000000 -1.0000000 -0.4000000 1999-03-02 -0.5035247 0.0998004 -0.7007007 -0.2030457 0.4040404 -0.3012048 1999-03-03 -0.2024291 0.2991027 0.0000000 -0.6103764 0.1006036 -0.1007049 1999-03-04 0.7099391 0.2982107 1.5120968 -0.1023541 0.5025126 0.4032258 1999-03-05 2.4169184 0.8919722 2.1847071 2.7663934 -1.2000000 0.0000000 1999-03-08 0.3933137 0.3929273 0.5830904 -0.0997009 -0.2024291 1.1044177 tail(df1) : GMT MSCI ACWI DJGlbl Russell 1000 Russell Dev S&P GSCI Industrial S&P GSCI Precious 2011-12-23 0.68241470 0.84790673 0.9441385 0.6116208 0.5822862 -0.2345300 2011-12-26 -0.05213764 0.00000000 0.0000000 0.0000000 0.0000000 0.0000000 2011-12-27 0.20865936 0.05254861 0.3117693 0.2431611 0.0000000 -0.7233273 2011-12-28 -0.62467465 -1.20798319 -1.1655012 -0.9702850 -2.0414381 -2.4043716 2011-12-29 0.52383447 0.47846890 0.8647799 0.5511329 -0.0933126 -1.2504666 2011-12-30 0.26055237 1.03174603 -0.4676539 1.2180268 1.9613948 1.7388017 The second one: head(df2) : GMT MSCI.ACWI DJGlbl Russell.1000 Russell.Dev S.P.GSCI.Industrial S.P.GSCI.Precious 1999-06-01 0.00000000 0.24438520 0.0000000 0 -0.88465521 0.008522842 1999-07-01 0.12630441 0.06755621 0.0000000 0 0.29394697 0.000000000 1999-08-02 0.07441812 0.18922829 0.0000000 0 0.02697299 -0.107155063 1999-09-01 -0.36952701 0.08684107 0.1117509 0 0.24520976 0.000000000 1999-10-01 0.00000000 0.00000000 0.0000000 0 0.00000000 1.941266205 1999-11-01 0.41879925 0.00000000 0.0000000 0 0.00000000 -0.197897901 tail(df2) : GMT MSCI.ACWI DJGlbl Russell.1000 Russell.Dev S.P.GSCI.Industrial S.P.GSCI.Precious 2011-07-01 0.00000000 0.0000000 0.0000000 0.0000000 0.00000000 -0.1141162 2011-08-01 0.00000000 0.0000000 0.0000000 0.0000000 0.02627347 0.0000000 2011-09-01 -0.02470873 0.2977585 -0.0911891 0.6367605 0.00000000 0.2830977 2011-10-03 0.42495188 0.0000000 0.4200743 -0.4420027 -0.41012646 0.0000000 2011-11-01 0.00000000 0.0000000 0.0000000 -0.6597739 0.00000000 0.0000000 2011-12-01 0.50273034 0.0000000 0.0000000 0.6476393 0.00000000 0.0000000 The first df cointains daily observations. The second df contains only the "first day of each month" forecasted values. I would like to substitute the values from the second df into the first one. In other words, the "first day of each month" values in the first df will be substituted for the "first day of each month" values from the second df. I tried to write an lapply loop that substitutes the values and was only trying to use match function. But I failed. I could not find the similar question at StackOverflow either... Greatful for any suggestions!
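
    The substitution being asked for (overwrite the daily rows of df1 with the month-start rows of df2 wherever the dates match) usually comes down to aligning on the date index. In R itself something like df1[rownames(df2), ] <- df2 should do it, assuming every date in df2 really exists as a row name in df1 and the columns line up (they are named slightly differently in the two frames shown, so they may need renaming first). Purely for illustration, the same idea sketched in Python/pandas with made-up stand-in data:

        # Illustrative pandas sketch of the same substitution; the tiny data
        # frames here are stand-ins, not the poster's actual series.
        import pandas as pd

        # df1: daily values; df2: values only on the first day of each month.
        df1 = pd.DataFrame(
            {"MSCI_ACWI": [0.1, 0.2, 0.3, 0.4]},
            index=pd.to_datetime(["1999-03-01", "1999-03-02", "1999-04-01", "1999-04-02"]),
        )
        df2 = pd.DataFrame(
            {"MSCI_ACWI": [9.9, 8.8]},
            index=pd.to_datetime(["1999-03-01", "1999-04-01"]),
        )

        # update() aligns on index and columns and overwrites matching cells in place.
        df1.update(df2)
        print(df1)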

    Read the article

  • How to Select Items in Dropdown in Selenium

    - by Marcus Gladir
    Firstly, I have been trying to get the dropdown from this web page: http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl This is the code I have: import urllib2 from bs4 import BeautifulSoup import re from pprint import pprint import sys from selenium import common from selenium import webdriver import selenium.webdriver.support.ui as ui from boto.s3.key import Key import requests url = 'http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl' element_xpath = '//*[@id="Component1"]' driver = webdriver.PhantomJS() driver.get(url) element = driver.find_element_by_xpath(element_xpath) element_xpath = '/option[@value="02"]' all_options = element.find_elements_by_tag_name("option") for option in all_options: print("Value is: %s" % option.get_attribute("value")) option.click() source = driver.page_source.encode('utf-8', 'ignore') driver.quit() source = str(source) soup = BeautifulSoup(source, 'html.parser') print soup What prints out is this: Traceback (most recent call last): File "../../../../test.py", line 58, in <module> Value is: XX main() File "../../../../test.py", line 46, in main option.click() File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 54, in click self._execute(Command.CLICK_ELEMENT) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webelement.py", line 228, in _execute return self._parent.execute(command, params) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/webdriver.py", line 165, in execute self.error_handler.check_response(response) File "/home/eric/dev/octocrawler-env/local/lib/python2.7/site-packages/selenium-2.33.0-py2.7.egg/selenium/webdriver/remote/errorhandler.py", line 158, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.ElementNotVisibleException: Message: u'{"errorMessage":"Element is not currently visible and may not be manipulated","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"81","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:51413","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\\"sessionId\\": \\"30e4fd50-f0e4-11e3-8685-6983e831d856\\", \\"id\\": \\":wdc:1402434863875\\"}","url":"/click","urlParsed":{"anchor":"","query":"","file":"click","directory":"/","path":"/click","relative":"/click","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/click","queryKey":{},"chunks":["click"]},"urlOriginal":"/session/30e4fd50-f0e4-11e3-8685-6983e831d856/element/%3Awdc%3A1402434863875/click"}}' ; Screenshot: available via screen And the weirdest most infuriating bit of it all is that sometimes it actually all works out. I have no clue what's going on here.
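
    For a plain <select> element the Select helper is usually a better fit than clicking hidden <option> nodes directly, and an explicit wait avoids touching the element before it is rendered. A minimal sketch along those lines follows; the id "Component1" and the value "02" are taken from the poster's own xpaths, everything else is illustrative and may still fail if the page replaces the native <select> with a styled JavaScript widget:

        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import Select, WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC

        url = ('http://solutions.3m.com/wps/portal/3M/en_US/Interconnect/Home/'
               'Products/ProductCatalog/Catalog/?PC_Z7_RJH9U5230O73D0ISNF9B3C3SI1000000'
               '_nid=RFCNF5FK7WitWK7G49LP38glNZJXPCDXLDbl')

        driver = webdriver.PhantomJS()
        driver.get(url)

        # Wait until the dropdown is actually visible before interacting with it.
        dropdown = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "Component1")))

        select = Select(dropdown)
        for option in select.options:
            print("Value is: %s" % option.get_attribute("value"))

        # Let Select pick the option instead of clicking the <option> node directly.
        select.select_by_value("02")

        print(driver.page_source[:500])
        driver.quit()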

    Read the article

  • Service reference addition issue in visual studio 2010

    - by user293072
    I am currently working on an application that allows reverse geocoding using silverlight + bing maps. The thing is that I want to add a reference to the reverse geocoding service provided in msdn ( http://msdn.microsoft.com/en-us/library/cc879136.aspx) i.e. http:// dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to get a reference in vs2010, I get the following error: The document at the url http:// dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem: Report from 'XML Schema' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'. Report from 'DISCO Document' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'. Report from 'WSDL Document' is 'There is an error in XML document (1, 1).'. '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1. Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'. Content Type application/soap+xml; charset=utf-8 was not supported by service http: //dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched. The remote server returned an error: (415) Unsupported Media Type. If the service is defined in the current solution, try building the solution and adding the service reference again. It is good to mention that I can access the service URL from the browser (with a no style information warning). I am aware that there are other reverse geolocoding services out there, but I am somewhat forced by certain circumstances to use only Microsoft-related components/services. Please help :)

    Read the article

  • SMO ManagedComputer.ServerInstances is empty

    - by Mark J Miller
    I am trying to use SMO (VS 2010, SQL Server 2008) to connect to SQL Server and view the server protocol configuration. I can connect and list the Services and ClientProtocols as well as the account MSSQLSERVER service is running under. However, the ServerInstances collection is empty. The only instance on the target server is the default (MSSQLSERVER), shouldn't that be in the collection? How can I get an instance of it so I can inspect the ServerProtocols collection? Here's the code I'm using: class Program { static void Main(string[] args) { //machine hosting installed sql server instance ManagedComputer host = new ManagedComputer("dev-it-db01.dev.interbankfx.lcl"); //ManagedComputer host = new ManagedComputer("MRW-IT-DTP69"); if (host.ServerInstances.Count != 0) { //why is this 0? Is it because only the DEFAULT instance exists? Console.WriteLine("/////////////// INSTANCES ////////////////"); foreach (ServerInstance inst in host.ServerInstances) { Console.WriteLine(inst.Name); } } Console.WriteLine("/////////////// SERVICES ////////////////"); // enumerate sql services (looking for MSSSQLSERVER) foreach (Service svc in host.Services) { Console.WriteLine(svc.Name); } Console.WriteLine("/////////////// DETAILS ////////////////"); // get name of MSSQLSERVER instance from user (pick from list above) Service mssqlserver = host.Services["MSSQLSERVER"]; // print service account: .\{account} == "local account", "LocalSystem", "NetworkService", {domain}\{account} == "domain account" Console.WriteLine("Service Account: {0}", mssqlserver.ServiceAccount); // get client protocols foreach (ClientProtocol cp in host.ClientProtocols) { Console.WriteLine("{0} {1} ({2})", cp.Order, cp.DisplayName, cp.IsEnabled ? "Enabled" : "Disabled"); } } } I've also tried: Urn u = new Urn("ManagedComputer[@Name=dev-it-db01.dev.interbankfx.lcl]/ServerInstance[@Name='MSSQLSERVER']/ServerProtocol[@Name='Tcp']"); ServerProtocol tcp = host.GetSmoObject(u) as ServerProtocol; if (tcp != null) { Console.WriteLine("{0}", tcp.DisplayName); } But I get an error message stating: "child expressions are not supported." Any ideas what's wrong?

    Read the article

  • android logging sdcard

    - by Abhi Rao
    Hello, With Android-Emulator I am not able to write/create a file on the SD Card (for logging). Here is what I have done so far - Run mksdcard 8192K C:\android-dev\emu_sdcard\emu_logFile - Create a new AVD, when assign emu_logFile to it so that when I view the AVD Details it says C:\android-dev\emu_sdcard\emu_logFile against the field "SD Card" - Here is the relevant code public class ZLogger { static PrintWriter zLogWriter = null; private static void Initialize() { try { File sdDir = Environment.getExternalStorageDirectory(); if (sdDir.canWrite()) { : File logFile = new File (sdDir, VERSION.RELEASE + "_" + ".log"); FileWriter logFileWriter = new FileWriter(logFile); zLogWriter = new PrintWriter(logFileWriter); zLogWriter.write("\n\n - " + date + " - \n"); } } catch (IOException e) { Log.e("ZLogger", "Count not write to file: " + e.getMessage()); } } sdDir.canWrite returns false - please note it not the exception from adb shell when I do ls I see sdcard as link to /mnt/sdcard. When I do ls -l /mnt here is what I see ls -l /mnt ls -l /mnt drwxr-xr-x root system 2010-12-24 03:41 asec drwx------ root root 2010-12-24 03:41 secure d--------- system system 2010-12-24 03:41 sdcard whereas if I go to the directory where I created emu_sdcard - I see a lock has been issued, as shown here C:dir android-dev\emu_sdcard Volume in drive C is Preload Volume Serial Number is A4F3-6C29 Directory of C:\android-dev\emu_sdcard 12/24/2010 03:41 AM . 12/24/2010 03:41 AM .. 12/24/2010 03:17 AM 8,388,608 emu_logFile 12/24/2010 03:41 AM emu_logFile.lock 1 File(s) 8,388,608 bytes 3 Dir(s) 50,347,704,320 bytes free I have looked at these and other SO questions Android Emulator sdcard push error: Read-only file system (2) Not able to view SDCard folder in the FileExplorer of Android Eclipse I have added the following to AndroidManifest.xml **uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" ** Please let me know your thoughts - what am I missing here? Why does canWrite return false? What should I do to add permissions to sdcard?

    Read the article

  • Linux: How to find all serial devices (ttyS, ttyUSB and others)?

    - by Thomas Tempelmann
    What is the proper way to get a list of all available serial ports/devices on a Linux system? In other words, when I iterate over all devices in /dev/, how do I tell which ones are serial ports in the classic way, i.e. those usually supporting baud rates and RTS/CTS flow control? The solution would be coded in C. I ask because I am using a 3rd party library that does this clearly wrong: It appears to only iterate over /dev/ttyS*. The problem is that there are, for instance, serial ports over USB (provided by USB-RS232 adapters), and those are listed under /dev/ttyUSB*. And reading the Serial-HOWTO at Linux.org, I get the idea that there'll be other name spaces as well, as time comes. So I need to find the official way to detect serial devices. Problem is that there appears none documented, or I can't find it. I imagine one way would be to open all files from /dev/tty* and call a specific ioctl() on them that is only available on serial devices. Would that be a good solution, though?
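
    One approach that catches ttyS*, ttyUSB* and any future naming scheme is to walk /sys/class/tty and keep only the nodes whose device has a driver bound: a bare /sys/class/tty entry with no device/driver symlink is just a reserved name, not real hardware. The question asks for C, so treat the following Python sketch purely as an illustration of the sysfs walk (translating it to C with opendir()/readlink() is mechanical); the paths are standard sysfs locations:

        import glob
        import os

        def serial_ports():
            """Yield /dev names of tty nodes that have a driver bound in sysfs."""
            for sysfs_node in glob.glob("/sys/class/tty/*"):
                driver_link = os.path.join(sysfs_node, "device", "driver")
                if os.path.exists(driver_link):
                    # A bound driver means a real device (serial8250 for ttyS,
                    # a USB-serial driver for ttyUSB, and so on).
                    yield "/dev/" + os.path.basename(sysfs_node)

        if __name__ == "__main__":
            for port in serial_ports():
                print(port)

    One caveat: the legacy serial8250 driver claims a block of ttyS names whether or not a UART is actually wired up, so a fully robust scanner additionally opens those ports and checks the TIOCGSERIAL ioctl for PORT_UNKNOWN before listing them.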

    Read the article

  • With Apache/mod_wsgi how can I redirect to ssl and require Auth?

    - by justin
    I have a Media Temple DV server hosting dev.example.com with django mounted at /. There is a legacy directory in my httpdocs I need to continue to serve at /legacy. But for this directory I need to redirect anyone coming over http over to https, then prompt for http basic auth. In the virtual host conf, I'm pointing the root to a django application: WSGIScriptAlias / /var/django-projects/myproject/apache/django.wsgi <Directory /var/django-projects/myproject/apache> Order allow,deny Allow from all </Directory> Then I alias the legacy directory. Alias /legacy/ /var/www/vhosts/example.com/subdomains/dev/httpdocs/legacy/ <Directory /var/www/vhosts/example.com/subdomains/dev/httpdocs> Order deny,allow Allow from all RewriteEngine On RewriteCond %{SERVER_PORT} 80 RewriteRule ^(.*)$ https://dev.example.com/$1 [R,L] </Directory> This works. It isn't served by django, and the url redirects to https. However, it serves httpdocs/legacy instead of httpsdocs/legacy (where I have an .htaccess that prompts for auth.) Any idea of how I can manage this?

    Read the article

  • Internet Explorer cannot 'fully' load ActiveX Control

    - by K Browne
    Context I am migrating an installer for an ActiveX control from Per-Machine to Per-User. I did this by programming the installer write to HKCU\Software\Classes instead of HKLM\Software\Classes. Problem On my machine (Windows 7 with UAC Enabled), the ActiveX control successfully loads. On the other windows 7 test machines (one with UAC enabled, one with UAC disabled), the control 'partially' loads. What is Partially? When a user visits a page with the ActiveX control, Internet Explorer displays a warning message in a yellow bar on the top of the window. If you click the 'Run add-on' button in the bar, the control becomes visible and begins to run, but Javascript code that tries to access properties of the control return the error: Library not registered. Differences between machines On the dev machine reads from HKCR\CLSID\<GUID> succeed while on the test machines these reads fail. Reads from HKCU succeed on both dev and test machines. Reads from HKLM fail on both test and dev machines. (I collected reads using Sysinternals Process Monitor) Strangely, the keys that Internet Explorer fails to read are clearly visible if I use regedit to view HKCR\CLSID\<GUID> on the test machines. Question What can I do to get the per-user control to load on the test machines? What could cause this difference between the dev machine and the test machines? Why can I see the key in HKCR with RegEdit but Internet Explorer cannot see the key? Any help is appreciated. Thank you.

    Read the article

  • Python import error: Symbol not found, but the symbol is not present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread: ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE The file in question (_spread.so) includes the symbol: $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE (twice because the file is a fat ppc/x86 binary) EDIT: okay, as James points out, the U means that the symbol is undefined but required by the object file. With some more digging I've noticed (where I should have looked first...) these linker errors during compilation: CC=g++ CXX=g++ g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -O3 -I../.. -I../.. -I/usr/local/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -O2 -I/usr/local/include -std=c++98 -pipe -fno-gnu-keywords -fvisibility-inlines-hidden -o SsrcSpread.o -c SsrcSpread.cc CC=g++ CXX=g++ /bin/sh ../../libtool --tag=CXX --mode=link g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python \ -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -L../../ssrc -lssrcspread -L/usr/local/lib -ltspread-core -o _spread.so SsrcSpread.o mkdir .libs g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -o _spread.so SsrcSpread.o -Wl,-bind_at_load -L/Dev/libssrcspread-1.0.6/ssrc /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a -L/usr/local/lib -ltspread-core ld: warning: in ~/Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (ppc) ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (ppc) ld: warning: in /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (i386) ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (i386) I'm also not entirely sure that the 10.4 sdk is the right one for compiling python modules (but switching to 10.6 didn't seem to help).
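
    Given that the linker warnings already point at an architecture mismatch, a quick sanity check is to ask each binary which architectures it actually contains and how many undefined symbols it carries before chasing individual names. A small diagnostic sketch that shells out to the macOS lipo and nm tools; the two paths are simply the ones quoted above and the script itself is illustrative, not part of the build:

        import subprocess

        # Paths copied from the question; adjust as needed.
        FILES = [
            "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/"
            "site-packages/ssrc/_spread.so",
            "/Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a",
        ]

        for path in FILES:
            # lipo -info lists the architectures baked into a (possibly fat) Mach-O file.
            print(subprocess.check_output(["lipo", "-info", path]).decode().strip())
            # Symbols flagged "U" in nm output are required but not defined in the file.
            nm_out = subprocess.check_output(["nm", path]).decode()
            undefined = [line for line in nm_out.splitlines() if " U " in line]
            print("%s: %d undefined symbols" % (path, len(undefined)))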

    Read the article

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch. I then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files and there is a conflict on an external and ignore property -- which I resolve by "using local" (dev). When I check modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these file and "Compare with base," WinMerge comes up and says the "files are identical." I have tried this with the file dates set to "last committed" and not. Why are all of these files showing up as modified, when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >