Search Results

Search found 5062 results on 203 pages for 'david the man'.


  • How does one change the UUID of a Volume on Mac OS X 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a Volume? The background for this question is that I have a duplicate UUID issue: I have /Volumes/OldMacHD with a UUID of XYZ. I have /Volumes/Mirror1 with a UUID of XYZ (same UUID! I bet that's because OldMacHD USED to be part of this mirror). I got these UUIDs via 'diskutil info /dev/thatdisknumber | grep UUID'. I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that if you issue the -s flag, this changes the UUID. However, if you type hfs.util all by itself, it doesn't show you the -s option at all, just every option besides that! Grr. I tried it anyway: sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4 (the raid volume). Nothing happens. No error message, no success message. UUID exactly the same. I tried it while the volume was unmounted. Any ideas?
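
    A possible next step, sketched below: repeat the same hfs.util -s call with the volume unmounted, but against the raw device node, then remount and re-check. The raw node /dev/rdisk4 is an assumption based on the /dev/disk4 mentioned above, and whether -s actually regenerates the UUID on 10.6 is exactly what is in question here.

        # Sketch only: /dev/disk4 is the RAID volume from the question; /dev/rdisk4 is
        # its raw node, which some of the filesystem helper tools expect instead of the block node.
        diskutil unmount /Volumes/Mirror1
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/rdisk4
        diskutil mount /dev/disk4
        diskutil info /dev/disk4 | grep UUID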

    Read the article

  • How can I do an SELINUX filesystem relabel without rebooting first?

    - by Skaperen
    I can touch the file /.autorelabel and reboot, and during the initialization coming back up it will do the SELinux relabel for me. But I want to do this in a different situation where the system has just been copied to a hard drive image. I can chroot to the originating file tree, or chroot to the just populated device image and run it. I just can't find anything that says what needs to be run. This image is being made into an AMI on AWS EC2, and contains CentOS 6.3. But the time it takes to relabel is too long (6 minutes or more). I want to move the relabel to the image build where the extra time is not an issue (because it happens once instead of every time an AMI is launched). I can make this relabel be the very last thing just before the filesystem is unmounted for the last time until it becomes an AMI and will launch. I just need to know what to call to do it. I have searched man pages with no luck. I have searched system init scripts but where /.autorelabel is detected, it is unclear what is happening. Documents like http://www.centos.org/docs/5/html/5.2/Deployment_Guide/sec-sel-fsrelabel.html only tell how to do things that still really do the work after a reboot. I need to have the work done BEFORE the "reboot" (unmount, build AMI, and launch ready to go). The big point is ... yes there will be a reboot ... but I want the relabel work to be done before that so it won't be done every time an AMI is launched (because it takes so long).
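
    A sketch of doing the relabel at image-build time, assuming the image is mounted at /mnt/image (an illustrative path), that it contains the stock CentOS 6 targeted policy, and that the build host has the SELinux userland installed. setfiles is essentially the tool the /.autorelabel boot path ends up running:

        # Relabel the image in place, using the file_contexts shipped inside the image itself.
        setfiles -r /mnt/image \
            /mnt/image/etc/selinux/targeted/contexts/files/file_contexts /mnt/image

        # Or, from inside a chroot into the image:
        # chroot /mnt/image fixfiles -f relabel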

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to backup using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support the replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to backup to an appliance located in a secure office far away from the server room. Offsite backups to external hard drive could be managed from the appliance. The benefits of such a setup would be: 1) very unlikely that fire or random theft would affect both server-room backup and "remote" backup appliance 2) offsite backups could be managed by multiple trusted people without granting access to server room 3) Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history. I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low to medium cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • Unclaimed user group prizes, Live meeting on Monday, Next week's UG, SQLRelay and more prizes

    - by Testas
    Hi Everyone

    Firstly I want to let you know that I finally found the LINQ book prize winners, and the list of people at the bottom of this email are owed a LINQ book. These will be given out at next week’s UG meeting.

    Live meeting with Carolyn Chau, Program Manager at Microsoft, on Monday! It is very rare that we get the opportunity to have a Live meeting with a Program Manager in Redmond. Carolyn Chau will be presenting PowerView next Monday at 8pm. Live meeting details can be found at http://sqlserverfaq.com/events/388/Live-Meeting-on-SQL-Server-2012-PowerView-with-Carolyn-Chau-Principal-Program-Manager-in-the-Reporting-Services-in-association-with-SQLPASS-SQLServerFAQ-and-SQLBits.aspx

    Next week’s UG!! We welcome Mark Broadbent to Manchester next week, where he will be presenting his session on SQL Server 2012 on Windows Core. We will also hand out the unclaimed prizes. Register at http://sqlserverfaq.com/events/369/Thursday-night-meeting-at-BSS-with-Chris-TestaONeill-and-Mark-Broadbent.aspx

    Chris Webb is in Manchester!!! Chris Webb will be speaking at the Manchester SQL Server UG on 4th July. He will also be running his Real World Cube Design and Performance Tuning with Analysis Services course between the 3rd and 5th July. If you want to attend, you can sign up at http://www.technitrain.com/coursedetail.php?c=13&trackingcode=FAQ

    SQLRelay, a special prize, and Jamie Thomson comes to Manchester!!!! SQLRelay takes place in Manchester on the 22nd. We have a special guest: after years of asking, Jamie Thomson is coming to Manchester. The SSIS Junkie will be gracing us with his presence with a talk on SSIS 2012. We also have a prize. Know a friend or colleague who would benefit from SQLRelay? Get them to register at www.sqlserverfaq.com and then register for the event at http://sqlserverfaq.com/events/373/ALL-DAY-TUESDAY-EVENT-12-hours-of-SQL-Server-2012-at-the-SQLRelay-meeting-at-the-COOP-Manchester.aspx Then send an email to [email protected] with the subject SQLFriend and the name of your friend. If you are both at the SQLRelay event on the day and your names are pulled out of the hat, you will win a PASS 2011 DVD and your friend will win the “Best of PASS DVD 2011” worth $1000, courtesy of SQLPASS. The draw will take place between 4.30pm and 5pm on the day.

    SQLBits feedback!!!!! Attended SQLBits? We really need to know your opinion, so please fill out the survey for the days you attended. If you attended any of the days at SQLBits: http://www.sqlbits.com/SQLBitsX If you attended the Thursday Training day: http://www.sqlbits.com/SQLBitsXThursday If you attended the Friday Deep Dives day: http://www.sqlbits.com/SQLBitsXFriday If you attended the Saturday Community day: http://www.sqlbits.com/SQLBitsXSaturday

    Thanks

    Chris and Martin

    LINQ BOOK winners: Andrew Birds, Chris Kennedy, Dave Carpenter, David Forrester, Ian Ringrose, James Cullen, James Simpson, Kevan Riley, Kirsty Hunter, Martin Bell, Martin Croft, Michael Docherty, Naga Anand Ram Mangipudi, Neal Atkinson, Nick Colebourn, Pavel Nefyodov, Ralph Baines, Rick Hibbert, saad saleh, Simon Enion, Stan Venn, Steve Powell, Stuart Quinn

    Read the article

  • Error configuring kerberos5 using MacPorts

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue: libmemcached @0.40 +universal ---> Computing dependencies for libmemcached ---> Dependencies to be installed: cyrus-sasl2 kerberos5 ---> Configuring kerberos5 Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed Error: Failed to install kerberos5 It tells me to look in the log for details. Here's the last bit of the log file: :info:configure checking for setupterm in -lcurses... no :info:configure checking for setupterm in -lncurses... no :info:configure checking for tgetent... no :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library? :info:configure configure: error: /bin/sh './configure' failed for appl/telnet :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man :info:configure Exit code: 1 :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed :debug:configure Error code: NONE :debug:configure Backtrace: configure failure: command execution failed while executing "$procedure $targetname" :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install :error:configure Failed to install kerberos5 :debug:configure Registry error: kerberos5 not registered as installed & active. invoked from within "registry_active ${subport}" invoked from within "$workername eval registry_active \${subport}" :notice:configure Please see the log file for port kerberos5 for details: /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log It seems to say it's missing ncurses. Looks like it's there though, since if I run port installed I see these: ncurses @5.7_0 ncurses @5.9_1 (active) ncursesw @5.7_0 Any ideas on how to get around this error?
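
    A couple of low-risk checks, sketched below, assuming the standard MacPorts prefix of /opt/local: confirm the ncurses libraries are actually on disk where the build will look, then clean the failed build and retry so configure re-probes for tgetent.

        ls /opt/local/lib | grep ncurses      # confirm libncurses dylibs exist under the MacPorts prefix
        sudo port clean kerberos5             # throw away the failed configure state
        sudo port install kerberos5           # retry the dependency build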

    Read the article

  • mrepo and grouplist/groupinstall? mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo: EXAMPLES Here is an example of a repository with a groups file. Note that the groups file should be in the same directory as the rpm packages (i.e. /path/to/rpms/comps.xml). createrepo -g comps.xml /path/to/rpms So here's what I'm doing: wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/ lots of output, no apparent errors or warnings BUT.. from a client: yum grouplist Loaded plugins: refresh-packagekit Setting up Group Process Error: No group data available for configured repositories Here's /etc/mrepo.conf: ### Configuration file for mrepo ### The [main] section allows to override mrepo's default settings ### The mrepo-example.conf gives an overview of all the possible settings [main] srcdir = /var/mrepo wwwdir = /var/www/mrepo confdir = /etc/mrepo.conf.d arch = x86_64 mailto = root@localhost smtp-server = localhost pxelinux = /usr/lib/syslinux/pxelinux.0 tftpdir = /tftpboot #rhnlogin = username:password ### Any other section is considered a definition for a distribution ### You can put distribution sections in /etc/mrepo.conf.d ### Examples can be found in the documentation. Here's /etc/mrepo.conf.d/sl6.mrepo: ### Scientific Linux 6 [SL6] name = Scientific Linux 6 release = 6 arch = x86_64 metadata = repomd repoview os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/ updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/ security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/ fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
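
    Two things worth checking, sketched below (paths follow the layout above; the wwwdir path is an assumption): whether the repomd.xml the clients actually download references a group file at all, since mrepo regenerates the client-facing repodata under wwwdir and the -g data can be dropped there, and whether the client is simply reading stale metadata.

        # On the server: a comps-enabled repo should have a <data type="group"> entry in repomd.xml.
        grep 'type="group"' /var/mrepo/SL6-x86_64/os/Packages/repodata/repomd.xml
        grep 'type="group"' /var/www/mrepo/SL6-x86_64/os/repodata/repomd.xml   # client-facing copy (path assumed)

        # On a client: rule out cached metadata before testing again.
        yum clean all
        yum grouplist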

    Read the article

  • SOCKS5 proxy only, git wants to use ssh to xx.xx.xx.xx - forward? - mac os

    - by AlexAtNet
    I have SOCKS5 proxy configured and want to work with the git repository, originally cloned from ssh:... So when it tries to connect the error "Network is unreachable" appears. There are a few possible solutions: Use GIT URL rewriting and use https:// with proxy option. Probably should work well for github repositories. Use port forwarding and something like iptables/ipfw to rewrite address xx.xx.xx.xx:22 to 127.0.0.1:10yyy I'm trying to do #2. I have limited knowledge in this area, but know that I should use something like iptables. But then I discovered that on a Mac I should use ipfw. And then in the ipfw man page it told me "This utility is DEPRECATED. Please use pfctl(8) instead". So what I want to do is to rewrite xx.xx.xx.xx:22 to 127.0.0.1:10yyy and remove this rewriting. As I read, the pf.conf line should be rdr proto tcp from 127.0.0.1 to xx.xx.xx.xx port 22 -> 127.0.0.1 port 10yyy But how to add (and remove) this rule from command line?
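
    A sketch of the pfctl side, with the anchor name and file path purely illustrative: load the rdr rule into its own anchor so it can be flushed later without disturbing the rest of pf. Note that pf only evaluates an anchor that is referenced from the main ruleset, so an rdr-anchor "gitfwd" line in /etc/pf.conf (or loading under an anchor that is already referenced) is assumed here.

        echo 'rdr proto tcp from 127.0.0.1 to xx.xx.xx.xx port 22 -> 127.0.0.1 port 10yyy' > /tmp/gitfwd.conf
        sudo pfctl -a gitfwd -f /tmp/gitfwd.conf   # add the redirect
        sudo pfctl -e                              # make sure pf itself is enabled
        sudo pfctl -a gitfwd -F all                # later: remove it by flushing just this anchor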

    Read the article

  • St. Louis ALT.NET

    - by Brian Schroer
    I’m a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but always wished there was a local group that discussed more advanced .NET topics. (That’s not a criticism of the group - I appreciate that they want to serve developers with a broad range of skill levels). That’s why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010. Here’s the “about us” statement from the group’s web site: The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas. In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe--a view that challenged the "Microsoft-only" approach to software development. He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy: You're the type of developer who uses what works while keeping an eye out for a better way. You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc. You're not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc. You know tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. Resharper.) The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack. We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship. I don’t see a lot of ALT.NET talk in blogs these days. The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework “vote of no confidence”, but I do see occasional mentions of local groups like the St. Louis one. I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack. The ideas and ideals live on, they’re just not branded as “this is ALT.NET!” In the past 18 months, St. Louis ALT.NET meetups have discussed topics like: NHibernate; F# and other functional languages; AOP; CoffeeScript; “How Ruby Is Making Me a Stronger C# Developer”; using rake for builds; CQRS; .NET dynamic programming; micro web frameworks – Nancy & Jessica; and Git. ALT.NET doesn’t mean (to me, anyway) “alternatives to .NET”, but “alternatives for .NET”. We look at how things are done in Ruby and other languages/platforms, but always with the idea “What can I learn from this to take back to my “day job” with .NET?”. Meetings are held at 7PM on the fourth Wednesday of each month at the offices of Professional Employment Group. PEG is located at 999 Executive Parkway (Suite 100 – lower level) in Creve Coeur (South of Olive off of Mason Road - Here's a map). Food is not supplied (sorry if you’re a big fan of the Papa John’s Crust-Lovers’ Pizza that’s a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that’s cool. Thanks to Nick for organizing, and to Professional Employment Group for lending their offices.
Please visit the meetup site for more information.

    Read the article

  • bind9 DLZ/MySQL on Ubuntu segfaults in libmysqlclient.so

    - by Theos
    I have a big problem. I installed the bind9 nameserver on three different computers: two Ubuntu 10.04.4 LTS, and one Ubuntu 11.10. I compiled versions 9.7.0, 9.7.3 and 9.9.0 with this method: ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \ --mandir=/usr/share/man --infodir=/usr/share/info \ --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static \ --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \ --with-dlz-mysql=yes --with-dlz-bdb=no \ --with-dlz-filesystem=yes --with-geoip=/usr make make install After the setup for dlz/mysql, the BIND server works perfectly for 5-30 minutes, after which I get a segfault. I have temporarily worked around the problem with a simple process watchdog that restarts named whenever it stops, but that is not a good idea in the long term. My log output is: messages: Apr 13 19:33:51 dnsvm kernel: [ 8.088696] eth0: link up Apr 13 19:33:58 WATCHDOG: named not running. Restarting Apr 13 19:35:08 dnsvm kernel: [ 87.082572] named[1027]: segfault at 88 ip b71c4291 sp b5adfe30 error 4 in libmysqlclient.so.16.0.0[b714e000+1aa000] Apr 13 19:35:08 WATCHDOG: named not running. Restarting Apr 13 19:35:08 dnsvm kernel: [ 87.457510] named[1423]: segfault at 68 ip b71d6122 sp b52f0a40 error 4 in libmysqlclient.so.16.0.0[b7160000+1aa000] Apr 13 19:35:09 WATCHDOG: named not running. Restarting Apr 13 19:41:56 dnsvm kernel: [ 494.838206] named[1448]: segfault at 88 ip b731c291 sp b5436e30 error 4 in libmysqlclient.so.16.0.0[b72a6000+1aa000] Apr 13 19:41:57 WATCHDOG: named not running. Restarting Apr 13 19:57:26 dnsvm kernel: [ 1424.023409] named[2976]: segfault at 88 ip b72d1291 sp b6beee30 error 4 in libmysqlclient.so.16.0.0[b725b000+1aa000] Apr 13 19:57:26 WATCHDOG: named not running. Restarting Apr 13 20:11:56 dnsvm kernel: [ 2294.324663] named[6441]: segfault at 88 ip b7357291 sp b6473e30 error 4 in libmysqlclient.so.16.0.0[b72e1000+1aa000] Apr 13 20:11:57 WATCHDOG: named not running. Restarting syslog: http://pastebin.com/hjUyt8gN The first server is a native, normal x64 server (u1004lts), the second is a virtualised server (u11.10), and the third is also virtualised (10.04lts). These servers only provide DNS backed by a MySQL database, but the problem occurs on every server and with every BIND version. named.conf: http://pastebin.com/zwm1yP7V Can anybody help me, or suggest any good ideas?
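
    One avenue worth trying, since the DLZ MySQL driver has historically been documented as requiring a non-threaded BIND build: rebuild without threading. This is only a sketch, reusing the configure line above with --enable-threads swapped for --disable-threads, and whether it cures these particular segfaults would need testing.

        ./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var \
            --mandir=/usr/share/man --infodir=/usr/share/info \
            --disable-threads --enable-largefile --with-libtool --enable-shared --enable-static \
            --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld \
            --with-dlz-mysql=yes --with-dlz-bdb=no \
            --with-dlz-filesystem=yes --with-geoip=/usr
        make && sudo make install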

    Read the article

  • VNC as a Support Tool Over the Internet

    - by dosboy
    I'd like to set up an environment where I can use VNC to remotely support my clients over the internet. No VPNs involved. I've used the UltraVNC repeater in the past, but the problem is that it requires a dedicated Windows server. What I'd like to do is as follows: VNC Client (me) - NAT - Internet - NAT - VNC Server (the person I'm offering support to) I'd basically like the same functionality that the UltraVNC repeater offers, but the only internet environment I have to host something on is a Linux shared server (standard hosting - PHP, Apache, etc.). Requirements: Multiple platform support for both Client and Server - specifically Mac and Windows Allows for connection with multiple NATs involved (Client and Server side) Will allow me to use my existing hosting environment for any repeater that might be involved I believe the way this would work is that the Server (the person I'm offering support to) when online would connect to a listener on the internet. When they needed support I would connect my Client to the same listener, see them connected, and use the listener (man-in-the-middle) to piggyback my Client to connect to their Server. I'm open to using any software (not limiting myself to VNC) but would prefer a FOSS solution (which is why I'm leaning towards VNC). Any advice would be greatly appreciated.

    Read the article

  • Compile PHP 5.3.2 with intl extension on Snow Leopard 10.6.3

    - by fsb
    Does anyone have some tips on compiling PHP's intl extension on Snow Leopard? I'm getting compile errors each way I try it and I've been googling for ages and getting nowhere. Any help greatly appreciated. When make gets to the huge gcc command to compile libphp5.bundle, I get the following error: Undefined symbols: "___gxx_personality_v0", referenced from: icu_4_2::MessageFormatAdapter::getArgTypeList(icu_4_2::MessageFormat const&, int&)in msgformat_helpers.o _umsg_parse_helper in msgformat_helpers.o _umsg_format_arg_count in msgformat_helpers.o _umsg_format_helper in msgformat_helpers.o CIE in msgformat_helpers.o ld: symbol(s) not found collect2: ld returned 1 exit status make: *** [libs/libphp5.bundle] Error 1 My compile commands are: MACOSX_DEPLOYMENT_TARGET=10.6 CFLAGS="-arch x86_64 -g -Os -pipe -no-cpp-precomp" CCFLAGS="-arch x86_64 -g -Os -pipe" CXXFLAGS="-arch x86_64 -g -Os -pipe" LDFLAGS="-arch x86_64 -bind_at_load" export CFLAGS CXXFLAGS LDFLAGS CCFLAGS MACOSX_DEPLOYMENT_TARGET ./configure --prefix=/usr \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --sysconfdir=/private/etc \ --with-apxs2=/usr/sbin/apxs \ --enable-cli \ --with-config-file-path=/etc \ --with-libxml-dir=/usr \ --with-openssl=/usr \ --with-zlib=/usr \ --with-bz2=/usr \ --with-curl=/usr \ --with-gd \ --with-jpeg-dir=/src/jpeg/jpeg-local \ --with-png-dir=/usr/X11R6 \ --with-freetype-dir=/usr/X11R6 \ --with-xpm-dir=/usr/X11R6 \ --with-ldap=/usr \ --with-ldap-sasl=/usr \ --enable-mbstring \ --enable-mbregex \ --with-mysql=mysqlnd \ --with-mysqli=mysqlnd \ --with-pdo-mysql=mysqlnd \ --with-mysql-sock=/var/mysql/mysql.sock \ --with-iodbc=/usr \ --enable-shmop \ --with-snmp=/usr \ --enable-soap \ --enable-sockets \ --enable-sysvmsg \ --enable-sysvsem \ --enable-sysvshm \ --with-xmlrpc \ --with-iconv-dir=/usr \ --with-xsl=/usr \ --with-pcre-regex=/src/pcre/pcre-local/usr/local \ --with-pcre-dir=/src/pcre/pcre-local/usr/local \ --with-icu-dir=/usr/local \ --enable-intl export EXTRA_CFLAGS="-lresolv" make
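
    ___gxx_personality_v0 is a C++ runtime symbol, and ICU (which intl links against) is C++, so the failing link of libphp5.bundle is most likely missing the C++ standard library. A commonly reported workaround, sketched below, is to put -lstdc++ into LDFLAGS and reconfigure/rebuild; treat it as an experiment rather than a guaranteed fix.

        # Same environment as above, with -lstdc++ added to the link flags.
        LDFLAGS="-arch x86_64 -bind_at_load -lstdc++"
        export LDFLAGS
        # ...re-run the exact ./configure invocation shown above, then:
        make clean && make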

    Read the article

  • Tell VLC where to look for plugins.dat file

    - by puk
    I am trying to build vlc from source (I will include the installation script below), but when I try to run vlc I get the following error: main libvlc warning: cannot read /home/user/downloads/vlc3/vlc/src/.libs/vlc/plugins/plugins.dat (No such file or directory) Why is it even looking in that non-existent directory? The plugins.dat file is in /usr/lib/vlc/plugins/. I tried export VLC_PLUGIN_PATH=/usr/lib/vlc/plugins/ But it still looks in that non-existent path. I can create a symbolic link, but that is a terrible way to do it. If in 6 months I delete my downloads folder, all of a sudden my vlc will break. Here is the script I am running to install: ./configure --enable-rpi-omxil --enable-dvbpsi --enable-x264 --enable-xcb --with-x --enable-xvideo --enable-sdl --enable-avcodec --enable-avformat --enable-swscale --enable-mad --enable-a52 --enable-libmpeg2 --enable-dvdnav --enable-faad --enable-vorbis --enable-ogg --enable-theora --enable-mkv --enable-freetype --enable-fribidi --enable-speex --enable-flac --enable-live555 --enable-caca --enable-skins2 --enable-alsa --enable-ncurses --enable-debug --enable-lirc --enable-live555 --enable-shout --enable-taglib --enable-vcdx --enable-realrtsp --enable-svg --enable-dvdread --enable-dc1394 --enable-twolame --enable-dirac --enable-aa --enable-jack --enable-bluray --enable-opencv --enable-sftp --enable-pulse --enable-projectm --enable-vsxu --enable-atmo --enable-glspectrum '--with-extra-libs=/usr/local/lib' '--with-extra-includes=/usr/local/include' '--x-libraries=/usr/local/lib' '--x-includes=/usr/local/include' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' EDIT: I am using the following version: VLC media player 2.2.0-git Weatherwax (revision 2.1.0-git-1168-g5804dd1) And the --plugin-path option is no longer supported.
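
    A sketch, assuming the --prefix=/usr/local from the configure line above: the ./vlc built in the source tree is a wrapper that looks for an in-tree plugin cache, which appears to be what the warning refers to, so one approach is to install the build, generate plugins.dat for the installed plugin directory with vlc-cache-gen (packaged builds ship it under lib/vlc), and run the installed binary instead.

        make && sudo make install
        sudo /usr/local/lib/vlc/vlc-cache-gen /usr/local/lib/vlc/plugins   # paths assumed from --prefix
        /usr/local/bin/vlc --version                                       # run the installed binary, not ./vlc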

    Read the article

  • Checksum for Protecting Read-Only Documents

    - by Kim
    My father owns a small business and has to hand over several years' worth of financial documents to his insurance's auditor. He's asked me to go through and make sure everything is "read-only" so the data (the files) absolutely, positively cannot be modified or manipulated (he's a bit paranoid). We're talking about 20,000 documents (emails, spreadsheets, etc.). My first inclination was to place everything inside of one root folder ("mydadsdocs/") and then write a script that recursively traversed its directory subtree and set the file permissions to read-only. But then I got to thinking: that's a lot of work for me to do to satisfy an old man who is just being paranoid, and after all, if someone really wanted to modify a read-only file, it would be pretty easy to change file permissions anyway, so... Is there like a checksum I could run on the root folder, something that was very quick and easy, and that would basically "stamp" the data in that folder so if someone did change it, my father would have some way of knowing/proving it? If so, how? If not, any other recommendations that are quick, cheap (free) and effective?
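
    A cheap way to "stamp" the tree is a checksum manifest kept outside the folder (or on separate media): it will not stop changes, but re-running the verification later will name any file that was altered. The sketch below assumes a Unix-like machine with GNU coreutils; on OS X, shasum -a 256 plays the same role, and the paths are illustrative.

        cd /path/to/mydadsdocs
        find . -type f -exec sha256sum {} + > ~/mydadsdocs.sha256     # create the manifest once
        # Later, to prove nothing changed (prints FAILED and exits non-zero for any altered file):
        cd /path/to/mydadsdocs && sha256sum -c --quiet ~/mydadsdocs.sha256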

    Read the article

  • Advice needed: warm backup solution for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a warm backup server for a SQL Server Express instance running a single database? Sitting beside my production SQL Server 2008 Express box I have a second physical box currently doing nothing. I want to use this second box as a warm backup server by somehow replicating my production database in near real time (a little bit of data loss is acceptable). The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead. Although Express doesn't support log shipping natively, I am thinking that I could manually script a poor man's version of it, where I use batch files to take the logs and copy them across the network and apply them to the second server at 5 minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do? Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring as I think it is an overkill for this application. I understand that other DB platforms may present suitable options (eg. a MySQL Cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.

    Read the article

  • samba sync password with unix password on debian wheezy

    - by Oz123
    I installed samba on my server and I am trying to write a script to spare me the two steps to add a user, e.g.: adduser username smbpasswd -a username My smb.conf states: # This boolean parameter controls whether Samba attempts to sync the Unix # password with the SMB password when the encrypted SMB password in the # passdb is changed. unix password sync = yes Further reading brought me to the pdbedit man page, which states: -a This option is used to add a user into the database. This command needs a user name specified with the -u switch. When adding a new user, pdbedit will also ask for the password to be used. Example: pdbedit -a -u sorce new password: retype new password Note pdbedit does not call the unix password synchronisation script if unix password sync has been set. It only updates the data in the Samba user database. If you wish to add a user and synchronise the password that immediately, use smbpasswd’s -a option. So... now I decided to try adding a user with smbpasswd: 1st try, unix user still does not exist: root@raspberrypi:/home/pi# smbpasswd -a newuser New SMB password: Retype new SMB password: Failed to add entry for user newuser. 2nd try, unix user exists: root@raspberrypi:/home/pi# useradd mag root@raspberrypi:/home/pi# smbpasswd -a mag New SMB password: Retype new SMB password: Added user mag. # switch to user pi, and try to switch to mag root@raspberrypi:/home/pi# su pi pi@raspberrypi ~ $ su mag Password: su: Authentication failure So, now I am asking myself: how do I make samba passwords sync with unix passwords? Where are samba passwords stored? Can someone help enlighten me?
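
    A sketch of the two-step wrapper being described: create the Unix account with a password first (which is also why the su mag above fails, since the account was added with no Unix password), then feed the same password to smbpasswd non-interactively via its -s (read from stdin) switch. The script name and variables are illustrative, and it assumes it is run as root.

        #!/bin/sh
        # usage: add_smb_user.sh <username> <password>
        user="$1"; pass="$2"
        useradd -m "$user"                                            # Unix account
        echo "$user:$pass" | chpasswd                                 # Unix password, so su/login work
        printf '%s\n%s\n' "$pass" "$pass" | smbpasswd -a -s "$user"   # Samba account, password read from stdin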

    Read the article

  • Postfix postscreen: how to use postscreen for both smtp and smtps

    - by petermolnar
    I'm trying to get postscreen to work. I've followed the man page and it's already running correctly for smtp. But if I want to use it for smtps as well (adding the same line as smtp in master.cf but with smtps), I receive failure messages in syslog like: postfix/postscreen[8851]: fatal: btree:/var/lib/postfix/postscreen_cache: unable to get exclusive lock: Resource temporarily unavailable Some say that postscreen can only run once; that's ok. But can I use the same postscreen session for both smtp and smtps? If not, how do I enable postscreen for smtps as well? Any help would be appreciated! The parts of the configs: main.cf postscreen_access_list = permit_mynetworks, cidr:/etc/postfix/postscreen_access.cidr postscreen_dnsbl_threshold = 8 postscreen_dnsbl_sites = dnsbl.ahbl.org*3 dnsbl.njabl.org*3 dnsbl.sorbs.net*3 pbl.spamhaus.org*3 cbl.abuseat.org*3 bl.spamcannibal.org*3 nsbl.inps.de*3 spamrbl.imp.ch*3 postscreen_dnsbl_action = enforce postscreen_greet_action = enforce master.cf (full) smtpd pass - - n - - smtpd smtp inet n - n - 1 postscreen tlsproxy unix - - n - 0 tlsproxy dnsblog unix - - n - 0 dnsblog ### the problematic line ### smtps inet n - - - - smtpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp relay unix - - - - - smtp showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache dovecot unix - n n - - pipe flags=DRhu user=virtuser:virtuser argv=/usr/bin/spamc -e /usr/lib/dovecot/deliver -d ${recipient} -f {sender}
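
    postscreen is designed to front the public port 25 only and cannot share its cache with a second instance, so the usual layout is to let postscreen guard port 25 and hand smtps straight to smtpd. A hedged master.cf sketch follows; the -o overrides are illustrative of a typical smtps setup, not taken from this configuration.

        # postscreen fronts the public port 25 only:
        smtp      inet  n       -       n       -       1       postscreen
        smtpd     pass  -       -       n       -       -       smtpd
        # smtps goes straight to smtpd in TLS wrapper mode (options illustrative):
        smtps     inet  n       -       -       -       -       smtpd
          -o smtpd_tls_wrappermode=yes
          -o smtpd_sasl_auth_enable=yes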

    Read the article

  • What's the difference between Host and HostName in SSH Config?

    - by Bill Jobs
    The man page says this: Host Host Restricts the following declarations (up to the next Host keyword) to be only for those hosts that match one of the patterns given after the keyword. If more than one pattern is provided, they should be separated by whitespace. A single `*' as a pattern can be used to provide global defaults for all hosts. The host is the hostname argument given on the command line (i.e. the name is not converted to a canonicalized host name before matching). A pattern entry may be negated by prefixing it with an exclamation mark (`!'). If a negated entry is matched, then the Host entry is ignored, regardless of whether any other patterns on the line match. Negated matches are therefore useful to provide exceptions for wildcard matches. See PATTERNS for more information on patterns. HostName HostName Specifies the real host name to log into. This can be used to specify nicknames or abbreviations for hosts. If the hostname contains the character sequence `%h', then this will be replaced with the host name specified on the command line (this is useful for manipulating unqualified names). The default is the name given on the command line. Numeric IP addresses are also permitted (both on the command line and in HostName specifications). For example, when I want to create an SSH Config for GitHub, what should Host and HostName be respectively?
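
    A minimal ~/.ssh/config sketch for the GitHub case: Host is the alias you type on the command line, HostName is the real machine it expands to. The alias and key path here are illustrative.

        Host github
            HostName github.com
            User git
            # key path below is illustrative
            IdentityFile ~/.ssh/id_rsa_github

    With that entry, ssh github (or a remote URL of github:user/repo.git) connects to github.com as the git user.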

    Read the article

  • Find out the type of an automounted device

    - by Steve Bennett
    I'm working on a system (Ubuntu Precise) with a mount defined in /etc/fstab as follows: /dev/vdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2 Originally I just wanted to find out if it's NFS (due to potential MySQL locking issues). Judging from man mount, it's not: If no -t option is given, or if the auto type is specified, mount will try to guess the desired type. Mount uses the blkid library for guessing the filesystem type; if that does not turn up anything that looks familiar, mount will try to read the file /etc/filesystems, or, if that does not exist, /proc/filesystems. All of the filesystem types listed there will be tried, except for those that are labeled "nodev" (e.g., devpts, proc and nfs). If /etc/filesystems ends in a line with a single * only, mount will read /proc/filesystems afterwards. But, out of curiosity now, how can I find out more about what type of device it actually is? (For context, this is a VM running on OpenStack. The device is a 60 GB allocation mounted from somewhere - but I don't know how.) EDIT Including answers here: $ mount /dev/vdb on /mnt type ext3 (rw,_netdev) $ df -T /dev/vdb ext3 61927420 2936068 55845624 5% /mnt
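
    Beyond mount and df -T, the same detection layer mount itself uses can be asked directly; a quick sketch:

        sudo blkid /dev/vdb      # prints TYPE="ext3" plus UUID/label, via the same libblkid mount uses
        lsblk -f /dev/vdb        # filesystem type, label and mountpoint in one view
        sudo file -s /dev/vdb    # reads the superblock directly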

    Read the article

  • 554 5.7.1 <mail_addr>: Relay access denied centos postfix

    - by Relicset
    I have a problem sending mail from Postfix on CentOS. I have set up a Postfix mail server for sending mail, but I am getting an error. As in the link, I tried the following commands: telnet localhost smtp Trying ::1... Connected to localhost. Escape character is '^]'. 220 mydomain.com ESMTP Postfix ehlo localhost 250-mydomain.com 250-PIPELINING 250-SIZE 10240000 250-VRFY 250-ETRN 250-ENHANCEDSTATUSCODES 250-8BITMIME 250 DSN mail from:<domain.com> 250 2.1.0 Ok rcpt to:<[email protected]> 554 5.7.1 <[email protected]>: Relay access denied Edit-1 In the terminal this works: echo TEST | mail -v -s "Test mail" [email protected] My postconf -n shows the information below: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 home_mailbox = Maildir/ html_directory = no inet_interfaces = localhost inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = dummy.com myhostname = dummy.com mynetworks = all mynetworks_style = host myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550 What configuration do I have to perform to send mail from my server?
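
    One thing that stands out above is mynetworks = all, which is not a valid network list, so permit_mynetworks may never match and relaying is refused even from localhost. A hedged sketch, assuming the goal is to relay only from the box itself:

        sudo postconf -e 'mynetworks = 127.0.0.0/8 [::1]/128'   # explicit list; mynetworks_style is then ignored
        sudo postfix reload
        # then repeat the telnet test above from localhost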

    Read the article

  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to setup a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error: * Starting domain name service... bind9 [fail] So I checked the output of syslog and this is what I get. May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS=' May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576 May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf' May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include' May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error) So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here is the contents of named.conf: // This is the primary configuration file for the BIND DNS server named. // // Please read /usr/share/doc/bind9/README.Debian.gz for information on the // structure of BIND configuration files in Debian, *BEFORE* you customize // this configuration file. // // If you are just adding zones, please do that in /etc/bind/named.conf.local include "/etc/bind/named.conf.options"; include "/etc/bind/named.conf.local"; include "/etc/bind/named.conf.default-zones";
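
    Rather than eyeballing the file, named-checkconf will point at the offending file and line (following the includes), and cat -A will expose any invisible characters around the line syslog complains about; a quick sketch:

        sudo named-checkconf /etc/bind/named.conf        # parse the config, following includes
        sudo named-checkconf -z /etc/bind/named.conf     # also load the configured zones
        sed -n '1,12p' /etc/bind/named.conf | cat -A     # show lines 1-12 with non-printing characters visible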

    Read the article

  • Templated Razor Delegates – Phil Haack

    - by nmarun
    This post is largely based off of Phil Haack’s article titled Templated Razor Delegates. I strongly recommend reading this article first. Here’s a sample code for the same, so you can have a look at. I also have a custom type being rendered as a table. 1: // my custom type 2: public class Device 3: { 4: public int Id { get; set; } 5: public string Name { get; set; } 6: public DateTime MfgDate { get; set; } 7: } Now I can write an extension method just for this type. 1: public static class RazorExtensions 2: { 3: public static HelperResult List(this IList<Models.Device> devices, Func<Models.Device, HelperResult> template) 4: { 5: return new HelperResult(writer => 6: { 7: foreach (var device in devices) 8: { 9: template(device).WriteTo(writer); 10: } 11: }); 12: } 13: // ... 14: } Modified my view to make it a strongly typed one and included html to render my custom type collection in a table. 1: @using TemplatedRazorDelegates 2: @model System.Collections.Generic.IList<TemplatedRazorDelegates.Models.Device> 3:  4: @{ 5: ViewBag.Title = "Home Page"; 6: } 7:  8: <h2>@ViewBag.Message</h2> 9:  10: @{ 11: var items = new[] { "one", "two", "three" }; 12: IList<int> ints = new List<int> { 1, 2, 3 }; 13: } 14:  15: <ul> 16: @items.List(@<li>@item</li>) 17: </ul> 18: <ul> 19: @ints.List(@<li>@item</li>) 20: </ul> 21:  22: <table> 23: <tr><th>Id</th><th>Name</th><th>Mfg Date</th></tr> 24: @Model.List(@<tr><td>@item.Id</td><td>@item.Name</td><td>@item.MfgDate.ToShortDateString()</td></tr>) 25: </table> We get intellisense as well! Just added some items in the action method of the controller: 1: public ActionResult Index() 2: { 3: ViewBag.Message = "Welcome to ASP.NET MVC!"; 4: IList<Device> devices = new List<Device> 5: { 6: new Device {Id = 1, Name = "abc", MfgDate = new DateTime(2001, 10, 19)}, 7: new Device {Id = 2, Name = "def", MfgDate = new DateTime(2011, 1, 1)}, 8: new Device {Id = 3, Name = "ghi", MfgDate = new DateTime(2003, 3, 15)}, 9: new Device {Id = 4, Name = "jkl", MfgDate = new DateTime(2007, 6, 6)} 10: }; 11: return View(devices); 12: } Running this I get the output as: Absolutely brilliant! Thanks to both Phil Haack and to David Fowler for bringing this out to us. Download the code for this from here. Verdict: RazorViewEngine.Points += 1;

    Read the article

  • What's up with stat on Mac OS X/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, Give the mount point of a path, one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS X 10.4. For my system, df and mount give: cas cas$ df Filesystem 512-blocks Used Avail Capacity Mounted on /dev/disk0s3 58342896 49924456 7906440 86% / devfs 194 194 0 100% /dev fdesc 2 2 0 100% /dev <volfs> 1024 1024 0 100% /.vol automount -nsl [166] 0 0 0 100% /Network automount -fstab [170] 0 0 0 100% /automount/Servers automount -static [170] 0 0 0 100% /automount/static /dev/disk2s1 163577856 23225520 140352336 14% /Volumes/Snapshot /dev/disk2s2 409404102 5745938 383187960 1% /Volumes/Sparse cas cas$ mount /dev/disk0s3 on / (local, journaled) devfs on /dev (local) fdesc on /dev (union) <volfs> on /.vol automount -nsl [166] on /Network (automounted) automount -fstab [170] on /automount/Servers (automounted) automount -static [170] on /automount/static (automounted) /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled) /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid) Trying to get the devices from the mount points, though: cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done / disk0s3r /dev ???r /dev ???r /.vol ???r /Network ???r /automount/Servers ???r /automount/static ???r /Volumes/Snapshot disk2s1r /Volumes/Sparse disk2s2r Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name: Cf. stat(1) man page: The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name. What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...

    Read the article

  • Unable to get prosody running on Ubuntu 10.04 (lua issues)

    - by user90374
    All this is performed on Ubuntu 10.04.4 LTS Server. I installed Lua 5.1.4 following this procedure - http://ubuntuforums.org/showthread.php?t=1874860 I installed prosody with this command (after downloading the package) - sudo dpkg -i prosody_0.8.2-1_i386.deb After installation, I get the error below. I have tried using luarocks and sudo apt-get install as suggested to fix these dependencies, but it still keeps showing me the same errors. Selecting previously deselected package prosody. (Reading database ... 59416 files and directories currently installed.) Unpacking prosody (from prosody_0.8.2-1_i386.deb) ... Setting up prosody (0.8.2-1) ... * Starting Prosody XMPP Server prosody ************** Prosody was unable to find luaexpat This package can be obtained in the following ways: Source: www[dot]keplerproject[dot]org/luaexpat/ Debian/Ubuntu: sudo apt-get install liblua5.1-expat0 luarocks: luarocks install luaexpat luaexpat is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody[dot]im/doc/depends ************ Prosody was unable to find luasocket This package can be obtained in the following ways: Source: www[dot]tecgraf[dot]puc-rio[dot]br/~diego/professional/luasocket/ Debian/Ubuntu: sudo apt-get install liblua5.1-socket2 luarocks: luarocks install luasocket luasocket is required for Prosody to run, so we will now exit. More help can be found on our website, at prosody[dot]im/doc/depends ************ Prosody was unable to find LuaSec This package can be obtained in the following ways: Source: www[dot]inf[dot]puc-rio[dot]br/~brunoos/luasec/ Debian/Ubuntu: prosody[dot]im/download/start#debian_and_ubuntu luarocks: luarocks install luasec SSL/TLS support will not be available More help can be found on our website, at prosody[dot]im/doc/depends [fail] invoke-rc.d: initscript prosody, action "start" failed. dpkg: error processing prosody (--install): subprocess installed post-installation script returned error exit status 1 Processing triggers for man-db ... Processing triggers for ureadahead ... Errors were encountered while processing: prosody Thanks a lot for your patience and answers.
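
    The error text itself names the missing Lua modules and, for two of them, the Ubuntu packages that provide them; a sketch of installing those and then letting dpkg finish the half-configured prosody package:

        sudo apt-get install liblua5.1-expat0 liblua5.1-socket2   # luaexpat + luasocket, as the message suggests
        sudo luarocks install luasec                              # optional, for TLS support
        sudo dpkg --configure -a                                  # re-run the failed prosody post-install step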

    Read the article
