Search Results

Search found 4846 results on 194 pages for 'rpg master'.

Page 120/194 | < Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file: mail_location = maildir:/home/%d/%n/Maildir auth default { mechanisms = plain login userdb passwd-file { args = /home/%d/etc/passwd } passdb passwd-file { args = /home/%d/etc/shadow } socket listen { master { path = /var/run/dovecot/auth-worker mode = 0600 } } } I faced one issue I can't resolve myself. Is there any way to create a user-to-domain mapping and use that username in mail_location? Examples: 1. Currently I have /home/domain.com/user/Maildir. 2. I'd like to have /home/USER/domain.com/user/Maildir. Can I achieve this somehow? Greets, Stojko
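
    A possible direction (a minimal sketch, not tested against this exact setup): Dovecot expands %h (home), %d (domain) and %n (user part) in mail_location, and a per-user location can usually be returned from the userdb as an extra "mail" field. Assuming your Dovecot version's passwd-file rows accept extra fields, something like the following might give the /home/USER/domain.com/user/Maildir layout; the home directory and field values shown are illustrative:

      # dovecot.conf - let the userdb's home directory drive the path
      mail_location = maildir:%h/%d/%n/Maildir
      # /home/domain.com/etc/passwd - hypothetical entry with a per-user override
      user:{PLAIN}secret:1000:1000::/home/USER::mail=maildir:/home/USER/domain.com/user/Maildir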

    Read the article

  • Considering building a new system; but undecided about chassis

    - by J.C. Bengtson
    I'm considering a new system build, and was thinking that this particular motherboard has features I need and like: http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=667651&pagenumber=1&RSort=1&csid=ITD&recordsPerPage=5&body=REVIEWS#CustomerReviewsBlock ... but am unsure which model chassis to pair it with. I'd strongly prefer something from Cooler Master, as I'm a fan of their products, but am having a hard time deciding, and also don't want to get into some odd situation where the board doesn't properly fit. I plan on having two optical drives (5.25"), two internal HDs (3.5"), and will likely go with an SLI setup of 2 or possibly even 3 cards, so I'd need a chassis that is roomy enough to accommodate all of that, as well as the motherboard itself. Based on the stock available at that same site, do you all have any suggestions? The larger, the better, as I hate having components crammed together. Your help is most appreciated!

    Read the article

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd I realized that I can download all files via sshfs at full speed, while downloads via the web are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving - with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Does anybody have an idea what I can check?
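
    For anyone reproducing the comparison, a quick way to time both paths (assuming curl and scp are available on the client; the scp path is hypothetical since only the HTTP URL is given above):

      curl -o /dev/null -w 'http: %{speed_download} bytes/s\n' http://master.dealux.de/file.tgz
      scp user@master.dealux.de:/var/www/file.tgz /dev/null   # approximates the ssh/sshfs path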

    Read the article

  • How to use Binary Log file for Auditing and Replicating in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track changes in a DB using the binary log so that I can replicate those changes to another DB. Please do not give me hyperlinks to the MySQL website; please direct me to the solution. EDIT: I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, so I have to move on to the binary log approach. Now I am stuck getting started, as I don't understand how to make the server a master/slave, so can anybody guide me on how to actually initiate it via PHP?
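
    A minimal sketch of the first step (assuming MySQL 5.x; file names and the database name are illustrative): enable the binary log on the server, then read it back with mysqlbinlog, which is also how the recorded changes can be inspected for auditing before any master/slave setup is attempted:

      # my.cnf on the server acting as master
      [mysqld]
      server-id    = 1
      log-bin      = mysql-bin
      binlog-do-db = joomla_db   # hypothetical database name

      # inspect the recorded changes (by hand, or e.g. from PHP via shell_exec)
      mysqlbinlog /var/lib/mysql/mysql-bin.000001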

    Read the article

  • How to have supervisord follow the new unicorn process after a USR2 rolling restart?

    - by ybart
    I have configured supervisord to track my unicorn server process. When I send the USR2 signal, unicorn performs a rolling restart. After this operation the unicorn master has re-executed itself and its PID has changed. This causes supervisord to lose track of the unicorn process and consider it EXITED. How can I get supervisord to follow the new unicorn process after this operation? Unicorn has a PID file available, but I have not found an option for this in the supervisord configuration. Another option would be to have supervisord send the USR2 signal itself, but I don't know how to do this or whether it would prevent my problem from occurring.
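
    If zero-downtime re-exec via USR2 is not strictly required, one common workaround (a sketch only; paths and app name are assumptions) is to run unicorn in the foreground, without -D, so the PID supervisord tracks never changes, and let supervisord itself perform restarts:

      ; supervisord program section
      [program:unicorn]
      command=/usr/local/bin/unicorn -c /srv/app/config/unicorn.rb -E production
      directory=/srv/app
      autostart=true
      autorestart=true
      stopsignal=QUIT   ; unicorn's graceful-stop signal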

    Read the article

  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew so I started down that route. I was able to install PHP 5.3.6 with this command: brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql I think this is supposed to install PHP, Pear, and PECL. It seems to install these just fine. Now when I try to install APC: $ pecl install apc downloading APC-3.1.9.tgz ... Starting to download APC-3.1.9.tgz (155,540 bytes) .................................done: 155,540 bytes Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305 Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305 Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305 How can I fix this?
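
    The error suggests PEAR's Archive_Tar package is missing or not on the include_path shown in the message. A hedged first attempt (it may trip over the same missing file, in which case comparing the two paths below is the next step):

      pear install Archive_Tar
      # where pear thinks its packages live:
      pear config-get php_dir
      # compare with PHP's include_path:
      php -i | grep include_path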

    Read the article

  • Samba - Is my server vulnerable to CVE-2008-1105?

    - by Joao Heleno
    Hi! I have a CentOS server that is running Samba and I want to verify the vulnerability addressed by CVE-2008-1105. What scenarios can I build in order to run the exploit that is mentioned in http://secunia.com/advisories/cve_reference/CVE-2008-1105/? http://secunia.com/secunia_research/2008-20/advisory/ says that "Successful exploitation allows execution of arbitrary code by tricking a user into connecting to a malicious server (e.g. by clicking an "smb://" link) or by sending specially crafted packets to an "nmbd" server configured as a local or domain master browser." More info: http://www.samba.org/samba/security/CVE-2008-1105.html http://secunia.com/secunia_research/2008-20/advisory/
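
    Before building an exploit scenario, a simpler first check (assuming the stock CentOS samba package) is to compare the installed version against the affected range in the samba.org advisory linked above, and to see whether the distro backported the fix:

      smbd -V
      rpm -q --changelog samba | grep -i CVE-2008-1105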

    Read the article

  • MySQL replication ignore data changes but not table structure changes

    - by Ed Manet
    Is there a way to setup MySQL replication so that CREATE TABLE and ALTER TABLE statements get replicated but INSERT, DELETE and UPDATE statements do not? I've got replication working fine and have several tables that are ignored as per the requirements. But we have a requirement that the slaves have an empty copy of the ignored table. We create those empty copies before we start replicating. Since the table is ignored, table structure changes don't get passed down from the master to the slave's empty copy. I know it's a strange requirement.
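
    For context, the slave-side ignore rules in question look like the following (my.cnf on the slave; names are illustrative); as the question notes, statements touching an ignored table, including ALTER TABLE, are skipped along with the data changes:

      [mysqld]
      replicate-ignore-table      = mydb.big_log_table
      replicate-wild-ignore-table = mydb.tmp%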

    Read the article

  • Fabric doesn't launch Nginx remotely

    - by endofu
    I want to be able to start and stop an nginx server on an Ubuntu EC2 instance with Fabric. I have these two scripts in my fabfile.py: def start_nginx(): sudo('/etc/init.d/nginx start') #also tried this: run('sudo /etc/init.d/nginx start') def stop_nginx(): sudo('/etc/init.d/nginx stop') start_nginx() seemingly runs without errors (* Starting Nginx Server.../ ...done.) but doesn't start the server (or it dies immediately). If I SSH into the instance, this starts nginx perfectly: sudo /etc/init.d/nginx start The stop_nginx() Fabric script stops the server remotely. I compiled nginx from source, using http://nginx.org/download/nginx-1.1.9.tar.gz and this script in /etc/init.d: https://github.com/JasonGiedymin/nginx-init-ubuntu/blob/master/nginx. The only thing I modified is this line: DAEMON=/usr/local/sbin/nginx to DAEMON=/usr/sbin/nginx, because that's the path I used when I ran ./configure for my build. Does anyone have any idea why the init script behaves differently when called from Fabric?
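
    One hedged guess, since the init script works fine over a normal SSH session: Fabric allocates a pty by default, and tearing it down when the task returns can kill background processes the command started. Disabling the pty (or fully detaching) is worth trying; this is a sketch, not a confirmed fix:

      def start_nginx():
          sudo('/etc/init.d/nginx start', pty=False)
          # or, more forcefully (redirection is illustrative):
          # sudo('nohup /etc/init.d/nginx start >/dev/null 2>&1')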

    Read the article

  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: The machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2 year old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a DFI CM33-TL/G ATX (identified using SiSoft Sandra) motherboard hosting an Intel Celeron 1.3GHz CPU, 768Mb PC133 SDRAM, a D-LINK WDA-2320 54G Wi-Fi network card and a generic USB 2.0 expansion board based on the NEC uPD720102 chipset containing 3 external and 1 internal USB sockets. It's also hosting a 1.44Mb floppy drive on FDD0, a new 80Gb Western Digital hard drive running as master on IDE0 and a Panasonic DVD+/-RW running as master on IDE1. All this is sitting in a slimline case running off a Macron Power MPT-135 135W Flex power supply. The motherboard is running a version of Award BIOS 05/24/2002-601T-686B-6A6LID4AC-00. Could this be updated? If so, from where? I've raked through the manufacturer's website but can't find any hint of downloads for either drivers or BIOS updates. The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of ELO-Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.
    The problem: If I hook any devices to the USB 2.0 sockets, it hangs the whole machine and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover.
    Workarounds found: I can plug the same devices into the on board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected.
    Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem [after all, the motherboard has the same standard connector regardless of the PSU wattage], I tried disabling all the on board devices that I'm not using - on board LAN, the second COM port, the AGP connector etc. - through the BIOS in what I'm sure is a futile attempt to reduce the power consumption... I also modified the ACPI and power management settings. It didn't have any noticeable effect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors, is that right?
    I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the wi-fi and USB cards in the slots... although I'm not sure if this will make any difference - does anyone have any experience that would suggest this might work?
    Other thoughts/questions: Perhaps this is an incompatibility between the USB 2.0 card and the BIOS; would re-flashing the BIOS with a newer version help? Do I need to be able to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will any version of Award BIOS function in its place?
    Question: Does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • Reverse lookup SERVFAIL

    - by Quan Tran
    I just set up a DNS server and a web server using Virtualbox. The IP address of the DNS server is 192.168.56.101 and the web server 192.168.56.102. Here are my configuration files for the DNS server: named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; //query-source address * port 53; //forward first; forwarders { 8.8.8.8; 8.8.4.4; }; listen-on port 53 { 127.0.0.1; 192.168.56.0/24; }; allow-query { localhost; 192.168.56.0/24; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity debug 10; print-category yes; print-time yes; print-severity yes; }; }; zone "quantran.com" in { type master; file "named.quantran.com"; }; zone "56.168.192.in-addr.arpa" in { type master; file "named.192.168.56"; allow-update { none; }; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; named.quantran.com: $TTL 86400 quantran.com. IN SOA dns1.quantran.com. root.quantran.com. ( 100 ; serial 3600 ; refresh 600 ; retry 604800 ; expire 86400 ) IN NS dns1.quantran.com. dns1.quantran.com. IN A 192.168.56.101 www.quantran.com. IN A 192.168.56.102 named.192.168.56: $TTL 86400 $ORIGIN 56.168.192.in-addr.arpa. @ IN SOA dns1.quantran.com. root.quantran.com. ( 100 ; serial 3600 ; refresh 600 ; retry 604800 ; expire 86400 ) ; minimum IN NS dns1.quantran.com. 101.56.168.192.in-addr.arpa. IN PTR dns1.quantran.com. 102 IN PTR www.quantran.com. When I try a normal lookup from the host (I configured so that the only nameserver the host uses is the DNS server 192.168.56.101): quan@quantran:~$ host www.quantran.com www.quantran.com has address 192.168.56.102 quan@quantran:~$ host dns1.quantran.com dns1.quantran.com has address 192.168.56.101 But when I try a reverse lookup: quan@quantran:~$ host -v 192.168.56.101 192.168.56.101 Trying "101.56.168.192.in-addr.arpa" Using domain server: Name: 192.168.56.101 Address: 192.168.56.101#53 Aliases: Host 101.56.168.192.in-addr.arpa not found: 2(SERVFAIL) Received 45 bytes from 192.168.56.101#53 in 0 ms quan@quantran:~$ host -v 192.168.56.102 192.168.56.101 Trying "102.56.168.192.in-addr.arpa" Using domain server: Name: 192.168.56.101 Address: 192.168.56.101#53 Aliases: Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL) Received 45 bytes from 192.168.56.101#53 in 0 ms So why can't I perform a reverse lookup? Anything wrong with the zone configuration files? 
Thanks in advance :) Oh, here is the output from the log file /var/named/data/named.run when I perform the reverse lookup: quan@quantran:~$ host 192.168.56.102 192.168.56.101 Using domain server: Name: 192.168.56.101 Address: 192.168.56.101#53 Aliases: Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL) /var/named/data/named.run: 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: UDP request 02-Jun-2014 15:18:11.950 client: debug 5: client 192.168.56.1#51786: using view '_default' 02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: request is not signed 02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: recursion available 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: query 02-Jun-2014 15:18:11.950 client: debug 10: client 192.168.56.1#51786: ns_client_attach: ref = 1 02-Jun-2014 15:18:11.950 query-errors: debug 1: client 192.168.56.1#51786: query failed (SERVFAIL) for 102.56.168.192.in-addr.arpa/IN/PTR at query.c:5428 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: error 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: send 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: sendto 02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: senddone 02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: next 02-Jun-2014 15:18:11.951 client: debug 10: client 192.168.56.1#51786: ns_client_detach: ref = 0 02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: endrequest 02-Jun-2014 15:18:11.951 client: debug 3: client @0xb537e008: udprecv Also, I made some changes to the log section in named.conf.
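
    A hedged first check: a SERVFAIL on a zone this server is master for often means the zone failed to load, which the BIND checking tools will confirm (paths taken from the configuration above):

      named-checkconf /etc/named.conf
      named-checkzone 56.168.192.in-addr.arpa /var/named/named.192.168.56
      # also look for "zone 56.168.192.in-addr.arpa ... loaded" or load errors at
      # startup, e.g. in /var/log/messages or the named.run file configured above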

    Read the article

  • How to keep groups when pulling with git

    - by mimrock
    I have a staging site that is a working directory of a git repository. How do I set up git to let a developer pull a branch or release without changing the group of the modified files? An example: let's say I have two developers, robin and david. They are both in the git-users group, so initially they both have write permissions on site.php. -rw-rw-r-- 1 robin git-users 46068 Nov 16 12:12 site.php drwxrwxr-x 8 robin git-users 4096 Nov 16 14:11 .git After robin-server1$ git pull origin master: -rw-rw-r-- 1 robin robin 46068 Nov 16 12:35 site.php drwxrwxr-x 8 robin git-users 4096 Nov 16 14:11 .git Now david does not have write permissions on site.php, because the group changed from 'git-users' to 'robin'. From now on, david will get a permission denied error when he tries to pull in this repository.
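
    A sketch of the usual fix (assuming git-users should own everything; the path is illustrative): mark the repository as group-shared and set the setgid bit on directories so files created during a pull inherit the git-users group instead of the pulling user's primary group:

      cd /path/to/staging/site
      git config core.sharedRepository group
      chgrp -R git-users .
      chmod -R g+w .
      find . -type d -exec chmod g+s {} \;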

    Read the article

  • Merging multiple versions of same excel spreadsheet

    - by GrinReaper
    So here's the situation: I have multiple versions of the same spreadsheet - each one has the exact same row and column labels. The difference between any two given spreadsheets is that data in one spreadsheet shouldn't be in the other (but sometimes it might be). Is there any way to merge all of them into a "master copy" (or just a blank version) of the spreadsheet? (Basically, using the data from various versions of that worksheet to fill out the main one.) Copy-pasting is extremely tedious, and doesn't allow me to copy blocks of rows IF the row numbering is non-contiguous. (For example, Rows 1, 2, 3, 6 are in a block, but rows 4 and 5 just don't exist.) Ideas? Googling hasn't turned up anything that seemed directly relevant to this problem.
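
    One possible approach without add-ins (a sketch assuming the versions are sheets named V1, V2 and V3 in one workbook with identical layouts): build the master sheet from formulas that take the first non-blank value for each cell, then paste-special as values. For example, in master!B2:

      =IF(V1!B2<>"",V1!B2,IF(V2!B2<>"",V2!B2,V3!B2))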

    Read the article

  • How can I connect to my ACT database to export data?

    - by Adam Gessel
    I am trying to export data from an MSSQL server that ACT uses. It is ACT 2005. I have tried tons of different things: starting the MSSQL server in single-user mode (I still can't log in), copying the mdf files from it and putting them on another server (it complains about having the same name as another database for master.mdf and almost every other file), and putting Administrator in the group that the MSSQL instance runs under - and nothing seems to work! Can anybody with experience with this help me out? Thanks!
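
    If a copy of the data files can be attached to another SQL Server instance you control, one hedged route is to attach only the ACT database's .mdf/.ldf pair (not the copied system databases) under a new name, so nothing collides with that instance's own master; file names and paths below are illustrative:

      CREATE DATABASE ACTCopy
      ON (FILENAME = 'C:\Data\ACT2005copy.mdf'),
         (FILENAME = 'C:\Data\ACT2005copy_log.ldf')
      FOR ATTACH;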

    Read the article

  • Cloning Windows 7 installation from MBR to GPT drive and making it bootable

    - by Nelluk
    I've seen threads on similar topics - such as this one - but the answers never seem to solve how to make it bootable. I have Win 7 64-bit on a PC installed on a 2tb MBR volume. The motherboard is UEFI compatible. I just installed a secondary internal 3TB drive which will be partitioned as GPT. Is there a relatively easy way to clone my installation over to the new drive and have that drive be bootable? I have used EaseUS Partition Master to clone the C volume to the D volume, but that would not boot and I assume the issue is that one is MBR and one is GPT. Is there a process to do this?
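
    One caveat worth planning for (a sketch, not a verified procedure): a GPT system disk boots through UEFI, so after cloning, the clone usually needs an EFI System Partition and freshly written boot files before it will start. From a Windows or WinRE command prompt, something like the following is the usual step; drive letters are illustrative, and the /f switch only exists in newer bcdboot versions (older ones pick the firmware type automatically):

      bcdboot D:\Windows /s S: /f UEFI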

    Read the article

  • Simple, manageable DNS on EC2?

    - by dkulchenko
    I'm working on a large network of servers sitting on EC2, and need a way for the servers to know about each other's locations in the cloud. I thought the simplest way would be to use DNS, because if I replace an EC2 instance, I simply update the DNS record and the rest of the servers will know about it (with names like users.db.mysoft.com, routing.mysoft.com, cluster1.memcached.mysoft.com). I'm considering setting up a master DNS server on a micro/small instance to accommodate this. Ideally I'd like something that's as simple as a key-value store (hostname - IP) into which the platform could remotely add/remove entries. Can I do this with BIND, or is there a better solution?
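
    The "remotely add/remove entries" part maps fairly naturally onto DNS dynamic updates if BIND is used for the master. A minimal sketch (zone, names and key file are assumptions):

      nsupdate -k /etc/bind/ddns.key <<EOF
      server ns.mysoft.com
      zone mysoft.com
      update delete users.db.mysoft.com. A
      update add users.db.mysoft.com. 60 A 10.0.1.23
      send
      EOF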

    Read the article

  • LDAP (slapd) creating users with access to specific trees

    - by Josh
    I am setting up a CentOS server with Virtualmin and Postfix, and I am trying to use LDAP to store unix users, groups, Postfix aliases and virtual domains. I am following the instructions from Webmin's site. I have created an LDAP domain and configured Postfix to fetch Aliases and Virtual Domains from LDAP, but in order to do so I had to configure postfix to authenticate with the master LDAP account, cn=Manager,dc=mydomain,dc=com. This seems like a terrible idea because that account has access to the Users and Groups, which postfix does not need access to. How can I create a new LDAP account for Postfix which only has access to the LDAP trees Postfix needs?
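
    A sketch of the usual shape of the answer (slapd.conf syntax; the postfix DN and subtree names are assumptions about the layout): create a dedicated bind DN for Postfix and add ACLs that give it read access to the mail trees only, keeping cn=Manager out of Postfix's configuration entirely. The remaining by-clauses would need to be adjusted for whatever else must read these trees:

      # read-only access for Postfix to the alias/domain data it needs
      access to dn.subtree="ou=aliases,dc=mydomain,dc=com"
          by dn.exact="cn=postfix,dc=mydomain,dc=com" read
          by * none
      # no access for Postfix to user/group entries
      access to dn.subtree="ou=people,dc=mydomain,dc=com"
          by dn.exact="cn=postfix,dc=mydomain,dc=com" none
          by self write
          by * none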

    Read the article

  • mail server administration

    - by kibs
    My Postfix does not show that it is listening for the SMTP daemon; I am getting the message below: The message WAS NOT relayed Reporting-MTA: dns; mail.mak.ac.ug Received-From-MTA: smtp; mail.mak.ac.ug ([127.0.0.1]) Arrival-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT) Original-Recipient: rfc822;[email protected] Final-Recipient: rfc822;[email protected] Action: failed Status: 5.4.0 Remote-MTA: dns; 127.0.0.1 Diagnostic-Code: smtp; 554 5.4.0 Error: too many hops Last-Attempt-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT) Final-Log-ID: 23434-08/A38QHg8z+0r7 undeliverable mail MTA BLOCKED Output from the lsof -i tcp:25 command: master 3014 root 12u IPv4 9429 TCP *:smtp (LISTEN) (Postfix as a user is missing)
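
    "Error: too many hops" usually means a mail loop, and the Final-Log-ID above looks like an amavisd-new style ID, so one hedged place to check is the content-filter re-injection path: the listener Postfix uses to accept mail back from the filter must not run the filter again. In master.cf that typically looks like this (values illustrative):

      127.0.0.1:10025 inet n - n - - smtpd
        -o content_filter=
        -o mynetworks=127.0.0.0/8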

    Read the article

  • How do I set up unison to sync a folder one way

    - by Rob
    I have a 1TB NAS that has a 1TB USB external hard drive attached. I have prepared the file system on the USB disk and mounted it. I want to 100% sync my data from my NAS to the USB disk - but I want it to be incremental and only have the NAS as the 'master' - e.g. if a file changes on the USB external hard drive, I want it to ignore this change as it's not the live version (not that I think the files will change on the USB disk, but I'm paranoid the live copy could get overwritten). Also, if a file gets deleted on the live side, I want to retain the deleted file on the USB disk. Can unison sync one-way and achieve the above for me? If so, would simply running unison source/ target/ work? Thanks Rob
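
    Unison can be pushed one way: a sketch under the stated requirements (mount points are illustrative), where -force makes the NAS side win every difference and -nodeletion keeps files on the USB disk even after they disappear from the NAS:

      unison /mnt/nas /mnt/usb -force /mnt/nas -nodeletion /mnt/usb -batch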

    Read the article

  • ASP.NET with RegularExpression problem

    - by Eyla
    Greetings, I'm trying to validate textbox input for a phone number. I have an ASP.NET textbox and checkbox. The default is to validate a US phone number, and when I check the checkbox I should change the RegularExpression and error message to validate an international phone number using my own RegularExpression. I have no problem validating the international phone number, but when validating the US phone number I always get an error message saying it is an invalid phone number. I tried different RegularExpressions but they did not work. Please look at my code and advise me. Regards, ..................... ASP.net Code ..................... <%@ Page Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="UpdateContact.aspx.cs" Inherits="IMAM_APPLICATION.UpdateContact" Title="Untitled Page" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <script src="js/jquery-1.4.1-vsdoc.js" type="text/javascript"></script> <script src="js/jquery.validate.js" type="text/javascript"></script> <script src="js/js.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function() { ValidPhoneHome("#<%= chkIntphoneHome%>"); $("#aspnetForm").validate({ // debug: true, rules: { "<%=txtHomePhone.UniqueID %>": { phonehome: true } }, errorElement: "mydiv", wrapper: "mydiv", // a wrapper around the error message errorPlacement: function(error, element) { offset = element.offset(); error.insertBefore(element) error.addClass('message'); // add a class to the wrapper error.css('position', 'absolute'); error.css('left', offset.left + element.outerWidth()); error.css('top', offset.top - (element.height() / 2)); } }); }) </script> <div id="mydiv"> <asp:CheckBox ID="chkIntphoneHome" runat="server" Text="Internation Code" Style="position: absolute; top: 620px; left: 700px;" onclick=" ValidPhoneHome(this)" /> <asp:TextBox ID="txtHomePhone" runat="server" Style="top: 650px; left: 700px; position: absolute; height: 22px; width: 128px" ></asp:TextBox> </div> </asp:Content> ............................. js.js File ................... var RegularExpression; var USAPhone = /(^[a-z]([a-z_\.]*)@([a-z_\.]*)([.][a-z]{3})$)|(^[a-z]([a-z_\.]*)@([a-z_\.]*)(\.[a-z]{3})(\.[a-z]{2})*$)/i; var InterPhone = /^\d{9,12}$/; var errmsg; function ValidPhoneHome(sender) { if (sender.checked == true) { RegularExpression = InterPhone; errmsg = "Enter 9 to 12 numbers as international number"; } else { RegularExpression = USAPhone; errmsg = "Enter a valid number"; } jQuery.validator.addMethod("phonehome", function(value, element) { return this.optional(element) || RegularExpression.test(value); }, errmsg); }
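
    A hedged observation on the snippet above: the USAPhone pattern is anchored on an "@" and a TLD, i.e. it looks like an e-mail pattern rather than a phone pattern, which would explain why every US phone number is rejected. A sketch of a US-phone expression that could take its place (format assumptions: optional area-code parentheses, optional dash/dot/space separators):

      var USAPhone = /^\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}$/;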

    Read the article

  • Using Excel data in Microsoft Publisher

    - by TK
    I have never worked in Microsoft Publisher. To build the presentation we have to re-enter the same information from a Microsoft Excel master. For instance, my Excel file has these columns: Item Title, Item Description, Item Dimensions, Notes, Created Date. From there, I have to re-type the information underneath a picture of the item in PowerPoint (or Publisher) in order to present it to the client. So I'm retyping the item name, description, dimensions, etc., and I'm also reformatting slides each time I do this. I know there's a way to streamline this process - to build a PowerPoint and/or something in Publisher that will bring in the data needed based on a merge (or maybe a macro) - but I haven't been able to figure out how. Any suggestions?

    Read the article

  • Can I force NFS automounts to use NFSv3?

    - by Steve
    I have a Linux server that is exporting NFSv4 as well as NFSv3. I have a Fedora 14 client that is defaulting to NFSv4 when automounting NFS shares off of the Linux server, and it seems to be causing some problems. All my other Linux clients on the network are mounting via NFSv3 without issue, so is there a way I can tell automount to mount the share via v3? I am pulling my automount maps via LDAP, with an entry in my /etc/auto.master file like so: +auto_master, so I assume it's a bit different than listing options in a regular automount map? (i.e. /home --nfsvers=3 fileserver:/DATA)
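
    A couple of hedged options, assuming Fedora's autofs: mount options can be attached to the map entries themselves, or NFSv3 can be made the client-wide default for automounts via /etc/sysconfig/autofs (values illustrative):

      # in the automount map (works the same when the map is served from LDAP):
      data    -fstype=nfs,nfsvers=3    fileserver:/DATA

      # or in /etc/sysconfig/autofs:
      MOUNT_NFS_DEFAULT_PROTOCOL=3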

    Read the article

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some ids, e.g.: 173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342} This is almost everything I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log, but it is really messy. It does contain some IP addresses, but I can't tell which one is bound to which item in the first log file. Is there a way to correlate the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?
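
    A hedged starting point that assumes nothing about the log format beyond the session ID appearing in both files: search TeamViewer7_Logfile.log for the ID shown in Connections_incoming.txt, then look for IP-like strings near those lines:

      grep -n '173274362' TeamViewer7_Logfile.log
      grep -nE '([0-9]{1,3}\.){3}[0-9]{1,3}' TeamViewer7_Logfile.log | less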

    Read the article

  • ${extension} empty after catch-all alias in Postfix

    - by Paul Wagener
    I want a setup where an e-mailaddress like [email protected] redirects mail to the folder foo. I've already got dovecot configured and tested. It is called by postfix with this line in master.cf: dovecot unix - n n - - pipe flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${user}@${nexthop} -n -m ${extension} I expect ${extension} to expand to 'foo' but it is always empty. I've added recipient_delimiter = + to my main.cf. How can I get it to work? Update: I've got a catch-all alias that redirects @domain.com to [email protected]. It seems that the extension is empty because of this. So the question becomes: Can I have a catch-all so that [email protected] redirects to [email protected] without explicitly defining either the random or the ext part?
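
    One hedged way to keep the extension through a catch-all is to replace the plain @domain entry with a regexp-based virtual alias map that carries the extension along explicitly (the map file, domain and target mailbox below are illustrative, since the real addresses are not shown above):

      # main.cf
      virtual_alias_maps = regexp:/etc/postfix/virtual_regexp

      # /etc/postfix/virtual_regexp
      /^(.+)\+(.+)@example\.com$/   catchall+${2}@example.com
      /^(.+)@example\.com$/         catchall@example.com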

    Read the article

  • How to configure a catch-all in an Exchange 2010 hub-transport environment?

    - by Itay Levin
    I'm getting the delivery-failed reply from the postmaster. I don't want it, because then people can find out all my real users on the Exchange server. Also, I have a lot of users (10K) in my application and I don't want to create a mailbox for each user. Is it possible to get this done with an Exchange 2010 SP1 hub-transport configuration, or must I use edge-transport as indicated in http://technet.microsoft.com/en-us/library/bb691132(EXCHG.80).aspx ?
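
    On the "don't reveal real users" half of this, one hedged option in the Exchange Management Shell is to stop validating recipients, so unknown addresses are not rejected at SMTP time with a user-unknown error. Whether this helps depends on where the rejections actually come from, it assumes the anti-spam agents are installed on the Hub Transport server, and it is separate from actually routing the catch-all mail anywhere:

      Set-RecipientFilterConfig -RecipientValidationEnabled $false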

    Read the article

< Previous Page | 116 117 118 119 120 121 122 123 124 125 126 127  | Next Page >