Search Results

Search found 4749 results on 190 pages for 'john stewart'.


  • User Productivity Kit - Powerful Packages (Part 2)

    - by [email protected]
    In my first post on packages I described what a package is and how it can be used. I also started explaining some of the considerations that should be taken into account when determining how to arrange your packages. The first is when the files are interrelated and depend on one another, such as an HTML file and its graphics. A second consideration is how the files are used in your outlines. Let's say you're using a dozen Word doc files. You could place them all in a single package or put each Word doc file in a separate package, but what's the right thing to do? There are several factors that will influence your decision.

    To understand the first, let me explain a function of UPK publishing. Take an outline in UPK that has an attachment (concept, frame link, or hyperlink) that points to a file in a package. When you publish this outline, the publishing engine will determine that there is a link to a file in the package and copy the contents of the package to the publishing destination directory. This is done to ensure that any interrelated files are kept together. For the situation where you have an HTML file with links to a number of graphics files, this is a good thing. If, however, the package has a dozen unrelated Word doc files and you link to only one of them, all dozen Word documents will be copied to the publishing destination directory. Whether or not this is a good thing depends on two things. First, are all of the files in the package used in the outline that you're publishing? Take an outline that includes links to all of the Word documents in that dozen-document package I described earlier. For this situation, you may choose to keep all the files in a single package for convenience.

    A second consideration is how your organization leverages reuse in UPK. In this context, I'm referring to the link style of reuse, such as when you link to the same topic from multiple UPK outlines and changes to the topic appear in both places. Take an example where you have the earlier mentioned dozen Word document package and an outline with a dozen topics in it. Each topic has an attachment pointing to one of the Word documents in the package (frame link, concept, etc.). If you're only publishing this outline, the single package probably works fine, but what if you're reusing one of these topics in another outline? As I explained earlier, linking to one file in the package will result in all files in the package being copied to your published output. In this example, linking to one topic in the first outline will result in all dozen Word documents being copied to the published output. This may result in files in the output that you don't want there for business or size reasons. This is a situation in which you should consider placing each of the Word documents in its own separate package. With each document in its own package, that link to a single document will result in only that single package and single Word document being copied to the published output.

    In my last post I described that packages are documents in the UPK library. When using the multi-user version of the UPK Developer, you can leverage standard library capabilities for managing the files in these packages during the development process - capabilities such as check in / check out, history, etc. When structuring your packages, take into consideration how the authors are going to be adding, modifying and deleting files from the packages. A single package is a single document in the UPK library. Like any other document in the library, only a single user at a time can check out and edit the package. If you have a large number of files in a single package and these must be modified by many users, you need to consider whether this will cause problems as multiple users compete to update the same package. If the files don't depend on each other, consider placing the files in separate packages to reduce contention.

    I hope you've enjoyed these two posts on how you can leverage the power of packages in your content. In summary, consider the following when structuring your packages:
    - Is the asset a single, standalone file or a set of files that depend on each other?
    - Will all the files always be used together in a single outline, or may only some of the files be needed based on how the content is reused across multiple outlines?
    - Will multiple developers need to update the files in a single package, or should you break it into multiple packages to reduce contention when checking out the document?

    We'd like to hear from you on how you're using packages in your content. Please add your comments below! Thank you, and I hope these two posts have given you additional insights into how to use packages in your content and structure them for efficient use.

    John Zaums, Senior Director, Product Development, Oracle User Productivity Kit

    Read the article

  • running multi threads in Java

    - by owca
    My task is to simulate the activity of a couple of people. Each of them has a few activities to perform, each taking some random time: fast (0-5 s), medium (5-10 s), slow (10-20 s) and very slow (20-30 s). Each person performs their tasks independently, at the same time as the others. At the beginning of each new task I should print its random time, start the task, and then after that time passes show the next task's time and start it. I've written a run() function that counts time, but now it looks like the threads are run one after another rather than concurrently - or maybe they're just printed in that order.

        public class People{
            public static void main(String[] args){
                Task tasksA[]={new Task("washing","fast"), new Task("reading","slow"), new Task("shopping","medium")};
                Task tasksM[]={new Task("sleeping zzzzzzzzzz","very slow"), new Task("learning","slow"), new Task(" :** ","slow"), new Task("passing an exam","slow") };
                Task tasksJ[]={new Task("listening music","medium"), new Task("doing nothing","slow"), new Task("walking","medium") };
                BusyPerson friends[]={ new BusyPerson("Alice",tasksA), new BusyPerson("Mark",tasksM), new BusyPerson("John",tasksJ)};
                System.out.println("STARTING.....................");
                for(BusyPerson f: friends)
                    (new Thread(f)).start();
                System.out.println("DONE.........................");
            }
        }

        class Task {
            private String task;
            private int time;
            private Task[] tasks;

            public Task(String t, String s){
                task = t;
                Speed speed = new Speed();
                time = speed.getSpeed(s);
            }
            public Task(Task[] tab){
                Task[] table = new Task[tab.length];
                for(int i=0; i < tab.length; i++){
                    table[i] = tab[i];
                }
                this.tasks = table;
            }
        }

        class Speed {
            private static String[] hows = {"fast","medium","slow","very slow"};
            private static int[] maxs = {5000, 10000, 20000, 30000};

            public Speed(){
            }

            public static int getSpeed(String speedString){
                String s = speedString;
                int up_limit=0;
                int down_limit=0;
                int time=0;
                //get limits of time
                for(int i=0; i<hows.length; i++){
                    if(s.equals(hows[i])){
                        up_limit = maxs[i];
                        if(i>0){
                            down_limit = maxs[i-1];
                        }
                        else{
                            down_limit = 0;
                        }
                    }
                }
                //get random time within the limits
                Random rand = new Random();
                time = rand.nextInt(up_limit) + down_limit;
                return time;
            }
        }

        class BusyPerson implements Runnable {
            private String name;
            private Task[] person_tasks;
            private BusyPerson[] persons;

            public BusyPerson(String s, Task[] t){
                name = s;
                person_tasks = t;
            }
            public BusyPerson(BusyPerson[] tab){
                BusyPerson[] table = new BusyPerson[tab.length];
                for(int i=0; i < tab.length; i++){
                    table[i] = tab[i];
                }
                this.persons = table;
            }
            public void run() {
                int time = 0;
                double t1=0;
                for(Task t: person_tasks){
                    t1 = (double)t.time/1000;
                    System.out.println(name+" is... "+t.task+" "+t.speed+ " ("+t1+" sec)");
                    while (time == t.time) {
                        try {
                            Thread.sleep(10);
                        } catch(InterruptedException exc) {
                            System.out.println("End of thread.");
                            return;
                        }
                        time = time + 100;
                    }
                }
            }
        }

    And my output:

        STARTING.....................
        DONE.........................
        Mark is... sleeping zzzzzzzzzz very slow (36.715 sec)
        Mark is... learning slow (10.117 sec)
        Mark is... :** slow (29.543 sec)
        Mark is... passing an exam slow (23.429 sec)
        Alice is... washing fast (1.209 sec)
        Alice is... reading slow (23.21 sec)
        Alice is... shopping medium (11.237 sec)
        John is... listening music medium (8.263 sec)
        John is... doing nothing slow (13.576 sec)
        John is... walking medium (11.322 sec)

    Whilst it should be like this:

        STARTING.....................
        DONE.........................
        John is... listening music medium (7.05 sec)
        Alice is... washing fast (3.268 sec)
        Mark is... sleeping zzzzzzzzzz very slow (23.71 sec)
        Alice is... reading slow (15.516 sec)
        John is... doing nothing slow (13.692 sec)
        Alice is... shopping medium (8.371 sec)
        Mark is... learning slow (13.904 sec)
        John is... walking medium (5.172 sec)
        Mark is... :** slow (12.322 sec)
        Mark is... passing an exam very slow (27.1 sec)
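    One plausible reading of the symptom: the loop condition while (time == t.time) is false on the first pass (time starts at 0 while t.time is almost always a non-zero random value), so the thread never sleeps and prints all of its tasks in one burst - which is why each person's lines come out grouped together. A minimal sketch of a run() that actually waits out each task's duration follows; it is only an illustration, and it assumes BusyPerson is allowed to read the Task fields (for example by making them package-private or adding getters), which the code above does not currently do. The speed label is omitted since Task has no such field.

        // Sketch only: assumes the Task fields are readable from BusyPerson.
        public void run() {
            for (Task t : person_tasks) {
                double seconds = (double) t.time / 1000;
                System.out.println(name + " is... " + t.task + " (" + seconds + " sec)");
                long elapsed = 0;               // reset the counter for every task
                while (elapsed < t.time) {      // keep waiting until this task's time is up
                    try {
                        Thread.sleep(100);      // sleep in 100 ms slices so the thread stays interruptible
                    } catch (InterruptedException exc) {
                        System.out.println("End of thread.");
                        return;
                    }
                    elapsed += 100;
                }
            }
        }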

    Read the article

  • How do I put array into columns

    - by mathew
        $db = mysql_connect("", "", "") or die("Could not connect.");
        mysql_select_db("", $db) or die(mysql_error());
        $sql = "SELECT * FROM table where 1";
        $pager = new pager($sql, 'page', 6);
        while($row = mysql_fetch_array($pager->result)) {
            echo $row['persons']."<br>";
        }
        mysql_close($db);

    The above code outputs:

        Mathew
        Thomas
        John
        Stewart
        Watson
        Kelvin

    What I need is for it to split into multiple columns, say:

        Mathew   Stewart
        Thomas   Watson
        John     Kelvin

    How do I do this?
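    One way the display loop could be restructured to get two columns (a sketch only, not the thread's accepted answer; it reuses the $pager object from the question and fills the left column top to bottom, then the right):

        $names = array();
        while ($row = mysql_fetch_array($pager->result)) {
            $names[] = $row['persons'];            // collect the names first
        }
        $rows = (int) ceil(count($names) / 2);     // rows needed for two columns
        echo "<table>";
        for ($i = 0; $i < $rows; $i++) {
            $second = isset($names[$i + $rows]) ? $names[$i + $rows] : "";
            echo "<tr><td>" . $names[$i] . "</td><td>" . $second . "</td></tr>";
        }
        echo "</table>";
        mysql_close($db);

    With the six names above this prints Mathew/Stewart, Thomas/Watson and John/Kelvin on three rows, matching the layout asked for.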

    Read the article

  • script not found or unable to stat: /usr/lib/cgi-bin/php-cgi

    - by John
    I have just seen a new series of errors in /var/log/apache2/error.log:

        [Thu Oct 31 06:59:04 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php
        [Thu Oct 31 06:59:08 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php5
        [Thu Oct 31 06:59:09 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php-cgi
        [Thu Oct 31 06:59:14 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php.cgi
        [Thu Oct 31 06:59:14 2013] [error] [client 203.197.197.18] script not found or unable to stat: /usr/lib/cgi-bin/php4

    This server is running Ubuntu 12.04 LTS. I have never seen this sort of attack before - should I be concerned, or should I be securing my system against it in some way? Thanks, John

    Read the article

  • Untrusted file not showing unblock button windows 7

    - by Stewart Griffin
    I downloaded a DLL but cannot use it as it is considered untrusted. I opened it using: Notepad.exe filepath\filename:zone.identifier and it informed me that the file was in zone 3. Despite this I do not get an unblock button in the properties page for the file. Not being able to unblock it with this button, I instead changed the value in Notepad and saved my changes. When I reopen the zone.identifier info it is as I left it. I have set it to both 2 (trusted) and 0 (no information), but I am still unable to use the file. Anyone have any ideas? If I cannot unblock the file I will investigate turning this blocking off, but as a first step I'd like to try to unblock just this one file. Note: I am using Windows 7 Ultimate edition. It is when using MSTest from within Visual Studio 2008 that I hit the problem.

    Read the article

  • SMTP Relay through exchange

    - by John
    Hi guys, We have a bit of a problem in that we want our printers to email our contractor whenever they develop a fault. The problem is that on our corporate network we have no access through the firewall to the internet, which prevents us from using the external SMTP server. So I suppose the question is: can we use our Exchange server to do this? I.e. could I run an SMTP service that would forward to the Exchange server, which would then send the mail to the contractor? Any ideas welcome! Thanks John

    Read the article

  • Grep a strange acirc character

    - by John Hunt
    I have this character appearing in places in some files I have: Â (if you can't see it or it looks like a question mark, it's the Acirc character - a capital A with a circumflex over it). I simply want to replace this character with a space, but when I do this: grep --color -ri Â myproject.php PuTTY gets very confused, as does grep. As I understand it there's probably a way to use an escaped hex code with grep - does anyone know how?

    EDIT: The character is showing up on my web page as a weird <?>. The HTTP headers for the page specify UTF-8, as does the meta character set, and I still see the strange character. In PuTTY it appears as a space (PuTTY is also set to UTF-8). When I copy it from vim and paste it into grep, it simply doesn't find it. Cheers, John
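    For what it's worth, a sketch of the escaped-byte approach (assuming GNU grep and sed, and that the stray character really is the two-byte UTF-8 encoding of Â, i.e. 0xC3 0x82 - if it is actually some other byte sequence, substitute those byte values):

        # find lines containing the raw bytes; PCRE \x escapes need grep -P
        grep -P '\xC3\x82' myproject.php

        # replace the byte pair with a space, editing the file in place
        sed -i 's/\xC3\x82/ /g' myproject.php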

    Read the article

  • What's up with stat on MacOSX/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, Give the mount point of a path, one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on MacOSX 10.4. For my system, df and mount give:

        cas cas$ df
        Filesystem              512-blocks     Used     Avail Capacity  Mounted on
        /dev/disk0s3              58342896 49924456   7906440    86%    /
        devfs                          194      194         0   100%    /dev
        fdesc                            2        2         0   100%    /dev
        <volfs>                       1024     1024         0   100%    /.vol
        automount -nsl [166]             0        0         0   100%    /Network
        automount -fstab [170]           0        0         0   100%    /automount/Servers
        automount -static [170]          0        0         0   100%    /automount/static
        /dev/disk2s1             163577856 23225520 140352336    14%    /Volumes/Snapshot
        /dev/disk2s2             409404102  5745938 383187960     1%    /Volumes/Sparse

        cas cas$ mount
        /dev/disk0s3 on / (local, journaled)
        devfs on /dev (local)
        fdesc on /dev (union)
        <volfs> on /.vol
        automount -nsl [166] on /Network (automounted)
        automount -fstab [170] on /automount/Servers (automounted)
        automount -static [170] on /automount/static (automounted)
        /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
        /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

        cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
        / disk0s3r
        /dev ???r
        /dev ???r
        /.vol ???r
        /Network ???r
        /automount/Servers ???r
        /automount/static ???r
        /Volumes/Snapshot disk2s1r
        /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page: The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness?

    Postscript: Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...

    Read the article

  • counting unique values based on multiple columns

    - by gooogalizer
    I am working in Google Spreadsheets and I am trying to do some counting that takes into consideration cell values across multiple cells in each row. Here's my table:

        AUTHOR   ARTICLE       VERSION   PRE-SELECTED
        ANDREW   GOLF STREAM   1         X
        ANDREW   GOLF STREAM   2         X
        ANDREW   HURRICANES    1
        JOHN     CAPE COD      1         X
        JOHN     GOLF STREAM   1

    (Google doc here) Each person can submit multiple articles as well as multiple versions of the same article. Sometimes different people submit different articles that happen to be identically named (Andrew and John both submitted different articles called "Golf Stream"). Multiple versions written by the same person do not count as unique, but articles with the same title written by different people do count as unique. So, I am looking to find a formula that:
    - Counts the number of unique articles that have been submitted [4] (without having to manually create extra columns for doing CONCATs, if possible)

    It would also be great to find formulas that:
    - Count the number of unique articles that have been pre-selected (marked "X" in the "PRE-SELECTED" column) [2]
    - Count the number of unique articles that have only 1 version [4]
    - Count the number of unique articles that have more than 1 of their versions pre-selected [1]

    Thank you so much! Nikita

    Read the article

  • What comment-spam filtering service works?

    - by Charles Stewart
    From an answer I gave to another question: There are comment filtering services out there that can analyse comments in a manner similar to mail spam filters (all links to the client API page, organised from simplest API to most complex): Steve Kemp (again) has an xml-rpc-based comment filter: it's how Debian filters comments, and the code is free software, meaning you can run your own comment filtering server if you like; There's Akismet, which is from the WordPress universe; There's Mollom, which has an impressive list of users. It's closed source; it might say "not sure" about comments, intended to suggest offering a captcha to check the user. For myself, I'm happy with offline by-hand filtering, but I suggested Kemp's service to someone who had an underwhelming experience with Mollom, and I'd like to pass on more reports from anyone who has tried these or other services.

    Read the article

  • Most common account names used in ssh brute force attacks

    - by Charles Stewart
    Does anyone maintain lists of the most frequently guessed account names that are used by attackers brute-forcing ssh? For your amusement, from my main server's logs over the last month (43 313 failed ssh attempts), with root not getting as far as sshd:

        cas@txtproof:~$ grep -e sshd /var/log/auth* | awk ' { print $8 }' | sort | uniq -c | sort | tail -n 13
             32 administrator
             32 stephen
             34 administration
             34 sales
             34 user
             35 matt
             35 postgres
             38 mysql
             42 oracle
             44 guest
             86 test
             90 admin
          16513 checking

    Read the article

  • Looking for the best ec2 setup for 3 sites totaling in 1.5 mil in traffic monthly

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large Ubuntu EC2 servers and 2 large RDS servers for our 3 websites, which get a total of about 1.5 million hits a month and rising, with the majority of the traffic (1 million) going to one forum site in the group and the rest to an ecommerce site and a small WordPress site. So here is my question/thought:
    - Would it be better for us to combine the two large EC2 servers into just one, and likewise the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS?
    - Or should we set up maybe 2-3 smaller EC2 servers, load balanced, with a single RDS?
    - Or something completely different?

    One concern is that if one site crashes it takes the others with it. That happened in the past, but I am pretty sure it was because of the forum software and not the server setup. -john

    Read the article

  • Apache httpd.conf handle multiple domains to run the same application

    - by John Stewart
    So what we are looking for is the ability to do the following: We have an application that can load certain settings based on the domain that it is being accessed from. So if you come from xyz.com we show one logo, and if you come from abc.com we show a different logo. The code is the same, running from the same server; it just detects the domain at run time. Now we want to get a dedicated server (any suggestions?) that will enable us to point all the domains that we want to this server (we change the DNS for the domains to that of our server), and then when a user goes to any of those domains they run the same application. Now as far as I can understand, we will need to create a "VirtualHost" in Apache to handle this. Can we create a wildcard VirtualHost that catches all the domains? I am not an expert with Apache at all, so please forgive me if this turns out to be a silly question. Any detailed help would be great. Thanks
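    A sketch of the catch-all approach (placeholder names and paths, not a verified production config): with name-based virtual hosting the first VirtualHost listed acts as the default for any Host header that matches nothing else, and ServerAlias accepts wildcards, so a single vhost along these lines would send every domain pointed at the server to the same application:

        NameVirtualHost *:80

        <VirtualHost *:80>
            # First-listed vhost = default, and ServerAlias * matches any hostname,
            # so xyz.com, abc.com, etc. all land on the same application code.
            ServerName app.example.com
            ServerAlias *
            DocumentRoot /var/www/myapp
        </VirtualHost>

    The application itself can then branch on the Host header it receives, as described above.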

    Read the article

  • Mail.app send mail hook

    - by Charles Stewart
    Is there any way to run a script whenever the user tries to send mail? I'm particularly interested in ensuring that outbound mail doesn't have a blank subject line. Solutions that involve plug-ins are welcome!

    Read the article

  • Blue Screen Of Death after Graphics Card Update

    - by John Smith
    I recently installed the game Watch Dogs. After installing it I upgraded the drivers on my graphics card (an AMD Radeon R9 200 series) via the Catalyst software, and since then Windows has been crashing on an ad-hoc basis, even when sitting idle. I have received two different sets of errors: 0x00000001 and VIDEO_SCHEDULER_INTERNAL_ERROR. I am running Windows 8.1 with the 14.4 driver from AMD. My PC shouldn't really be having this problem as it is pretty high spec due to my work. Any help would be greatly appreciated as it is driving me up the walls a bit.

    [UPDATE 08-06-2014] I have been looking around online and have crashed the PC a few times now trying to recreate the bug. I have looked in the event history viewer and got the following errors:

        amdacpusrsvc acpusrsvc: IOCTL_ACPKSD_KSD_TO_USR_SVC_SET_FB_APERTURES: FAILED
        acpusrsvc: GfxMemServiceInitialize: FAILED
        amdacpusrsvc acpusrsvc: IOCTL_ACPKSD_KSD_TO_USR_SVC_SET_FB_APERTURES: FAILED
        amdacpusrsvc acpusrsvc: GfxMemServiceInitialize: FAILED

    I took some advice from the AMD forums to re-install the old drivers to see if they would work. I uninstalled the old ones first and installed directly from the CD. I got the warnings below, but the driver installed "successfully":

        amdacpusrsvc acpusrsvc: ConfigureFrameBufferMemory: FAILED

    Hopefully this will spark something off for someone. Thanks John

    Read the article

  • Guestfish not found on Debian 7 when execute virt-copy-in [on hold]

    - by John Wang
    When I execute the virt-copy-in command on a KVM host (Debian 7.1), I get an error saying "guestfish not found". However, according to dpkg, the libguestfs packages do appear to be installed:

        john@sver:~$ dpkg-query -l | grep guestfs*
        ii  libguestfs-perl   ...
        ii  libguestfs-tools  ...
        ii  libguestfs0       ...

    What's the problem? Is the libguestfs-tools package not what provides guestfish in Debian? Or is it just a broken dependency in libguestfs-tools in Debian 7.1 (my KVM host)?

    Read the article

  • Can I see the SMTP session log when Mail.app connects to an SMTP server?

    - by Charles Stewart
    Problem: I've set up a mail server using SASL authentication, and have given Mail.app (on Mac Os 10.4) the login information it needs to connect. I wrote a test message for it to deliver to my server: the Activity window shows that it tries to deliver the message, but then it simply stops, with no indication of error, except that the test message is left in the Outbox. How can I find out what went wrong? Is there some log file I don't know about?

    Read the article

  • When DNS doesn't cache

    - by John Francis
    We've had some odd DNS problems over the past couple of days that I don't fully understand. Some of our DNS names stopped resolving for some of our customers due to some 'unknown' server reconfiguration at our DNS provider. The problem seemed to be intermittent, i.e. it stopped working and started working again within a few minutes, over a couple of days. I'm no expert on DNS, but I'd have expected DNS caches to prevent this sort of thing from happening - when we need to change an IP address for a DNS record it can take 24 hours to propagate, so how can our DNS provider break name resolution for our customers so easily, and so intermittently? Shouldn't the DNS caches kick in here? We had a similar problem about a month ago when one of their nameservers 'decided to reload the DNS database from scratch' - this broke our name resolution too. Again, why didn't the caches satisfy the name resolution requests? Any guesses would be appreciated. John
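    For reference, how long a resolver may keep serving a cached answer is governed by each record's TTL, visible as the second column of a dig answer; a quick check looks roughly like this (placeholder name and documentation IP, not values from the incident above):

        $ dig +noall +answer www.example.com A
        www.example.com.    3600    IN    A    192.0.2.10

    A very low TTL would limit how much caching can help, and if the provider's servers were intermittently returning errors or NXDOMAIN rather than stale data, resolvers would have had no cached positive answer to fall back on (negative answers are cached separately and usually much more briefly), which is one way outages can show through despite caching.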

    Read the article

  • Adding a single 300Gb SCSI drive to poweredge 2850

    - by John Steele
    I have a 2850 set up with three 146 GB drives and two partitions: a 12 GB system partition with Server 2003 SP2 and a 261 GB data partition. I am strapped for disk space on the data partition and keep having to push data around. I wanted to add a single 300 GB drive for less critical data - is this possible? Or is it best to add two 300 GB drives in another RAID 1 configuration? This is my church network, and while it is mission critical it is not enterprise, so I can take it down for a few hours. Any pointers to documentation or direct help would be greatly appreciated. John

    Read the article

  • How important is sender validation, and what matters?

    - by Charles Stewart
    When I started learning how to configure email, SPF existed but there were doubts about whether it was a good thing, and the value of offering SPF records in DNS. Now it seems that it is widely accepted that some form of well-known sender validation is good practice. Is this really true? Am I being a bad postmaster by not supporting SPF/DKIM/whatever?

    Read the article

  • Email censorship system

    - by user1116589
    I would like to ask you about any censorship / moderation system. The basic workflow of events:
    1. A customer sends an email to [email protected] from [email protected]
    2. The ACME administrator receives a notification and can moderate the email
    3. After moderation, the administrator confirms the email and it is sent to [email protected]
    4. John answers to [email protected]
    5. Before the email is sent, it is moderated again by the ACME administrator

    What is important is that this functionality is easy to do with some CMS/CMF systems. The problem is that we do not want to use an extra domain or force the customer to log in to an extra system. The customer should only use his own email box or desktop email application. Thank you, Tomek

    Read the article

  • How to stop my VPS from picking up ARP reqs it is not supposed to?

    - by Charles Stewart
    Machine: Xen-3.0 image running stable Debian Linux 2.6.18, pretty vanilla. My VPS provider asks me to deal with some trouble my image is causing, namely handling IP addresses it is not supposed to:

        The problem is that your server seems to be configured to use IPs that have not been appointed to you. Your server responds to ARP requests for the IPs 81.171.111.219 and 81.171.111.218. But you are not allowed to use those.

    Not explicitly, as far as I can tell! At least, nothing under /etc or /var/tmp mentions these IP addresses. But arp -v says something I can't make sense of:

        Address        HWtype  HWaddress          Flags Mask  Iface
        81.171.111.1   ether   00:0C:DB:E3:80:00  C           eth0
        Entries: 1  Skipped: 0  Found: 1

    What is it listening to? The possibilities seem to be:
    - It's not my fault: my VPS providers have overlooked something. What might that be?
    - 81.171.111.1 means I'm happy listening in on ARP requests that I shouldn't be: how do I change this? In any case, what does this mean?
    - I'm looking in completely the wrong place for information on what my image is doing. Where should I be looking?
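    A few quick checks from inside the guest that narrow down where answers like this could be coming from (a diagnostic sketch, assuming standard iproute2/net-tools on Debian; it changes nothing):

        # every address actually bound to the interfaces, including secondary/alias IPs
        ip addr show

        # is the kernel configured to answer ARP liberally, or to proxy-ARP?
        sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.proxy_arp

        # current ARP table, including any published (proxy) entries
        ip neigh show
        arp -an

    If none of these mention 81.171.111.218 or .219, the replies are more likely coming from the Xen host or another guest sharing the bridge than from this image.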

    Read the article
