Search Results

Search found 1551 results on 63 pages for 'ben mccann'.

Page 20/63 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • How do I format a text file for IIS Mailroot Pickup so that it sends an e-mail with attachments?

    - by Ben McCormack
    How do I need to format a text file so that it can be read by an SMTP service to send an e-mail that has an attachment? We have a server where we are using IIS 6 SMTP to send mail from a Pickup folder. The goal is to drop a properly formatted text file into Mailroot\Pickup and then the file will be automatically processed and sent to the correct SMTP recipient. For simple files, this works correctly. Here's an example of a simple file that works (domain names changed): From:[email protected] To:[email protected] Subject:Hello World! Test Body Of The E-mail When I drop a text file containing the above contents into the Mailroot\Pickup folder, it sends correctly. However, I haven't been able to figure out how to get an attachment to work. I found some material that explained how to encode an SMTP attachment and another tool for simple base64 encoding conversion. Using the information on those pages, I came up with the following text: From:[email protected] To:[email protected] Subject:Hello World! MIME-Version: 1.0 Content-Type: text/plain; boundary="Attached" Content-Disposition: inline; --Attached Content-Transfer-Encoding: base64 Content-Type: text/plain; name="attachment.txt" Content-Disposition: attachment; filenamename="attachment.txt" VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA== --Attached-- However, when I place the above text in a file and drop it into Mailroot\Pickup, it doesn't send an attachment correctly. Instead, an e-mail shows up with the following in the body of the e-mail: MIME-Version: 1.0 Content-Type: text/plain; boundary="Attached" Content-Disposition: inline; --Attached Content-Transfer-Encoding: base64 Content-Type: text/plain; name="attachment.txt" Content-Disposition: attachment; filenamename="attachment.txt" VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA== --Attached-- I can't figure out what I need to do to format the text file so that the SMTP service correctly sends attachments.
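
    For reference, a pickup file is simply a complete MIME message: headers, a blank line, then a multipart/mixed body whose parts are separated by the boundary. Below is a minimal Python sketch (not the asker's exact fix) that builds such a message with the standard email library and drops it into the pickup folder; the [email protected] addresses are placeholders as in the excerpt, and the C:\Inetpub\mailroot\Pickup path is an assumed default.

    ```python
    # Sketch: build a MIME message with a base64 attachment and drop it into
    # the IIS SMTP pickup folder. Addresses and the pickup path are placeholders.
    import os
    from email import encoders
    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    PICKUP_DIR = r"C:\Inetpub\mailroot\Pickup"  # assumed default pickup path

    msg = MIMEMultipart("mixed")
    msg["From"] = "[email protected]"
    msg["To"] = "[email protected]"
    msg["Subject"] = "Hello World!"

    # Plain-text body part.
    msg.attach(MIMEText("Test Body Of The E-mail", "plain"))

    # Attachment part: the payload is base64-encoded and the proper headers added.
    part = MIMEBase("text", "plain")
    part.set_payload(b"This is a test of something to encode.\r\nNew line\r\n")
    encoders.encode_base64(part)
    part.add_header("Content-Disposition", "attachment", filename="attachment.txt")
    msg.attach(part)

    # The pickup service expects one complete message per file.
    with open(os.path.join(PICKUP_DIR, "test-message.eml"), "w") as f:
        f.write(msg.as_string())
    ```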

    Read the article

  • Windows 7 Home Premium - upgrade to Professional

    - by ben
    Hi there, I bought a new notebook that came with Win7 Home Premium. I now want to sell my Home Premium license and buy a Professional one. Is it possible to just change the key in my current, already-configured Win7 Home Premium installation, or do I need to reinstall the system? Thanks for your help!

    Read the article

  • Use a SATA to USB cable with a MacBook Pro optical drive?

    - by Ben Alpert
    Is there something fundamentally different about hard drives and optical drives regarding how they communicate with the computer? I ordered a SATA to USB adapter from Monoprice and I want to know whether it will work with a SATA optical drive removed from a MacBook Pro. Can anyone shed some light on the subject?

    Read the article

  • CURL on PHP 5.2 works in CLI but not in IIS

    - by Ben Reisner
    I have a Windows 2003 Server running IIS with php 5.2.8. I'm trying to use CURL, and it works in CLI mode (if I execute php.exe) but it does not seem to be registered when running under IIS. The output of PHP info in both CLI and IIS shows the same 'Loaded Configuration File', but under IIS it does not give the CURL info box. c:\program files\php\php.exe -i shows ... Loaded Configuration File => C:\Program Files\PHP\php.ini ... curl cURL support => enabled cURL Information => libcurl/7.16.0 OpenSSL/0.9.8i zlib/1.2.3 phpinfo() ... Loaded Configuration File => C:\Program Files\PHP\php.ini ... NOTE: This server also runs php 5.3 in c:\program files\php-5.3.0 and CURL does properly work with that installation.

    Read the article

  • SQL query duplicating results [on hold]

    - by Ben
    I have written a query that results in data being retrieved for the top 5 customers in my table per account manager. Here is the query: SELECT account_manager_id, mgap_ska_id, total FROM (SELECT account_manager_id, mgap_ska_id, mgap_growth + mgap_recovery AS total, @grp_rank := IF(@current_accmanid = account_manager_id, @grp_rank + 1, 1) AS grp_rank, @current_accmanid := account_manager_id FROM mgap_orders ORDER BY total DESC ) ranked WHERE grp_rank <= 5 and here is the result of the query: account_manager_id mgap_ska_id total 159840 5062352 61569.21 159840 5062352 61569.21 159840 5062352 61569.21 159840 5062352 61569.21 159840 5062352 61569.21 160023 5024546 52244.29 160023 5024546 52244.29 160023 5024546 52244.29 160023 5024546 52244.29 160023 5024546 52244.29 159669 5323292 50126.38 159669 5323292 50126.38 159669 5323292 50126.38 159669 5323292 50126.38 159669 5323292 50126.38 As you can see, the query is partially working as needed, except I'm getting duplicates for mgap_ska_id whereas it should be five individual mgap_ska_id numbers. And here is a sample of my data: mgap_ska_id account_manager_id mgap_growth mgap_recovery 5057810 64154 0 1160.78 5178114 24456 0 5773.42 5292421 160338 0 5146.04 5414091 24408 0 104.14 5057810 64154 0 1160.78 Can anyone see where I've gone wrong in my query and how/where I might correct the error so I get the 5 top individual customers (mgap_ska_id) instead of the duplicated top single customer?
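
    As a hedged illustration of the result the query is after (not a fix to the SQL itself), here is a small Python sketch of the intended logic: collapse the order rows to one total per (account_manager_id, mgap_ska_id) pair first, and only then rank within each manager. The sample rows are taken from the data excerpt above.

    ```python
    # Sketch of the intended grouping: one aggregated total per (manager, customer),
    # then the top 5 customers within each manager. Rows follow the sample data above.
    from collections import defaultdict

    # (mgap_ska_id, account_manager_id, mgap_growth, mgap_recovery)
    rows = [
        (5057810, 64154, 0, 1160.78),
        (5178114, 24456, 0, 5773.42),
        (5292421, 160338, 0, 5146.04),
        (5414091, 24408, 0, 104.14),
        (5057810, 64154, 0, 1160.78),  # repeated order line for the same customer
    ]

    # 1. Collapse order lines into a single total per (manager, customer).
    totals = defaultdict(float)
    for ska_id, manager, growth, recovery in rows:
        totals[(manager, ska_id)] += growth + recovery

    # 2. Rank customers inside each manager and keep the top 5 distinct ones.
    per_manager = defaultdict(list)
    for (manager, ska_id), total in totals.items():
        per_manager[manager].append((total, ska_id))

    for manager, customers in sorted(per_manager.items()):
        for total, ska_id in sorted(customers, reverse=True)[:5]:
            print(manager, ska_id, round(total, 2))
    ```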

    Read the article

  • memcached and PHP ... massive lag with sessions

    - by Ben Dauphinee
    I'm working on a new server built with Ubuntu 10.04, running php-fastcgi, nginx, and memcached. A phpinfo() script loads and works great, same as a test memcached script. For any script using sessions, page load time rockets through the roof. --- memcached.ini --- extension=memcached.so memcache.hash_strategy = "consistent" memcache.max_failover_attempts = 100 memcache.allow_failover = 1 session.save_handler = memcached session.save_path = "tcp://127.0.0.1:11211?persistent=1&weight=1&timeout=1&retry_interval=15" Let me know if you need to see any other configs.
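
    One quick way to split the problem in two is to confirm that the memcached daemon itself answers promptly, so any remaining lag points at the PHP session handler configuration rather than at memcached. A minimal Python check (host and port taken from the save_path above, everything else a diagnostic sketch, not a fix):

    ```python
    # Minimal check that memcached on 127.0.0.1:11211 responds quickly; a slow or
    # failed reply points at the daemon, a fast one at the session handler setup.
    import socket
    import time

    start = time.time()
    with socket.create_connection(("127.0.0.1", 11211), timeout=2) as conn:
        conn.sendall(b"version\r\n")            # memcached text-protocol command
        reply = conn.recv(1024).decode().strip()
    print("%s in %.3f s" % (reply, time.time() - start))
    ```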

    Read the article

  • Windows 7 x64 RTM USB Port Has Power But Won't Recognize Mouse/Keyboard/Anything

    - by ben
    I have an odd error that doesn't seem to fit in with any of the other odd Windows 7 x64 USB errors that have been kicked up on Google. Here we go: Uninstalled Tortoise SVN and clicked restart computer. My machine had been up for around 28 days. On reboot my mouse and keyboard failed to work anymore, couldn't log in. Tried every USB port I have on my Dell 390 and the ports on my Dell 19's, nothing worked. They had power but Windows would not respond when I manipulated the keyboard/mouse. Rebooted my computer and pressed F2 to get into the BIOS; my keyboard works fine in the BIOS. Keyboard and mouse work fine on other computers when using USB. Found adapters for keyboard and mouse to convert from USB to PS/2 ports, works fine. I'm actually typing this question on the same keyboard, same computer, just using PS/2 ports for my mouse and keyboard. It appears to be a Windows 7 x64 issue. Other things I have tried: Multiple other mice and keyboards, iPhone, all with no luck. Each one gets power, but Windows never tries to install drivers or sees that they are connected. Uninstalled and reinstalled all USB drivers. Drivers uninstall and reinstall fine and report no errors in Control Panel. In Power Management I disallowed Windows from turning off USB ports to save power. Installed the latest nVidia drivers for my graphics card, no change. Anyplace else I can look/try? Thanks!

    Read the article

  • Problems getting Cron to run processes tagged @reboot for LDAP users

    - by Ben Torell
    I have a lab of computers running Ubuntu 9.10. Most of the people who log on to these computers are users from an LDAP server, and not local users. We discovered that if an LDAP user has a crontab with an entry marked to be run @reboot, the command will not actually run upon the reboot of a machine. I'm pretty sure that this is because the cron daemon starts before networking is fully up, so the crontabs of any LDAP users aren't loaded and run or checked for @reboot. In fact, cron will ignore LDAP users' crontabs entirely after a reboot until that user runs crontab -e again and saves, or until the cron daemon is rebooted. We were able to fix one part of this problem by adding the following line to /etc/crontab: @reboot root /bin/sleep 45 && /etc/init.d/cron restart Thus, when cron starts back up upon a reboot, it waits for networking to get up, then restarts the cron daemon. That fixes the problem of crontabs not being read at all for LDAP users. However, since it's the cron daemon being restarted and not the computer, @reboot entries are ignored. Is there a way for a user to make a command run upon restarting the daemon, rather than a reboot? Or is there a better solution to this overall problem? Thanks.

    Read the article

  • Can a Double-Density Floppy drive be swapped out for a High-Density drive?

    - by Ben
    I'm trying to fix the floppy drive in my Roland MC-500. The floppy drive inside is a Matsushita DDF3-1 which doesn't seem to be in production anymore. The DDF3-2 is still available, however it is an HD drive. Does anyone know off the top of their heads if there is any issue with simply swapping the two drives? Could the power usage be different on the drives? Since the machine is expecting a 720k DD drive, are there any jumper settings to have the HD drive function as a DD drive?

    Read the article

  • Recommendations for a VMware web server environment with a load balancer

    - by Ben
    We run IIS websites on a VMware production server that pull image content and video content from a separate IIS instance on another server (media server). The media calls (images and video) are straight http:// calls and not using a streaming application. During peak traffic periods, we clone the production server five times and have a load balancer distribute traffic to all five production servers. The media server does not get ramped up. We noticed that the processing and resources on the media server get very taxed during this period. Would it make sense to run the IIS instance for the media server locally on the production server and have it cloned with the production servers, then have a rule on the load balancer route these media calls from the website? Would it be better to allocate more resources (memory and CPUs) to the media server VM and not clone it with the production servers? Recommendations are sincerely appreciated.

    Read the article

  • Trying to delete directory with "rm -rf", but get message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty": Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/ total 16 drwxr-xr-x 5 benjaminhocking staff 170 Aug 27 14:46 . drwxr-xr-x 3 benjaminhocking staff 102 Aug 27 15:28 .. Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/ rm: empty_directory/: Directory not empty Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/ rmdir: empty_directory/: Directory not empty If I try the same thing using Finder (dragging the folder to the Trash), I get the message The operation can’t be completed because the item “empty_directory” is in use. I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially in a directory that should've been deleted by a "make clean" command I issued prior to Terminal locking up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key. Edit to add: Another important piece of information I left off was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder in the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side of things like I normally do. I'll leave this unanswered for now in case anyone has a good answer for that.

    Read the article

  • Perform shell operation through secure shell

    - by Ben
    Is it possible to perform a shell operation from a bash script through a secure shell? Here is an example of why you may want to do this. Let's say you have a simple Unix machine that you only need to build and run on, but you want to do all of the development on another machine. I want to write a bash script that has the following functionality: scp a file to a location on the other machine; ssh to the other machine; cd into the correct directory; make; run the program; scp the results to a file on the original computer; exit ssh. Is this remotely possible? (Pardon the pun :p)
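
    This is certainly possible: ssh runs a command string on the remote machine and scp handles the copies, so the whole sequence can live in one script. Here is a rough sketch of that workflow written with Python's subprocess (a plain bash script issuing the same commands works just as well); the host, paths and program name are placeholders.

    ```python
    # Sketch of the described workflow: copy source over, build and run remotely,
    # copy the results back. Host, paths and program name are placeholders.
    import subprocess

    HOST = "user@buildbox"          # placeholder remote machine
    REMOTE_DIR = "/home/user/proj"  # placeholder build directory

    def run(*cmd):
        """Run one local command and stop on the first failure."""
        subprocess.run(cmd, check=True)

    # 1. Copy the source file over.
    run("scp", "file.c", f"{HOST}:{REMOTE_DIR}/")
    # 2. Build and run remotely; 'cd ... && make && ./program' executes in one
    #    remote shell, and the ssh session exits on its own when it finishes.
    run("ssh", HOST, f"cd {REMOTE_DIR} && make && ./program > results.txt")
    # 3. Copy the results back to the original computer.
    run("scp", f"{HOST}:{REMOTE_DIR}/results.txt", "results.txt")
    ```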

    Read the article

  • Visible Keylogger (i.e. not evil)

    - by Ben Haley
    I want keylogging software on my laptop for lifelogging purposes, but the software I can find is targeted towards stealth activity. Can anyone recommend keylogging software targeted towards personal backup? Ideal functionality: runs publicly (like in the task bar); easy to turn off (via keyboard shortcut is best, at least via button click); encrypted log; fast; free; cross-platform (Windows at least). The best I have found is pykeylogger, which does not attempt to be stealthy, but does not attempt to be visible either. I want a keylogger focused on transparency, speed, and security so I can safely record myself. *note: Christian has a similar question with a different emphasis

    Read the article

  • Why does Microsoft Windows' performance appear to degrade over time?

    - by Ben Aston
    Windows XP/2k3 and earlier (can't attest to Vista, but suspect it's the same) all appear to become more sluggish over time as applications are installed and uninstalled. This is not a scientifically tested observation, but more of a learned-through-experience piece of wisdom. (I've always suspected the registry as being behind the issue.) Does anyone have any concrete evidence of this degradation occurring, or is it just an invalid perception of mine?

    Read the article

  • Customize specific websites on chrome's new tab page

    - by Ben
    Chrome's new tab page offers the 8 websites you visit most. Unfortunately for one of the sites, 4chan.org, the main page is a directory, not somewhere I want to go. I would like the 4chan.org thumb to instead open 4chan.org/u/, taking me directly there instead of the main directory where I would have to manually find /u/. Anyone know how to make this happen? Thanks. Edit: I would like this to happen without totally destroying the new tab's default functionality.

    Read the article

  • Apache stopping downloads part way through

    - by Ben Smiley
    On my site there are some digital files which can be downloaded through a PHP script. The script works fine for small files, but large files (e.g. 115 MB) cannot be downloaded successfully. The connection dies after around 15 minutes, but it's not consistent - sometimes longer, sometimes shorter. I don't think it's a problem with the script timing out because the download time isn't consistent. Equally, it doesn't seem like a memory limit problem because the amount downloaded varies each time. Does anyone know of any Apache or PHP related settings which could cause this kind of problem?

    Read the article

  • Server setup scripts, patches and migrations

    - by Ben Swinburne
    I have written some scripts which I use to configure various servers in a uniform way. Each time I deploy a server I run the relevant scripts so that I know they're all configured the same. I then have some patch scripts - changes to the originals - which I can run to ensure that modifications to the original setup are applied on each server. E.g.: disable.sh - disable SELinux etc. to ensure the other scripts all run correctly; general.sh - Jailkit, AV, repos, RKHunter, security tweaks, uninstall unused bits, etc.; web.sh - installs and configures Apache2; 001_update_nr_licence_key.sh - update a licence key for a piece of software which has changed since its install in general.sh. I can run the first 3 without a problem, but when it comes to running patches I am a bit stuck. Is there a sensible way of doing these with some software? My current thought is to write to a log file the role of the server, be it web or db for example, and then note the name of each patch which has run. It could then iterate through a folder to find all patches for that role which it has not yet run and execute them. This seems a bit long-winded, however. Could someone advise me as to the best way I can keep my servers uniform?
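
    The "record what has run, then run whatever hasn't" idea from the question can be captured in a few lines. A rough sketch of that bookkeeping, assuming the server's role, a patches/<role> directory of numbered shell scripts, and a state-file location are conventions you pick yourself:

    ```python
    # Sketch: run, in name order, every patch for this server's role that is not
    # yet recorded in a state file. Role, directory layout and paths are assumptions.
    import subprocess
    from pathlib import Path

    ROLE = "web"                                   # e.g. recorded at deploy time
    PATCH_DIR = Path(f"/opt/patches/{ROLE}")       # patches named 001_..., 002_...
    STATE_FILE = Path("/var/lib/patches.applied")  # one patch filename per line

    applied = set(STATE_FILE.read_text().split()) if STATE_FILE.exists() else set()

    for patch in sorted(PATCH_DIR.glob("*.sh")):
        if patch.name in applied:
            continue
        subprocess.run(["bash", str(patch)], check=True)  # stop if a patch fails
        with STATE_FILE.open("a") as f:
            f.write(patch.name + "\n")
    ```

    Keeping the state file on the server itself means each machine only ever replays the patches it has missed, which is the behaviour described in the question.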

    Read the article

  • SpeedTracer NETWORK_RESOURCE_RESPONSE vs NETWORK_RESOURCE_FINISH

    - by Ben Flynn
    I'm using SpeedTracer with Google Chrome to measure the load times of requested resources. The SpeedTracer site says: NETWORK_RESOURCE_RESPONSE "Indicates that the renderer has started receiving bits from the resource loader" NETWORK_RESOURCE_FINISH "Indicates a resource load is successful and complete." In my mind that means we would always see a network resource response (bytes are arriving) before we see a finish (all bytes received). This doesn't seem to be the case at all. Here is a sample: Request Timing @33519ms for 926ms Response Timing @34445ms for -847ms Total Timing @33519ms for 78ms I'm guessing response time isn't supposed to be negative. Can someone explain this, or is this a bug? I'm using Chrome 10.0.612.3 dev with a SpeedTracer I downloaded today.

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts the function that requires the cloudservers gem: package { "rubygems": ensure => installed, provider => "gem"; } package { "cloudservers": ensure => installed, provider => "gem", require => Package["rubygems"]; } class hosts::us { $hosts = get_hosts("us") hostentry { $hosts: } } define hostentry() { $parts = split($name, ',') $address = $parts[0] $ip = $parts[1] $aliases = $parts[2] host{ $address: ip => $ip, host_aliases => $aliases } } Is there a way to stop the function getting run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way that I can add dependencies somewhere in the functions folder that will be available to the function?

    Read the article

  • WAMP server won't run with PHP 5.3.4 but will with PHP 5.2.11

    - by Ben Williams
    I have a 64bit Windows 7 Professional machine. I'm running WampServer Version 2.1 with Apache 2.2.4. It was installed on a clean machine. I'm using the default ini/conf files as they come. Wamp is installed in C:\wamp\, with php5.2 at C:\wamp\bin\php\php5.2.11 and php5.3 at C:\wamp\bin\php\php5.3.4. Both folders have the same permissions. When I run WAMP with 5.2.11 picked, it starts fine. When I run it with 5.3.4 picked, there are no errors in the Apache or PHP error logs, but I get The Apache service named reported the following error: httpd.exe: Syntax error on line 115 of C:/wamp/bin/apache/apache2.2.4/conf/httpd.conf: Cannot load C:/wamp/bin/php/php5.3.4/php5apache2_2.dll into server: The Apache service named is not a valid Win32 application. in my system application error logs. 5.2.11 calls C:/wamp/bin/php/php5.2.11/php5apache2_2.dll and that doesn't throw an error. What am I doing wrong?

    Read the article

  • mystery Internet traffic to port 445

    - by Ben Collver
    Recently, I noticed traffic from the office network to TCP port 445 on the Internet [a]. Below are the Linux firewall log entries to Facebook's network [b] and Google's network [c]. I would like to identify the source of this traffic. My first guess is that Facebook and Google might be using multiple TCP ports for SSL load balancing. However, I could not confirm this based on the web proxy logs. What else might it be? [a] http://support.microsoft.com/kb/204279 [b] Sep 4 08:30:03 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.131 DST=69.171.237.34 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=14287 DF PROTO=TCP SPT=51711 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0 [c] Aug 28 06:02:41 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.115 DST=173.194.33.47 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=4558 DF PROTO=TCP SPT=49294 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0
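
    Before chasing this on the clients, it can help to see which internal machines generate the traffic and how often, straight from the firewall log. A small sketch that tallies source/destination pairs for outbound TCP 445, assuming iptables log lines like the two quoted above and a log path of /var/log/messages:

    ```python
    # Sketch: count internal source IPs generating outbound TCP/445, based on
    # iptables log lines like the two quoted above. The log path is an assumption.
    import re
    from collections import Counter

    pattern = re.compile(r"SRC=(?P<src>\S+) DST=(?P<dst>\S+).*DPT=445\b")
    hits = Counter()

    with open("/var/log/messages") as log:   # wherever firewall01 writes its log
        for line in log:
            match = pattern.search(line)
            if match:
                hits[(match.group("src"), match.group("dst"))] += 1

    for (src, dst), count in hits.most_common():
        print(f"{src} -> {dst}:445  {count} packets")
    ```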

    Read the article
