Search Results

Search found 15935 results on 638 pages for 'background color'.


  • can't get php mail() working on Ubuntu desktop version with sendmail and postfix

    - by user36428
    I'm running a Ubuntu 9.10 LAMP stack and trying a simple email test with PHP, but no emails get sent:

        mail("[email protected]", "eric-linux test", "test") or die("can't send mail");

    I get no errors from PHP when running that script. In my php.ini I have:

        sendmail_path = /usr/lib/sendmail -t -i

    Here is what's running:

        $ sudo ps aux | grep sendmail
        eric 2486 0.0 0.4 8368 2344 pts/0 T 14:52 0:00 sendmail -s "Hello world" [email protected]
        eric 8747 0.0 0.3 5692 1616 pts/2 T 16:18 0:00 sendmail
        eric 8749 0.0 0.3 5692 1636 pts/2 T 16:18 0:00 sendmail start
        eric 9190 0.0 0.3 5692 1636 pts/2 T 19:12 0:00 sendmail start
        eric 9192 0.0 0.3 5692 1616 pts/2 T 19:12 0:00 sendmail
        eric 9425 0.0 0.3 5692 1620 pts/1 T 19:37 0:00 sendmail
        eric 9427 0.0 0.3 6584 1636 pts/1 T 19:37 0:00 sendmail restart
        eric 9429 0.0 0.3 5692 1636 pts/1 T 19:38 0:00 /usr/lib/sendmail restart
        eric 9432 0.0 0.1 3040 804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail

    When I run sendmail start it just hangs there doing nothing. I also installed postfix to see if it would help, but it didn't. I tried port 25:

        eric@eric-linux:~$ telnet localhost 25
        Trying ::1...
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 eric-linux ESMTP Postfix (Ubuntu)

    Thanks
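    A quick way to isolate whether PHP or the MTA is at fault (a sketch; /usr/sbin/sendmail and /var/log/mail.log are the usual Ubuntu paths, so double-check them on your system) is to bypass PHP entirely and hand a message straight to the sendmail wrapper that postfix installs, watching the mail log for its verdict:

        # feed a message directly to the MTA, no PHP involved
        printf 'Subject: MTA test\n\nhello\n' | /usr/sbin/sendmail -i [email protected]

        # watch postfix accept or reject it in real time
        tail -f /var/log/mail.log

    If that delivers, pointing sendmail_path at the same wrapper is worth a try. Note also that every sendmail process in the ps listing above is in state T (stopped), which suggests jobs suspended from a shell rather than a working daemon.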

  • EXCEL: generate a character in one column of a row, only if other columns in the same row are not blank

    - by Simon
    My spreadsheet keeps track of when patients leave a specific floor of the hospital. There are columns in which where each patient goes is documented (one column for "home", another for "rehab facility", another for "other floor", etc.). Only when the patient leaves the hospital altogether does it count as a discharge, in which case the "discharge" column needs to have something in it. What formula can I use to generate, say, an "x" in the "discharge" column if certain "where they went" columns in the same row contain something, but not if there is nothing in any of them? Currently, to accomplish this I am using =IF(OR(M30,N30,O30,P30,Q30,R30,S30),"x") in row 3 of the "discharge" column (rows 1 and 2 are headings), and I have used "fill down" in all subsequent rows. To suppress the "FALSE" this formula yields when the condition is not true, I have applied conditional formatting to the entire column so that if the value is FALSE, the font is white (the same colour as the background). Is there a more efficient, elegant and idiot-proof way of doing this? The conditional formatting of text colour could potentially confuse everyone but the person who built the spreadsheet (me).
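    One simpler alternative (a sketch; it assumes the "where they went" cells are M through S on the same row) is to count the non-blank cells rather than OR-ing their raw values, and to return an empty string instead of FALSE, which removes the need for the white-on-white conditional formatting:

        =IF(COUNTA(M3:S3)>0, "x", "")

    COUNTA counts cells containing anything at all, so the formula behaves the same whether the destination columns hold text, dates or numbers.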

  • Advanced Web Browsing: hitting next with keyboard (single keystroke)

    - by Dan
    I have a website for class that is literally thousands of pages long, with a next button at the bottom like this: Prev 1 2 3 4 Next. Here is the underlying code, if that helps:

        <a href="javascript:gotoModuleObjective(1,1,34,17, 1, 0);">Prev</a>
        </td>
        <td align="center" width="10">
        <font color="CC0000">1</font>
        </td>
        <td align="center" width="10">
        <a href="javascript:gotoModuleObjective(1,24,33,2,1,0);">2</a>
        </td>
        <td align="center" width="10">
        <a href="javascript:gotoModuleObjective(1,24,33,3,1,0);">3</a>
        </td>
        <td align="center" width="10">
        <a href="javascript:gotoModuleObjective(1,24,33,4,1,0);">4</a>
        </td>
        <td align="right" style="white-space:nowrap;" width="30">
        <a href="javascript:gotoModuleObjective(1, 24,33,2, 1, 0);">Next</a>
        </td>

    The numbers change depending on where you are. I would like to be able to hit the Next button from the keyboard. I am using Windows, but can switch to any browser if this is only possible in a given browser. If this can be done with just one keystroke, that would be wonderful: just hitting the right arrow would automatically click the link called Next and bring me to the next page. Is this possible? Thanks, Dan
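    One way to get a single keystroke (a sketch, untested against the actual site) is a Greasemonkey/Tampermonkey userscript that binds the right-arrow key to whichever link reads "Next":

        // ==UserScript==
        // @name   next-on-right-arrow
        // @match  *://course-site.example/*
        // ==/UserScript==
        // (the @match URL above is a placeholder for the real course site)
        document.addEventListener('keydown', function (e) {
            if (e.key !== 'ArrowRight') return;
            var links = document.getElementsByTagName('a');
            for (var i = 0; i < links.length; i++) {
                if (links[i].textContent.trim() === 'Next') {
                    links[i].click();   // fires the javascript:gotoModuleObjective(...) href
                    break;
                }
            }
        });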

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The code below is called after the one above. On Apache it requires about 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run under nginx, the progress request never finishes even a single request until the first AJAX request that sent the data is done. If I change async to true in the above, one executes every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and another from within the kernel.
            # More efficient than read() + write(), since that requires transferring data to and from user space.
            sendfile on;

            # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                # error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled-domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP Server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
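    One cause worth ruling out, since it produces exactly this "status calls queue behind the long request" symptom (an assumption on my part, as the question doesn't show the PHP side): PHP's default file-based session handler holds an exclusive lock per session, so a status endpoint that calls session_start() blocks until the long-running request releases the session. A minimal sketch of releasing the lock early in the /process-status handler:

        <?php
        // placeholder logic, not the site's actual code
        session_start();                      // needed only if auth lives in the session
        $process = $_GET['process'];          // same parameter the JS above sends
        session_write_close();                // drop the session lock so this request
                                              // stops queuing behind the long one
        $progress = 42;                       // placeholder: look up real progress here
        header('Content-Type: application/json');
        echo json_encode(array('progress' => $progress));

    Differences in buffering between the two FastCGI setups could explain why Apache appeared immune, but that part is conjecture.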

  • Security and data backup for Ubuntu usb installation

    - by AMS949
    Due to encryption on my corporate laptop, I opted to install Ubuntu 9.10 on a flash drive and just use that as my hard drive. I tried VMware, but it crashed my XP a couple of times. Now I have a couple of concerns, since I am totally new to Ubuntu and Linux. First, would it be possible for me to transfer my installation to a new USB drive? I have a 4GB drive now and it may fill up soon, and I don't seem to be able to see my actual files when I browse the USB drive. I also tried copying all the files from this USB drive to another and booting from it, but that failed. Second, whenever the system boots up I am never prompted for a password; it always logs in as the username ubuntu. Which I guess means that if I lose my USB drive, my data is wide open. Is there a way to secure it, or to use users and groups as on a regular hard drive installation? As background, I created this by going into a working Ubuntu installation, System - Administration - USB Startup Disk Creator (was that the right way to start with?) Thanks!
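    For the transfer question, a block-level clone is the usual route (a sketch; /dev/sdb and /dev/sdc are placeholders, so verify the device names with sudo fdisk -l first, because dd will overwrite whatever you point it at):

        # clone the old stick onto the new one, byte for byte
        sudo dd if=/dev/sdb of=/dev/sdc bs=4M
        sync    # flush writes before unplugging

    A plain file copy misses the boot sector, which is likely why the copied stick wouldn't boot; a byte-for-byte clone carries it across.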

  • Memory upgrade to 8GB on unibody MacBook

    - by dlamblin
    I bought the 13" unibody MacBook one month prior to its upgrade into the new, improved 13" unibody MacBook Pro (with unopenable battery compartment, an extra 3 hours of battery life, wider color gamut, SD slot, FireWire 800 port, and a new 8GB memory limit). My MacBook says it is limited to 4GB of DDR3 1066MHz memory, in 2 SO-DIMMs. But that was written back when you couldn't get two 4GB SO-DIMMs. Now that you can get them, the very similar MacBook Pro ships with 4GB standard and can be upgraded to 8GB. I've asked Apple reps, and I'm repeatedly told that my model cannot be upgraded to 8. When I ask for the reason they always say: "Because that's the published limit at the time your Mac was built." I find this unconvincing. If they said: "Because the memory controller in the chipset is limited to 4GB, despite being seemingly identical to the memory controller in the newest MacBook model's chipset," then I'd just take their word for it. Has anyone tried it, or found any research as to whether two 4GB DDR3 1066MHz SO-DIMMs can be installed in the unibody MacBook without FireWire?

  • Live Mesh starts exactly once on fresh Win7 Ultimate installation

    - by Reb.Cabin
    I did a fresh install of Windows 7 Ultimate 64-bit on a formatted drive on a refurbed Lenovo PC, applied all 102 (!) Windows updates, and Windows seems to be working fine. No quirks installing, no apps, no junkware; just straight, legal Win7 Ultimate right from an unopened 2009 Microsoft box. Ok, breathe a sigh -- install Live Mesh (no Messenger, no Mail, no Writer, no Photo, none of the rest of the Windows Live freeware). Set up my shares, let it run overnight, watch MOE.exe in the Task Manager perf pane to make sure it's all settled down, reboot. Ok, check that MOE is running and files are getting updated properly from other machines in the mesh. Great. HOWEVER -- when I try to launch the Windows Live Mesh app from the Start jewel, I get a brief hourglass, then nothing. Reboot. Same story. The result: the shares I already posted seem to be syncing properly, but I can't run the app, so I can't add and delete shares. The background process MOE seems to run, but I can't get the app going. By the way, the reason I did this fresh install is that I had exactly the same experience running Vista, so I wiped the machine hoping it would solve this nasty problem. Imagine my surprise! Will be grateful for clues, advice, etc. Please & thanks!

  • Multiple PHP SAPI configuration

    - by DTest
    I'm trying to build PHP for use both as an Apache shared module (--with-apxs2) and with the php-cgi binary (FastCGI) on Mac OS X 10.6. I'm using this ./configure:

        ./configure --prefix=/usr/local/PHP \
            --with-apxs2=/usr/local/apache/bin/apxs \
            --disable-ipv6 \
            --enable-cgi \
            --with-curl \
            --with-mysqli=/usr/local/mysql/bin/mysql_config \
            --with-openssl=/usr \
            --enable-ftp \
            --enable-shared \
            --enable-soap \
            --enable-sockets \
            --enable-zip \
            --with-zlib-dir

    It builds the Apache php5.so module just fine, but in /usr/local/PHP/bin there is no php-cgi file. If I build it without the --with-apxs2 option (and indeed, I don't even need the --enable-cgi option), the php-cgi file gets built with no problems. Background on my setup: PHP 5.3.4, Apache 2.2.14, Mac OS X 10.6, Tomcat with JavaBridge (which is why I need the php-cgi file). Without the apxs2 option, /usr/local/php/bin/php -v produces:

        PHP 5.3.4 (cli) (built: Dec 21 2010 21:35:14)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    and /usr/local/php/bin/php-cgi -v produces:

        PHP 5.3.4 (cgi-fcgi) (built: Dec 21 2010 21:35:12)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    My question is: what am I not understanding about PHP SAPIs that won't allow building the two modules at the same time? Also, can I build it --with-apxs2 the first time, then make clean and rebuild in the same PHP directory /usr/local/php for the CGI files without issue?
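    For what it's worth, PHP's build system has historically produced only one "primary" SAPI per configure run (the CLI tags along, but the CGI binary is skipped when an Apache module is requested), so the commonly suggested approach is exactly the two-pass build asked about at the end (a sketch; re-use the full option list from above in both passes):

        # pass 1: Apache module
        ./configure --prefix=/usr/local/PHP --with-apxs2=/usr/local/apache/bin/apxs ...
        make && sudo make install

        # pass 2: same prefix, CGI binary instead of the module
        make clean
        ./configure --prefix=/usr/local/PHP --enable-cgi ...
        make && sudo make install   # installs bin/php-cgi next to pass 1's module

    Keeping the rest of the options identical between passes avoids ending up with two SAPIs built against different extensions.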

  • How to enable synergy 24800 (or some other port) through firewalld

    - by ndasusers
    After upgrading to Fedora 18, Synergy, the keyboard-sharing system, was blocked by default. The culprit was firewalld, which happily ignored my previous settings made in the Fedora GUI and backed by iptables.

        ~]$ ps aux | grep firewall
        root 3222 0.0 1.2 22364 12336 ? Ss 18:17 0:00 /usr/bin/python /usr/sbin/firewalld --nofork
        david 3783 0.0 0.0 4788 808 pts/0 S+ 20:08 0:00 grep --color=auto firewall

    Ok, so how to get around this? I did sudo killall firewalld for several weeks, but that got annoying every time I rebooted, so it was time to look for some clues. There were several one-liners, but they did not work for me; they kept spitting out the help text. For example:

        ~]$ sudo firewall-cmd --zone=internal --add --port=24800/tcp
        [sudo] password for auser:
        option --add not a unique prefix

    Also, posts that claimed this command worked also stated it was temporary, unable to survive a reboot. I ended up adding a file to the config directory to be loaded in on boot. Would anyone be able to have a look at that and see if I missed something? Though Synergy works, when I run the list command I get no result:

        ~]$ sudo firewall-cmd --zone=internal --list-services
        ipp-client mdns dhcpv6-client ssh samba-client
        ~]$ sudo firewall-cmd --zone=internal --list-ports
        ~]$
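    For the record (a sketch based on standard firewall-cmd syntax): --add-port is a single option that takes port/protocol, which is why --add --port=... only produced the help text, and --permanent is what makes the rule survive a reboot:

        sudo firewall-cmd --permanent --zone=internal --add-port=24800/tcp
        sudo firewall-cmd --reload
        sudo firewall-cmd --zone=internal --list-ports   # should now print 24800/tcp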

  • ghc6 install trouble: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    - by olimay
    Having trouble installing ghc6 on Ubuntu Maverick via apt. Here's what seems to be the relevant error that comes up when I try to (apt-get|aptitude) install ghc6:

        A package failed to install. Trying to recover:
        Setting up ghc6 (6.12.1-13ubuntu1) ...
        ghc-pkg: /home/opm/.ghc/i386-linux-6.12.1/package.conf.d/unix-compat-0.2-edefa7bced91ebe610d455bab466e200.conf: hGetContents: invalid argument (invalid UTF-8 byte sequence)

    (Here's the full output, if you're interested: http://paste.ubuntu.com/566475/ ) This still happens after apt-get clean and apt-get update. My searching around has not really helped me understand what's going on, except that it might be caused by a mismatch in locale. So here's the output of locale too:

        LANG=en_US.utf8
        LANGUAGE=en_US:en
        LC_CTYPE="en_US.utf8"
        LC_NUMERIC="en_US.utf8"
        LC_TIME="en_US.utf8"
        LC_COLLATE="en_US.utf8"
        LC_MONETARY="en_US.utf8"
        LC_MESSAGES="en_US.utf8"
        LC_PAPER="en_US.utf8"
        LC_NAME="en_US.utf8"
        LC_ADDRESS="en_US.utf8"
        LC_TELEPHONE="en_US.utf8"
        LC_MEASUREMENT="en_US.utf8"
        LC_IDENTIFICATION="en_US.utf8"
        LC_ALL=

    Any ideas? Additional background: this all seems very strange to me, because I used to have ghc6 installed correctly; I use XMonad as my main window manager most of the time. I tried to install haskell-platform (through apt), which failed and told me that there was something wrong with ghc6, so I reinstalled ghc6 and began to get the above error message.
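    Since the file ghc-pkg chokes on lives under /home/opm/.ghc, one workaround often suggested for a corrupt per-user package database (an assumption that this is the cause; move it rather than delete it, so it can be restored) is to get the directory out of the way and let apt finish the configure step:

        mv ~/.ghc ~/.ghc.bak      # per-user package db; ghc-pkg will regenerate it
        sudo apt-get install -f   # re-run the failed configuration step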

  • Debian Squeeze Linux 9p virtfs guest mount failure

    - by Tero Kilkanen
    First some background information on the server. Host OS: Debian Linux Squeeze + qemu-kvm version 1.0+dfsg-8~bpo60+1. Guest OS: Debian Linux Squeeze. I use qemu-kvm via libvirt. I have set up 9p VirtFS with the following in the guest's XML config:

        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/srv/www'/>
          <target dir='wwwdata'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </filesystem>

    That is, I want to share /srv/www to the guest OS using the mount tag wwwdata. When I try to mount the VirtFS share from the guest, I get an error message:

        root@server:~# mount -t 9p -o trans=virtio,version=9p2000.L2 wwwdata /srv/www/
        mount: wwwdata: can't read superblock

    I also tried virtfs target dir/mount_tag www at first and got the same error message. However, I was able to mount the VirtFS share using mount tag www1111, or www1 or similar. Some more notes on this one: dmesg doesn't show anything useful either in the guest or the host. The only sign is this entry in the guest dmesg:

        [   36.054936] Installing v9fs 9p2000 file system support

    Does anyone know how to get this working correctly? Google gives no useful information on this issue; I've tried several searches.
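    One detail worth double-checking (an assumption on my part, since the kernel 9p client normally spells the protocol version 9p2000.L): the trailing 2 in version=9p2000.L2 may simply be a typo, and an unrecognized version string can surface as exactly this kind of mount failure:

        mount -t 9p -o trans=virtio,version=9p2000.L wwwdata /srv/www/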

  • Software to clean up photos of whiteboards and documents?

    - by Norman Ramsey
    I take a lot of photos of whiteboards, blackboards, and so on for teaching purposes (examples online through May 2010). I'm interested in cleaning them up for archival purposes, preferably using Linux. The commercial products ClearBoard and PhotoNote are priced a little aggressively for my purposes, plus my students would like to have this capability too. Does anyone know of any good, open source software for:

      • converting photographs to images with just a few colors?
      • eliminating perspective distortion?
      • removing unwanted junk from around the edges of an image?

    or anything like that? I'm imagining that I start out with a picture of my whiteboard using red and black markers, and I end up with a three-color image using just white, red, and black. Or I photograph a laser-printed document and end up with a clean black-and-white image. I have tried standard tools that reduce the number of colors in an image, and they do a terrible job, probably because they are trying to reproduce the uneven illumination of the original image. Command-line Linux tools would be ideal.
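    For the color-reduction step by itself, ImageMagick may get surprisingly far (a sketch; the blur radius and posterize level are guesses to tune per photo). Dividing the image by a heavily blurred copy of itself cancels out the uneven illumination that defeats naive color reducers:

        # flatten the lighting, then normalize contrast
        convert board.jpg \( +clone -blur 0x30 \) -compose Divide_Src -composite -normalize board-flat.png

        # a hard reduction to a few colors now behaves much better
        convert board-flat.png -posterize 3 board-clean.png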

  • How Do I Secure WordPress Blogs Against Elemento_pcx Exploit?

    - by Volomike
    I have a client who hosts several WordPress 2.9.2 blogs. They are getting defaced somehow via the Elemento_pcx exploit, which drops these files in the root folder of each blog:

        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 default.htm
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 default.php
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.asp
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.aspx
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.htm
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.html
        -rwxr-xr-x  1 userx userx 1459 Apr 16 04:25 index.php*

    It overwrites index.php. A keyword inside each file is "Elemento_pcx". It shows a white fist on a black background with the phrase "HACKED" in bold letters above it. We cannot determine how it gets in to do what it does. The wp-admin password isn't hard, but it's also not very easy either. I'll change it up a little to show you what the password sort of looks like: wviking10. Do you think it's using an engine to crack the password? If so, how come our server logs aren't flooded with wp-admin requests as it runs down a random password list? The wp-content folder has no changes inside it, but it runs as chmod 777 because wp-cache required it. Also, the wp-content/cache folder runs as chmod 777 too.
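    Whatever the entry point turns out to be, two standard hardening steps apply here (a sketch; the path is a placeholder, and ownership details depend on the host): upgrade off the 2.9.x branch, which has published vulnerabilities, and pull wp-content back from world-writable:

        # directories traversable, files read-only for everyone but the owner
        find /path/to/blog/wp-content -type d -exec chmod 755 {} \;
        find /path/to/blog/wp-content -type f -exec chmod 644 {} \;

    If wp-cache genuinely needs write access, granting the web server's user write permission on wp-content/cache alone is far safer than 777 across the whole tree.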

  • Slow Citrix connection related to mapped network drives

    - by George
    I have this weird issue with Citrix being slow, and maybe users are just being a little dramatic, but I am curious as to why it happens. Let me give you a little background. Citrix is running off of a Windows 2003 server; the TS profiles and file server were located on the same server until recently, when we moved our file server over to a new server with tons of space. We now have Citrix on one server, TS profiles on another, and the file server on a third. We use logon scripts to map home drives, the shared drive, etc. Up until we made the file server move, the logon process took several seconds, and most users couldn't even notice the logon script being executed as they logged on. Now it takes upwards of several minutes, and users can see the logon script being executed at a slow pace, one line at a time. The only new variable in this whole scenario is the new file server. All the servers are physically located in the same location and on the same subnet. So I guess my question is: can anyone explain the sudden sluggishness? And are there any tools I can use to troubleshoot the issue? Thanks!

  • What is the latest on Microsoft Expression Studio licensing?

    - by DanM
    In the past, there's been an issue with Microsoft not allowing you to deactivate an Expression Studio key. Basically, you get two keys per license. If you assign both keys (say one to a desktop and one to a laptop), then you upgrade to a new machine (say you replace your laptop or upgrade some of the hardware), you have to buy a new copy of Expression Studio ($600 for Ultimate). This seems ludicrous to me, and I'm wondering if anyone knows if this policy is still in place. I can't seem to find a EULA online anywhere, so I don't know where to find this information. I know my laptop is due for replacement soon, and I want to know if I'm going to have to sink $600 into a software product I already purchased. For background, please refer to this thread on the Microsoft Expression forums: http://social.expression.microsoft.com/Forums/en-US/general/thread/da5587bc-b098-4c6a-9a56-af3608d940d0 Note that this thread is locked. Microsoft doesn't seem to want people to discuss this. This is one reason I'm posting here rather than on that site.

  • EEE PC dropbox server running 24/7

    - by microspino
    I'd like to create a mini Dropbox and print server on a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I could accept just workday hours too, but the other two options would be better).

    Dropbox mini server: I mean I will have a 90GB Dropbox on every computer on my LAN syncing with it, and the one on it syncing to the web.

    Print server: I have a Samsung SCX 4521F (fax/print/scan/copy), Samsung ML2010, HP LaserJet P1006, HP Color LaserJet CP1215, HP OfficeJet Pro K8600, and HP DesignJet 500. All of them are now connected using little print servers, and I want to get rid of those by hooking everything to this mini server. The fax/print/scan/copy machine needs to stay connected to a PC so I can use the software that comes with it; the mini server would save me on this too.

    Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like to make people use it from/to their computers through the mini server.

    I'm thinking of a recent EeeBox machine because I've heard good things about Atom CPUs, and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you. Best of all would be: if you have something similar running for a long time; if you disagree with this hardware choice and would suggest some other device; if you see any issues with my printing setup; anything else ;) My budget is from zero (using the right software to build something on top of an old PC) to 500€ max.

  • User Receiving Partial Downloads

    - by JGB146
    Some background: I run a subscription-based poker strategy video website. Our videos range in length from 30 minutes up to 80 minutes, and in size from 20MB up to 500MB. The site is on a shared server with hostmonster.com. One of my users is having problems downloading some of the larger video files: he reports problems with anything over 100MB. Basically, he's only getting part of the file, which means the video stops before the end. He has tried multiple computers from multiple locations. He reports that he is able to successfully download 500MB files from other sites. He is using Internet Explorer (version unknown) as his browser. I have suggested that he try Firefox or Chrome to see if their download managers work any better for him, but as of yet I have not heard anything back. He has also reported that his downloads do not report any file size. I see the same thing (no file size reported), but I have not experienced any problems with the downloads themselves. We pass the downloads through a PHP script which verifies login information and records the download to our database before returning the file. I suspect this is why there is no file size reported. What else should I ask the user? What other things could he or I try?
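    The missing file size is a useful clue: if the PHP pass-through never sends a Content-Length header, the client has no way to tell a finished download from a dropped connection, and flaky routes simply end early. A minimal sketch of what the script could send before streaming the file (names and paths are placeholders, not the site's actual code):

        <?php
        // after verifying login and recording the download...
        $path = '/videos/example-video.mp4';              // placeholder path
        header('Content-Type: video/mp4');
        header('Content-Length: ' . filesize($path));     // lets clients detect truncation
        header('Content-Disposition: attachment; filename="example-video.mp4"');
        set_time_limit(0);   // a 500MB transfer can outlive max_execution_time otherwise
        readfile($path);

    The set_time_limit(0) line matters on shared hosts: if PHP's execution limit kills the script mid-stream, slow downloaders get exactly this kind of silent truncation.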

  • How to make Exchange 2003 non-authoritative

    - by Romski
    Background: We are a small company with an internally hosted Exchange 2003 server. It receives email for two domains (the company was renamed a few years back). For the sake of argument, the domains are oldname.com and newname.com. We have moved newname.com to a hosted Exchange service, and our DNS record is correctly routing emails. Our internal server still receives email for oldname.com, although we have asked our hosting company to accept emails for that domain as well.

    Problem: My problem is that emails generated internally from monitoring software, printers, etc. are being caught by our (defunct) internal server and delivered to the old mailboxes. I believe that what is happening is that our internal Exchange server considers itself to be the authoritative server for newname.com. I think it must be looking in Active Directory for a mailbox and delivering internally without ever going outside.

    Attempt to fix: I started to follow the article here: http://support.microsoft.com/kb/321721. I removed the SMTP recipient policy for newname.com, added a dummy address and made it primary, and answered yes to updating the associated emails. I then restarted the Microsoft Exchange Routing Engine and SMTP services, but emails are still being routed internally. Is there a way to force the Exchange server to route all emails for the domain newname.com to the new hosted service?

  • Need help diagnosing my machine

    - by Tom Collins
    I have something that just slows my computer to a crawl sometimes, without running anything big. Yesterday all I had running (besides background apps) were Firefox and Windows Explorer, and I could barely even switch screens. Nothing shows up in Task Manager as hogging CPUs. I have all non-essential services (MySQL and MSSQL) stopped unless I need them. I made some restore points not long ago, but they disappeared. This is a development machine with a LOT of apps installed, so I really, really do not want to reinstall Windows. So, what I'm looking for are ideas or tools I can use to help diagnose this problem. The only clues I have are that it started right after I installed Office 2013 (with Office 2010 still installed as well), installed Visual Studio 2012 (also keeping 2010 as a co-install), and installed MSSQL 2012 (an upgrade from 2008, no co-install). Also, the computer runs fine in Safe Mode. I've just run out of ideas of what to check. Any help or suggestions would be much appreciated. Thanks. P.S. I'm running Win 7 Pro (x64). Office is also 64-bit. Visual Studio and MSSQL are 64-bit if that option was available (not sure).

  • Utility to store/cache all web pages and YouTube videos

    - by jonathanconway
    I found myself in the following situation. I'm travelling abroad with my laptop. I connect to a WiFi point and do a bit of browsing and play a YouTube video or two. Then I disconnect and hop on either a plane or taxi. Now I want to go back to some of the webpages I was browsing before and continue reading them, or watch some more of that YouTube video. Unfortunately it seems like none of these resources are cached, or if they are, I have no idea how to access them. Here's what I'd like: A utility that starts when my computer boots and sits in the background, silently caching all the web pages that I view. Not only that, but also the resources such as YouTube videos. Later, when I re-navigate to a site while disconnected, the browser automatically pulls the pages from my cache rather than giving me a 404 error. Or I can click an icon in the system tray and see a list of all the pages/videos in the cache and view any that I like. I'm sure Internet Explorer had a feature like this at some point, like "Offline Mode" or something. But these days it doesn't seem to work. Even when I select that option I still can't view pages that I'm certain I downloaded before. So has the utility I'm talking about been developed yet?

  • Windows 7 hangs while loading desktop

    - by Joshua
    I am facing a weird problem. My computer hangs while loading the desktop: only the background shows up; no icons or bars load. If I power the system on and off about 4-6 times, I may be able to use it normally. The desktop loads normally after rebooting or in safe mode; this only occurs when I start the system normally. I've tried several things to fix it, such as removing all start-up items, but it still doesn't solve the problem. What should I do? I found three major errors in Event Viewer:

        Source: Microsoft-Windows-DistributedCOM
        Level: Error
        DCOM got error "1084" attempting to start the service WSearch with arguments "" in order to run the server: {7D096C5F-AC08-4F1F-BEB7-5C22C517CE39}

        Source: Microsoft-Windows-DistributedCOM
        Level: Error
        DCOM got error "1084" attempting to start the service WSearch with arguments "" in order to run the server: {7D096C5F-AC08-4F1F-BEB7-5C22C517CE39}

        Source: Service Control Manager
        Level: Error
        The Network List Service service depends on the Network Location Awareness service which failed to start because of the following error: The dependency service or group failed to start.

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The problem: We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to preserve multiple versions of each file so we can back out of our users' mistakes more easily.

    Background information: I work in a large Windows-based enterprise with a centralized IT section responsible for all backups. Their backups are geared towards disaster recovery, not user error, and any non-disaster recovery requires upper-level management approval. Several times we have noticed that our backups failed and we weren't notified. I do not have administrative rights to the server or to my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My proposed solution: What I would like to do is write a batch file using Robocopy with the /MIR option, along with Mercurial SCM to store all versions of each file. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /MIR will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
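    The Robocopy side looks solvable with its exclusion flags alone (a sketch; paths, retry counts and the log location are placeholders): a directory excluded with /XD is skipped entirely, so /MIR stops treating the destination's .hg folder as an extra to delete:

        rem snapshot current state, then mirror -- .hg survives because /XD skips it
        hg --cwd D:\Backup add .
        hg --cwd D:\Backup commit -m "nightly snapshot"
        robocopy \\fileserver\critical D:\Backup /MIR /XD .hg /R:2 /W:5 /LOG:D:\backup.log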

  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        ............................................................done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick google tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that sudo apt-get install php5-dev fails with this output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point; I'm guessing the fact that the version isn't entirely numeric is throwing apt-get off. I can probably install xdebug manually (though I've never done this before, so picture me with a clueless-newb deer-in-headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl/aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize in some other way (e.g. via pear or pecl)?
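    Reading the message the other way around may resolve the confusion (a sketch of the usual diagnosis, not specific to this Maverick package set): a Conflicts: libtool (>= 2.2) line means php5-dev refuses to be installed alongside any libtool that new, so 2.2.6a-4 being greater than 2.2 is exactly the problem rather than a contradiction. Checking what pulled libtool in, and removing it if nothing essential needs it, is the usual next step:

        apt-cache rdepends libtool      # see what, if anything, depends on it
        sudo apt-get remove libtool     # only if the list above is expendable
        sudo apt-get install php5-dev   # provides phpize for 'pecl install xdebug'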

  • Error installing new rails version. Failed to build gem native extension.

    - by davidcmolina
    I am trying to build my first Ruby on Rails app using the following guide (http://ruby.railstutorial.org/chapters/a-demo-app#code-demo_gemfile_sqlite_version_redux) and have run into a few obstacles. The first is receiving errors when upgrading to the latest Rails version, 3.2.8:

        bash-3.2$ gem install rails
        Building native extensions.  This could take a while...
        ERROR:  Error installing rails:
                ERROR: Failed to build gem native extension.

        /Users/davidmolina/.rvm/rubies/ruby-1.9.3-p194/bin/ruby extconf.rb
        creating Makefile

        make
        compiling generator.c
        make: /usr/bin/gcc-4.2: No such file or directory
        make: *** [generator.o] Error 1

        Gem files will remain installed in /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5 for inspection.
        Results logged to /Users/davidmolina/.rvm/gems/ruby-1.9.3-p194/gems/json-1.7.5/ext/json/ext/generator/gem_make.out

    Even when trying to install from the Rails app, gem install rails fails with exactly the same output. And when trying to bundle install:

        $ bundle install
        Could not locate Gemfile

    Background details: Mac OS X 10.8.2, Ruby 1.9.3, Rails 2.3.4. I'm wondering if there is a direct one-liner or gem that is missing?
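    The failing line is make: /usr/bin/gcc-4.2: No such file or directory: OS X 10.8 no longer ships Apple's gcc-4.2, but a Ruby built earlier can keep telling native-gem builds to use it. Two commonly suggested fixes, sketched under the assumption that Xcode's Command Line Tools are installed (Xcode > Preferences > Downloads):

        # option 1: point the build at clang, which 10.8 does ship
        export CC=/usr/bin/clang
        gem install rails

        # option 2: rebuild the ruby itself under clang so every
        # future native gem stops looking for gcc-4.2
        rvm reinstall 1.9.3 --with-gcc=clang
        gem install rails

    As for Could not locate Gemfile: bundle install only works from inside an application directory that contains a Gemfile, i.e. after rails new demo_app && cd demo_app.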
