Search Results

Search found 9751 results on 391 pages for 'merge module'.


  • Address Label Printing without Windows Address Book

    - by Jim Fell
    In the past I've maintained my address book using the built-in Address Book utility that came with Windows. Once each year, I would import my Address Book (.WAB) file into Outlook 2003/XP. (I don't use Outlook for email.) Then I would use the Mail Merge feature in Word 2003/XP to make and print address labels on standard Avery label sheets to simplify addressing my Christmas cards. Since I'm now using Windows 7, and the familiar Address Book utility is no longer available, how can I print my address labels? I have both Windows Address Book (.WAB) and Comma-Separated Values (.CSV) files that contain my address book data. So, I guess I need to know two things: Which program or utility (preferably free) can I use to print my address labels? How do I import my address data into that program? If it helps, I am already a user of Gmail and Google Drive. Thanks.

    Read the article

  • How to recover bitlocker encrypted partition that is now 'unallocated'/'free space'?

    - by Atishay Jain
    My hard drive had 5 partitions (including one BitLocker-encrypted partition of some 4-5 GB). When I used Disk Management I could view 2 partitions (24.4 GB and 8.94 GB) in green colour labeled Empty space. I wanted to merge them, so I used MiniTool Partition Wizard for the purpose. I don't know what that software did, but all I was left with was 2 partitions and lots of green free space. I recovered 2 partitions using EaseUS Partition Master, but the BitLocker-encrypted partition cannot be found by it (nor by MiniTool Partition Recovery). Now Disk Management shows 2 free-space partitions of 28.36 GB and 8.94 GB respectively. Here is a screenshot: http://s14.postimage.org/4tvij041t/Screen_Shot003.jpg Please tell me a way to recover the BitLocker-encrypted partition that is showing as free space in Disk Management. P.S. - It contains very important data.

    Read the article

  • Error 1053 in starting the IMAP4.exe and POP3.exe on Exchange Server 2007

    - by Albert Widjaja
    Hi everyone, I need your help resolving this issue: when I try to start this service, it always fails with the following error message logged in the Event Viewer.

      Event Type: Error
      Event Source: .NET Runtime 2.0 Error Reporting
      Event Category: None
      Event ID: 1000
      Date: 1/4/2011
      Time: 11:04:51 AM
      User: N/A
      Computer: ExcServ01
      Description: Faulting application microsoft.exchange.imap4service.exe, version 8.1.263.0, stamp 47a985c6, faulting module kernel32.dll, version 5.2.3790.4480, stamp 49c51cdd, debug? 0, fault address 0x000000000000dd50.

    Any kind of information would be greatly appreciated. Thanks

    Read the article

  • Drupal Sites Backup and Restore to Amazon S3

    - by Ngu Soon Hui
    There are modules written for database backup and file backup, but what I want is a complete backup to Amazon S3 or another cloud platform, covering both the data and the site files. As it stands, I have to back up the two separately and manually. Is there any module, tool, or ready-made script that allows me to do that?
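
    In the meantime, a minimal cron-able sketch of the manual approach (assuming drush and s3cmd are installed; the paths and bucket name below are placeholders, not anything taken from the question):

      #!/bin/bash
      # Hedged sketch: dump the Drupal database, archive the site tree, push both to S3.
      STAMP=$(date +%Y%m%d-%H%M)
      SITE=/var/www/drupal                 # assumed site root
      TMP=/tmp/drupal-backup-$STAMP

      mkdir -p "$TMP"
      # Database dump via drush, pointed at the site root
      drush -r "$SITE" sql-dump --result-file="$TMP/db-$STAMP.sql"
      # Archive the code and files directories
      tar czf "$TMP/site-$STAMP.tar.gz" -C "$(dirname "$SITE")" "$(basename "$SITE")"
      # Upload both artifacts to S3
      s3cmd put "$TMP"/* s3://my-backup-bucket/drupal/$STAMP/
      rm -rf "$TMP"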

    Read the article

  • Strict security and virtual host isolation with Nginx?

    - by Hach-Que
    I currently have an Apache web server set up under which each virtual host is isolated using HTTPD-ITK and the AppArmor module. Each virtual host's workers are setuid/setgid by the server and are then placed in an AppArmor profile. I'm looking to use Nginx but I can't find any documentation on setting it up so that rather than the worker processes being shared between all virtual hosts, worker processes are per virtual host (and thus can be setuid / setgid). Is there any way to do this under Nginx?

    Read the article

  • Merging partitions and removing windows of one partition

    - by SmartLemon
    I have two partitions on my laptop; I created a new one when installing Windows 8 Pro because Windows 7 wouldn't upgrade to it for some odd reason. The main partition, which is 631 GB, has Windows 7 installed on it, and the second partition is 49.9 GB and has Windows 8 installed on it. What I need to do is remove Windows 7 from the other one (yeah, it's dual-booting), make it boot straight into Windows 8 without showing the dual-boot screen, and also merge the two drives together. Only problem is, I have no idea how to do this. Please don't use complete layman's terms; I am a software developer, so I know at least a bit about computers. Here's Disk Management so you can see how it's set out.

    Read the article

  • Trailing dots in url result in empty 404 page on IIS

    - by Peter Hahndorf
    I have an ASP.NET site on IIS 8, but IIS 7.5 behaves exactly the same. When I enter a URL like mysite.com/foo/bar.. I get an error with a '500 Internal Server Error' status code, even though I have custom error pages set up for 500 and 404 and I don't see anything wrong with my custom error page. In my web.config system.web node I have the following:

      <customErrors mode="On">
        <error statusCode="404" redirect="/404.aspx" />
      </customErrors>

    If I remove that section, I get a 404.0 response back but the page itself is blank. In web.config system.webServer I have:

      <httpErrors errorMode="DetailedLocalOnly">
        <remove statusCode="404" subStatusCode="-1" />
        <error statusCode="404" prefixLanguageFilePath="" path="404.html" responseMode="File" />
      </httpErrors>

    But whether that is there or not, I get the same blank 404.0 page rather than my expected custom error page, or at least an internal IIS message. So first of all, why is the ASP.NET handler picking up a request for '..' (this also happens with one or more trailing dots)? If I remove the following handler from applicationHost.config:

      <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" responseBufferLimit="0" />

    I get my expected custom 404 page, but of course removing that handler breaks routing in ASP.NET, among other things. Looking at the failure trace, I see that the Windows Authentication module is involved; Windows Authentication is disabled for the site, so why is that module even in the request pipeline? For now my fix is to use the URL Rewrite module with the following rule:

      <rewrite>
        <rules>
          <rule name="Trailing Dots" stopProcessing="true">
            <match url="\.+$" />
            <action type="Rewrite" url="/404.html" appendQueryString="false" />
          </rule>
        </rules>
      </rewrite>

    This works okay, but I wonder why IIS/ASP.NET behaves this way?

    Read the article

  • Software for mosaicing video frames into a panorama

    - by Eikern
    I have some video footage I shot using a dolly, with the camera rotated 90 degrees to the right, which gives me a sideways tracking shot of a background. Is there some kind of software I can use to create a single image from the video footage? The result I want is one single image of the entire shot. I guess I could export every Nth frame and use Photoshop (or any other type of panorama software) to merge the images together, but a dedicated tool would make it easier. Thanks.
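
    If you do end up going the export-every-Nth-frame route, a minimal sketch with ffmpeg (the input name dolly.mp4 and the every-25th-frame interval are just assumptions):

      # Keep every 25th frame and write numbered PNGs for a panorama stitcher
      ffmpeg -i dolly.mp4 -vf "select='not(mod(n,25))'" -vsync vfr frame_%04d.png

    The extracted frames can then be fed to Photoshop's Photomerge, Hugin, or any other stitching tool.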

    Read the article

  • Utility for scanning stacks of double-sided documents

    - by Peter Becich
    I have a simplex scanner with document feeder, and am looking for the best way to scan double-sided notes. It would be useful to be able to scan the same stack twice, once flipped, and have a utility automatically interleave the scanned images. Multi-page PDF export would also be nice. Is there a tool to do this? Otherwise, I'm considering writing it in Python, with the imagescanner module, if it can use the ADF -- http://pypi.python.org/pypi/imagescanner/0.9 Thanks
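
    A hedged sketch of the scan-twice-and-interleave idea using SANE's scanimage batch options plus ImageMagick (the --source value is device-dependent, and depending on how the flipped stack feeds, the even pages may come out in reverse order and need renumbering):

      # Pass 1: fronts -> odd-numbered pages
      scanimage --source ADF --batch=page%03d.pnm --batch-start=1 --batch-increment=2
      # Pass 2: flip the stack, backs -> even-numbered pages
      scanimage --source ADF --batch=page%03d.pnm --batch-start=2 --batch-increment=2
      # Combine the interleaved pages into a single PDF
      convert page*.pnm notes.pdf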

    Read the article

  • Apache Simple Configuration Issue: Setting up per-user directory permission denied problem

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit. I am running into issues setting up my per-user directory. The goal is to make localhost/~user map to /home/~user/public_html. I think that I have the permissions right because I have 755 on /home/~user, I have 755 on /home/~user/public_html/, and I have 777 for all contents inside of /home/~user/public_html/ recursively set. My mod_userdir configuration looks like this:

      <IfModule mod_userdir.c>
          #
          # UserDir is disabled by default since it can confirm the presence
          # of a username on the system (depending on home directory
          # permissions).
          #
          UserDir disabled root
          UserDir enabled huckphin
          #
          # To enable requests to /~user/ to serve the user's public_html
          # directory, remove the "UserDir disabled" line above, and uncomment
          # the following line instead:
          #
          UserDir public_html

    The error that I am seeing in the error log is this:

      [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied

    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?

    [Added after comments] Hi Andol, thank you for your suggestions. So, first off, you said that you assume that I have the userdir module enabled, but I am not sure what that means exactly. That could be part of the problem. I do have the module loaded, using the LoadModule directive. I have this:

      LoadModule userdir_module modules/mod_userdir.so

    I also did a find on where mod_userdir is, and I found it located here:

      [huckphin@crhyner-workbox]/% find / . -name '*mod_userdir.so*' 2> /dev/null
      /usr/lib64/lighttpd/mod_userdir.so
      /usr/lib64/httpd/modules/mod_userdir.so

    Is there something else I need to enable? Also, my directory configuration was mentioned. I have uncommented the default configuration given. I have not looked into what all of the configurations mean, and this could probably explain the issue. Here is the Directory block that I have for the user directories:

      <Directory "/home/*/public_html">
          AllowOverride FileInfo AuthConfig Limit
          Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
          <Limit GET POST OPTIONS>
              Order allow,deny
              Allow from all
          </Limit>
          <LimitExcept GET POST OPTIONS>
              Order deny,allow
              Deny from all
          </LimitExcept>
      </Directory>
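
    One possibility worth ruling out on Fedora (this is an assumption, not something established in the question): SELinux denies httpd access to home directories by default and produces exactly this kind of (13)Permission denied even when the file permissions look right. A hedged check and fix:

      # Is SELinux enforcing, and are home directories allowed for httpd?
      getenforce
      getsebool httpd_enable_homedirs
      # If enforcing and the boolean is off, enable it and label the content
      setsebool -P httpd_enable_homedirs true
      chcon -R -t httpd_user_content_t /home/huckphin/public_html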

    Read the article

  • Nginx Retry of Requests ( Nginx - Haproxy Combination )

    - by vaibhav
    I wanted to ask about Nginx retrying requests. I have Nginx running at the back end, which sends requests to HAProxy, which then passes them on to the web server where they are processed. I am reloading my HAProxy config dynamically to provide elasticity. The problem is that requests are dropped when I reload HAProxy, so I wanted a solution where I can just retry them from Nginx. I looked through proxy_connect_timeout and proxy_next_upstream in the http module, and max_fails and fail_timeout in the server module. I initially had only 1 server in the upstream block, so I have just put it in twice now, and fewer requests are getting dropped (only when I have, say, the same server twice in the upstream; if I have the same server 3-4 times, drops increase).

    So, firstly: when a request is not able to establish a connection from Nginx to HAProxy while HAProxy is reloading, it seems that the connection is seen as an error and the request is dropped straight away. How can I either specify the time after the failure at which I want Nginx to retry the request to the upstream, or the time before which Nginx treats it as a failed request? (I have tried increasing proxy_connect_timeout - didn't help - as well as mail_retires, fail_timeout, and also putting the same upstream server in twice; that gave the best results so far.)

    Nginx conf file:

      upstream gae_sleep {
          server 128.111.55.219:10000;
      }

      server {
          listen 8080;
          server_name 128.111.55.219;
          root /var/apps/sleep/app;
          # Uncomment these lines to enable logging, and comment out the following two
          #access_log /var/log/nginx/sleep.access.log upstream;
          error_log /var/log/nginx/sleep.error.log;
          access_log off;
          #error_log /dev/null crit;
          rewrite_log off;
          error_page 404 = /404.html;
          set $cache_dir /var/apps/sleep/cache;

          location / {
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_pass http://gae_sleep;
              client_max_body_size 2G;
              proxy_connect_timeout 30;
              client_body_timeout 30;
              proxy_read_timeout 30;
          }

          location /404.html {
              root /var/apps/sleep;
          }

          location /reserved-channel-appscale-path {
              proxy_buffering off;
              tcp_nodelay on;
              keepalive_timeout 55;
              proxy_pass http://128.111.55.219:5280/http-bind;
          }
      }

    Read the article

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory, as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change & export $HOME, e.g.

      cd ~/testing; git clone ~ home
      export HOME=~/testing/home
      cd ~
      screen -S testing-home
      # start vim, write/revise plugins, edit scripts, etc.
      # test revisions

    However, since I've never tried this before, I'm concerned that some programs, environment variables, etc., may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
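
    A minimal sketch of a slightly safer variant (assuming the clone lives at ~/testing/home): confine the override to one process tree via env, so only the programs started inside that screen session see the test home.

      # Everything started inside this screen session inherits the test HOME;
      # nothing outside the session is affected.
      env HOME="$HOME/testing/home" screen -S testing-home

      # Inside the session, spot-check that the override took effect
      echo "$HOME"
      ls "$HOME/bin"

    Note that programs which look up the home directory via getpwuid() (i.e. from /etc/passwd) rather than $HOME will still use the real one.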

    Read the article

  • Mac OS X file recovery

    - by Daniel
    I thought that all operating systems would merge folder content when folders are moved to the same location. Imagine my surprise when that didn't happen, and now I have hundreds, if not thousands, of files that have gone missing and are nowhere to be found. Because they were not "deleted", they are not in the trash bin. I've tried to do some recovery using a program called Stellar Phoenix, but after about a 24-hour scan it didn't recognize any of the raw files (.dng, .arw) as image files, so I couldn't see whether they could be recovered. It also didn't show the directory structure, which would be handy. I tried a quick scan, but all it showed was files that were still on the HD; I'm not sure what the point of that is. I've used Recover 2000 on Windows and it does a good job. Does anyone know of anything that works quickly and reliably for this kind of file recovery? (I don't think I should have to do a sector-by-sector scan for this kind of file loss.)

    Read the article

  • How to run Django 1.3/1.4 on uWSGI on nginx on EC2 (Apache2 works)

    - by Tadeck
    I am posting a question on behalf of my administrator. Basically he wants to set up a Django app (made on Django 1.3, but it will be moving to Django 1.4, so I hope it should not really matter which of the two works) on uWSGI on nginx, installed on Amazon EC2. The app runs correctly when Django's development server is used (with ./manage.py runserver 0.0.0.0:8080 for example), and Apache also works correctly. The only problem is with nginx, and it looks like there is something else wrong with the nginx / WSGI or Django configuration. His description is as follows: the server has been configured according to many tutorials, but unfortunately nginx and uWSGI still do not work with the application.

    ProjectName.py:

      import os, sys, wsgi
      os.environ.setdefault("DJANGO_SETTINGS_MODULE", "ProjectName.settings")
      from django.core.wsgi import get_wsgi_application
      application = get_wsgi_application()

    I run uWSGI with the command: uwsgi -x /etc/uwsgi/apps-enabled/projectname.xml

    XML file:

      <uwsgi>
          <chdir>/home/projectname</chdir>
          <pythonpath>/usr/local/lib/python2.7</pythonpath>
          <socket>127.0.0.1:8001</socket>
          <daemonize>/var/log/uwsgi/proJectname.log</daemonize>
          <processes>1</processes>
          <uid>33</uid>
          <gid>33</gid>
          <enable-threads/>
          <master/>
          <vacuum/>
          <harakiri>120</harakiri>
          <max-requests>5000</max-requests>
          <vhost/>
      </uwsgi>

    In the uWSGI logs:

      *** no app loaded. going in full dynamic mode ***

    In the nginx logs:

      XXX.com [pid: XXX|app: -1|req: -1/1] XXX.XXX.XXX.XXX () {48 vars in 989 bytes} [Date] GET / => generated 46 bytes in 77 msecs (HTTP/1.1 500) 2 headers in 63 bytes (0 switches on core 0)
      added /usr/lib/python2.7/ to pythonpath.
      Traceback (most recent call last):
        File "./ProjectName.py", line 26, in <module>
          from django.core.wsgi import get_wsgi_application
      ImportError: No module named wsgi
      unable to load app SCRIPT_NAME=XXX.com|

    Example tutorials that were used: http://projects.unbit.it/uwsgi/wiki/RunOnNginx and https://docs.djangoproject.com/en/1.4/howto/deployment/wsgi/

    Do you have any idea what has been done incorrectly, or what should be done to make Django work on uWSGI on nginx on EC2?
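
    One hedged observation (not something confirmed in the question): django.core.wsgi and get_wsgi_application only exist as of Django 1.4, so the ImportError is exactly what Django 1.3 would produce. It may be worth checking which Django the uWSGI interpreter actually imports, and pointing uWSGI at the WSGI callable explicitly rather than relying on dynamic/vhost mode; a rough sketch, with the module path and directories assumed:

      # Which Django will this Python import?
      python -c "import django; print(django.get_version())"

      # Load the WSGI callable explicitly
      uwsgi --socket 127.0.0.1:8001 \
            --chdir /home/projectname \
            --module ProjectName:application \
            --processes 1 --master --enable-threads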

    Read the article

  • Unable to remove broken Exchange 2003 installation (SBS 2003)

    - by Austin ''Danger'' Powers
    We have a non-functional Exchange 2003 installation on our SBS 2003 server that I am trying to uninstall. So far we have never used, and will never use, Exchange on this server; all we need is to remove it from the system (as it is installed on a partition which we want to merge with the main data partition to increase network storage capacity). Attempting to remove it using Add/Remove Programs produces the following error: When doing a search in ADUC to see which users still have a mailbox associated with them, it seems to only be the domain administrator account: As the Exchange installation is broken, it is not possible to run either System Manager or Exchange 5.5 Administrator to make mailbox changes. How can I forcibly remove a mailbox (which does not need to be salvaged or backed up) to allow the uninstall of Exchange to proceed? Any ideas would be appreciated!

    Read the article

  • Xubuntu login hangs after Cancel Button click

    - by akester
    I'm running Xubuntu 12.04 (I installed using the alternative installer), running in VirtualBox 4.1.20. My issue is with the login screen (lightdm-gtk-greeter). It usually runs just fine and allows users to log in and out, but it will hang if the user presses the Cancel button. The interface is still working (i.e. the shutdown menu is still available, and I can switch to a different tty) but the username or password field (depending on when the button is hit) is disabled. Restarting lightdm resets the screen, but the problem still exists. The issue is only with the Cancel button; the login, session, and language buttons/menus, as well as the accessibility and shutdown menus, appear to work normally. I've modified some of the config files for lightdm-gtk-greeter, specifically /etc/lightdm/lightdm-gtk-greeter.conf to change the background image and /etc/lightdm/lightdm.conf to disable the user list. I did not check whether the error existed before the changes took place. The changes have been restored to the default settings but the problem persists. Here is the output of /var/log/lightdm/lightdm.log when the screen is hung:

      [+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log
      [+0.00s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=2072
      [+0.00s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
      [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
      [+0.00s] DEBUG: Registered seat module xlocal
      [+0.00s] DEBUG: Registered seat module xremote
      [+0.00s] DEBUG: Adding default seat
      [+0.00s] DEBUG: Starting seat
      [+0.00s] DEBUG: Starting new display for greeter
      [+0.00s] DEBUG: Starting local X display
      [+0.02s] DEBUG: Using VT 7
      [+0.02s] DEBUG: Activating VT 7
      [+0.03s] DEBUG: Logging to /var/log/lightdm/x-0.log
      [+0.04s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
      [+0.04s] DEBUG: Launching X Server
      [+0.05s] DEBUG: Launching process 2078: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
      [+0.05s] DEBUG: Waiting for ready signal from X server :0
      [+0.05s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
      [+0.05s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
      [+0.28s] DEBUG: Got signal 10 from process 2078
      [+0.28s] DEBUG: Got signal from X server :0
      [+0.28s] DEBUG: Connecting to XServer :0
      [+0.29s] DEBUG: Starting greeter
      [+0.29s] DEBUG: Started session 2082 with service 'lightdm', username 'lightdm'
      [+0.36s] DEBUG: Session 2082 authentication complete with return value 0: Success
      [+0.36s] DEBUG: Greeter authorized
      [+0.36s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
      [+0.36s] DEBUG: Session 2082 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter
      [+0.58s] DEBUG: Greeter connected version=1.2.1
      [+0.58s] DEBUG: Greeter connected, display is ready
      [+0.58s] DEBUG: New display ready, switching to it
      [+0.58s] DEBUG: Activating VT 7
      [+1.04s] DEBUG: Greeter start authentication for andrew
      [+1.04s] DEBUG: Started session 2137 with service 'lightdm', username 'andrew'
      [+1.09s] DEBUG: Session 2137 got 1 message(s) from PAM
      [+1.09s] DEBUG: Prompt greeter with 1 message(s)
      [+17.24s] DEBUG: Cancel authentication
      [+17.24s] DEBUG: Session 2137: Sending SIGTERM

    Read the article

  • Linux command to concatenate audio files and output them to ogg

    - by hasen j
    What command-line tools do I need in order to concatenate several audio files and output them as one ogg (and/or mp3)? If you can provide the complete command to concatenate and output to ogg, that would be awesome. Edit: Input files (in my case, currently) are in wma format, but ideally it should be flexible enough to support a wide range of popular formats. Edit2: Just to clarify, I don't want to merge all wmas in a certain directory, I just want to concatenate 2 or 3 files into one. Thanks for the proposed solutions, but they all seem to require creating temporary files, if possible at all, I'd like to avoid that.
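
    For reference, a hedged one-command sketch with ffmpeg (the file names are placeholders, and it assumes the inputs share a sample rate and channel layout) that decodes the WMA inputs, concatenates them, and encodes straight to Ogg Vorbis with no temporary files:

      # Concatenate the audio of three inputs and encode to Ogg Vorbis
      ffmpeg -i part1.wma -i part2.wma -i part3.wma \
             -filter_complex "concat=n=3:v=0:a=1" \
             -c:a libvorbis output.ogg

    Swapping -c:a libvorbis for -c:a libmp3lame (and an .mp3 output name) would cover the MP3 case.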

    Read the article

  • Apache LDAP authentication (mod_auth_ldap) on MacOS Server (10.5)

    - by Ursid
    A - Is there an LDAP authentication module (mod_auth_ldap) for the version of Apache that comes built into MacOS Server 10.5? (I'm pretty sure not, but maybe someone has compiled one.) B - If not, can it be compiled into MacOS' version of Apache? (Man, that would be nice.) C - If I can't use the Apple version of Apache for this, what is the best way to get Apache LDAP authentication working on MacOS Server 10.5? (Preferably one that works with MacOS Server's management software.)

    Read the article

  • Archlinux/atheros WLAN configuration troubles

    - by GrinReaper
    I'm trying to configure Arch Linux to use my wireless network adapter. It's quite troublesome. From what I've gathered, it's an Atheros network adapter, using the ath5k driver/module. I can't get it to work; any ideas? Here's some of the output from my tinkering:

      # lspci | grep -i net
      00:0a.0 Ethernet controller: nVidia corporation MCP67 Ethernet (reva2)
      03:00.0 Ethernet controller: atheros communications inc. AR5001 Wireless Network Adapter (rev01)

      # lsusb
      ...
      Bus 004 Device 003: ID 03f0:17d Hewlett Packard Wireless (Bluetooth + WLAN Interface [Integrated Module]

      # ping -c 3 www.google.com
      ping: unknown host www.google.com

      # ping -c 3 8.8.8.8
      ping: network is unreachable

      # lspci -v
      03:00.0 Ethernet controller: atheros communications inc. AR5001 Wireless Network Adapter (rev01)
      ...
      Kernel driver in use: ath5k
      Kernel modules: ath5k

      # dmesg | grep ath5k
      registered as phy0
      registered led device
      ath5k: atheros chip found
      PCI INT A disabled
      registered led device
      registered as phy1

      # ip addr | sed '/^[0-9]/!d;s/: <.*$//'
      1: lo
      2: eth1
      3: eth0

      # ip link set <interface> up/down
      RTNETLINK answers: Operation not possible due to RF-kill

    Also, is there a way to dump text from the command line to a text file so I can just copy and paste? Sorry, first time using a Linux distro...

    EDIT: So I just tried this (I actually just did this twice; I can't tell which setting is on/off for my wireless adapter - the lights are blue all the time now):

      # rfkill list
      0: hp-wifi: wireless lan
         softblocked: no  hardblocked: yes
      1: hp-bluetooth: bluetooth
         softblocked: no  hardblocked: yes
      3: phy1: wireless lan
         softblocked: no  hardblocked: yes

      # rfkill list
      0: hp-wifi: wireless lan
         softblocked: no  hardblocked: no
      1: hp-bluetooth: bluetooth
         softblocked: no  hardblocked: no
      3: phy1: wireless lan
         softblocked: no  hardblocked: yes
      7: hci0: bluetooth
      0: hp-wifi: wireless lan
         softblocked: no  hardblocked: no

    I've dug around some other articles and it seems like ath5k is supposed to be preferable to madwifi, so should I be using madwifi? I'm 99% sure I disabled the hardblock (by turning it ON) but, as shown above, phy1 wireless lan is STILL hardblocked. What gives? Maybe I've made some more fundamental error in a basic config file?

    EDIT: I've fixed the hardblock. I've tried pinging www.google.com, but to no avail; I get "ping: unknown host www.google.com". The Arch wiki says: "Edit /etc/hosts and add the same HOSTNAME you entered in /etc/rc.conf: 127.0.0.1 archlinux.domain.org localhost.localdomain localhost archlinux". To my understanding, the hostname is just user-specified and based on preference(?)

      My /etc/rc.conf: HOSTNAME="gestalt"
      My /etc/hosts:   127.0.0.1 localhost.localdomain localhost gestalt

    But should it be the following? 120.0.0.1 localhost.domain.org localhost.localdomain localhost gestalt
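
    Once the hard block is cleared, a hedged sequence for bringing the interface up (the name wlan0 is an assumption; the real name shows up in ip link). The output-capture question is plain shell redirection, shown on the last line:

      # Clear any soft/hard blocks that rfkill can control
      rfkill unblock all

      # Bring the wireless interface up (substitute the real name from `ip link`)
      ip link set wlan0 up

      # Dump any command's output (including errors) to a file for copy/paste
      lspci -v > lspci-output.txt 2>&1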

    Read the article

  • How to use multiple SkyDrive accounts on one computer?

    - by user1563721
    Is there any way to use multiple SkyDrive accounts on one computer running Mac OS X or Windows 8? I would like to sync data from different accounts to different folders, not to merge these accounts into one. The reason is that every SkyDrive account has its storage limits, and I'm using each account for different work data. The result should be the following: I have a number of SkyDrive accounts, each for different work, let's say S1, S2, and S3. I would like to sync exactly the same number of folders on the computer, using a different account to sync each of them:

      SkyDriveS1Folder - (folder on the computer which syncs the content of S1 SkyDrive)
      SkyDriveS2Folder - (folder on the computer which syncs the content of S2 SkyDrive)
      SkyDriveS3Folder - (folder on the computer which syncs the content of S3 SkyDrive)

    Is it possible somehow? I found a workaround for Windows machines (http://superuser.com/questions/525932/running-multiple-instances-of-microsoft-skydrive) but is there anything for Mac OS X machines? Or is it possible through any third-party application?

    Read the article

  • Is special memory required for a MacBook Pro ?

    - by user38900
    I have a MacBook Pro (MacBookPro5,2 / 2.8 GHz) with 4 GB of RAM (2x2 GB). I'm looking to upgrade to 8 GB. The memory in it now is DDR3 PC3-8500 1067. Checking out prices for 4 GB sticks of PC3-8500, there is about a $100 difference for "Apple certified" RAM. Will any DDR3 PC3-8500 module work, or is there really a difference?

    Read the article

  • Recompiling yum installation

    - by Saif Bechan
    I have installed Nginx using yum. Now, to add modules to the existing installation, I have to recompile Nginx from source. How can I recompile a yum installation? There is no source. Should I uninstall the yum package, then download the source package, recompile it with the module, and then install everything and reconfigure it again?
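
    A rough sketch of the usual approach (the version number, module path, and extra configure flags here are assumptions; nginx -V prints the arguments the packaged binary was actually built with, and those should be reused so paths and features match):

      # See how the yum-installed binary was configured; reuse those flags below
      nginx -V

      # Build a matching version from source with the extra module compiled in
      wget http://nginx.org/download/nginx-1.0.15.tar.gz
      tar xzf nginx-1.0.15.tar.gz
      cd nginx-1.0.15
      ./configure --prefix=/etc/nginx --add-module=/path/to/extra-module   # plus the flags from `nginx -V`
      make
      sudo make install

    Either way the module has to be compiled in at build time; it cannot be loaded into the existing yum-installed binary.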

    Read the article

  • dmidecode showing less Memory Capacity than Motherboard spec?

    - by starchx
    We got a Supermicro server, http://www.supermicro.com/products/system/1u/5016/sys-5016i-ur.cfm; according to the spec, the server supports up to 32 GB of memory when using ECC registered memory. However, when I try the dmidecode command, it says 24 GB max memory:

      [root@c1 ~]# dmidecode | grep Maximum
          Maximum Size: 256 kB
          Maximum Size: 1024 kB
          Maximum Size: 8192 kB
          Maximum Memory Module Size: 4096 MB
          Maximum Total Memory Size: 24576 MB
          Maximum Capacity: 24 GB
          Maximum Value: Unknown

    Which one should I trust?
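
    For a little more context than the grep gives, the relevant SMBIOS records can be dumped directly (a hedged sketch; type 16 is the Physical Memory Array record that carries the "Maximum Capacity" value, and type 17 gives one record per DIMM slot):

      # Physical Memory Array (maximum capacity, number of slots)
      dmidecode -t 16
      # Memory Device records, one per slot
      dmidecode -t 17

    Whichever numbers appear there are only what the BIOS reports, so the vendor's tested-memory list is worth checking against as well.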

    Read the article

  • Error Trying To Use Microsoft LifeCam VX-3000

    - by Brian
    Hello, I bought a Microsoft LifeCam VX-3000 web camera for my parents' Dell Dimension 3000 computer running XP SP3, and I cannot get it to run. The installation ran successfully, but when I try to run it, I get the error: Faulting application lifecam.exe, version 3.21.263.0, faulting module kernel32.dll, version 5.1.2600.5781, fault address 0x00012afb. The Microsoft help link really didn't help... how do I even resolve this type of error? Thanks.

    Read the article

  • Webmin Cluster Copy Protocol

    - by hozza
    Just toying with a clustered server farm for fun (as you do) and experimenting with Webmin and its 'clustered' modules. It has a feature that can copy files from one server to another on a repeating basis. Does this feature/module use cron jobs and what protocol does it use to copy the files? I have searched all about the net and yet I cannot find any decent documentation on webmin or its features. Is it just poorly documented or am I missing something?

    Read the article
