Search Results

Search found 6770 results on 271 pages for 'azure storage'.

Page 120/271

  • InnoDB MySQL plugin disabled

    - by alexcunn
    When I start up MySQL on my Ubuntu server I get this message:

        121122 17:39:37 [Note] Plugin 'FEDERATED' is disabled.
        121122 17:39:37 InnoDB: The InnoDB memory heap is disabled
        121122 17:39:37 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121122 17:39:37 InnoDB: Compressed tables use zlib 1.2.3.4
        121122 17:39:37 InnoDB: Initializing buffer pool, size = 128.0M
        InnoDB: mmap(137363456 bytes) failed; errno 12
        121122 17:39:37 InnoDB: Completed initialization of buffer pool
        121122 17:39:37 InnoDB: Fatal error: cannot allocate memory for the buffer pool
        121122 17:39:37 [ERROR] Plugin 'InnoDB' init function returned error.
        121122 17:39:37 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        121122 17:39:37 [ERROR] Unknown/unsupported storage engine: InnoDB
        121122 17:39:37 [ERROR] Aborting
        121122 17:39:37 [Note] mysqld: Shutdown complete

    A few times I have also got a message saying that the plugin is disabled. I use Webmin to configure it. Could that be the problem?
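    The "mmap(...) failed; errno 12" line means the host ran out of memory while InnoDB tried to allocate its 128 MB buffer pool. A minimal check and workaround sketch, assuming the stock Ubuntu config path /etc/mysql/my.cnf (adjust to your setup):

        free -m                                          # errno 12 = not enough free RAM for mmap(137363456 bytes)
        grep -n innodb_buffer_pool_size /etc/mysql/my.cnf
        # On a small VPS, a smaller pool under [mysqld] usually lets the plugin start, e.g.
        #   innodb_buffer_pool_size = 64M
        sudo service mysql restart                       # apply the change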

    Read the article

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC - though this same process could be used with migrations of Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machine with inbox tooling. I need to remove the VM from management of the source Hyper-V host in order to import the VM to the new Hyper-V host. I want to retain the configuration file, so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move process.

    Read the article

  • development server?

    - by ajsie
    For a project there will be me and one more programmer developing a web service, and I wonder what the development environment should look like. We need central storage (documents, pictures, business materials, etc.), file version handling, LAMP (for testing the web service) and so on. I have never set up an environment for this before and would like suggestions from experienced people on which tools to use for effective collaboration. What crossed my mind:

    Separate applications:
    - Google Wave (for communication back and forth, setting up guidelines, other information)
    - TeamViewer (desktop sharing)
    - Skype (calling)

    VPS (Ubuntu server):
    - SVN (version tracking)
    - FTP (central storage)
    - LAMP (testing the web service)
    - SSH (managing the VPS)

    Is this an appropriate programming environment? And regarding the VPS, is it best practice to use ONE VPS for all the tasks listed above? All suggestions and feedback are welcome!

    Read the article

  • UnicodeEncodeError when uploading files in Django admin

    - by Samuel Linde
    Note: I asked this question on StackOverflow, but I realize this might be a more proper place to ask this kind of question. I'm trying to upload a file called 'Testaråäö.txt' via the Django admin app. I'm running Django 1.3.1 with Gunicorn 0.13.4 and Nginx 0.7.6.7 on a Debian 6 server. Database is PostgreSQL 8.4.9. Other Unicode data is saved to the database with no problem, so I guess the problem must be with the filesystem somehow. I've set http { charset utf-8; } in my nginx.conf. LC_ALL and LANG is set to 'sv_SE.UTF-8'. Running 'locale' verifies this. I even tried setting LC_ALL and LANG in my nginx init script just to make sure locale is set properly. Here's the traceback: Traceback (most recent call last): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/handlers/base.py", line 111, in get_response response = callback(request, *callback_args, **callback_kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 307, in wrapper return self.admin_site.admin_view(view)(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/views/decorators/cache.py", line 79, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 197, in inner return view(request, *args, **kwargs) File "/srv/django/letebo/app/cms/admin.py", line 81, in change_view return super(PageAdmin, self).change_view(request, obj_id) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 28, in _wrapper return bound_func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view response = view_func(request, *args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/utils/decorators.py", line 24, in bound_func return func(self, *args2, **kwargs2) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/transaction.py", line 217, in inner res = func(*args, **kwargs) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 985, in change_view self.save_formset(request, form, formset, change=True) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/contrib/admin/options.py", line 677, in save_formset formset.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 482, in save return self.save_existing_objects(commit) + self.save_new_objects(commit) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 613, in save_new_objects self.new_objects.append(self.save_new(form, commit=commit)) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/forms/models.py", line 717, in save_new obj.save() File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 460, in save self.save_base(using=using, force_insert=force_insert, force_update=force_update) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 504, in save_base self.save_base(cls=parent, origin=org, using=using) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/base.py", line 543, in save_base for f in meta.local_fields if not isinstance(f, 
AutoField)] File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 255, in pre_save file.save(file.name, file, save=False) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/db/models/fields/files.py", line 92, in save self.name = self.storage.save(name, content) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 48, in save name = self.get_available_name(name) File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 74, in get_available_name while self.exists(name): File "/srv/.virtualenvs/letebo/lib/python2.6/site-packages/django/core/files/storage.py", line 218, in exists return os.path.exists(self.path(name)) File "/srv/.virtualenvs/letebo/lib/python2.6/genericpath.py", line 18, in exists st = os.stat(path) UnicodeEncodeError: 'ascii' codec can't encode characters in position 52-54: ordinal not in range(128)

    I tried running Gunicorn with debugging turned on, and the file uploads without any problem at all. I suppose this must mean that the issue is with Nginx. Still beats me where to look, though. Here are the raw response headers from Gunicorn and Nginx, if they make any sense:

    Gunicorn:

        HTTP/1.1 302 FOUND
        Server: gunicorn/0.13.4
        Date: Thu, 09 Feb 2012 14:50:27 GMT
        Connection: close
        Transfer-Encoding: chunked
        Expires: Thu, 09 Feb 2012 14:50:27 GMT
        Vary: Cookie
        Last-Modified: Thu, 09 Feb 2012 14:50:27 GMT
        Location: http://my-server.se:8000/admin/cms/page/15/
        Cache-Control: max-age=0
        Content-Type: text/html; charset=utf-8
        Set-Cookie: messages="yada yada yada"; Path=/

    Nginx:

        HTTP/1.1 500 INTERNAL SERVER ERROR
        Server: nginx/0.7.67
        Date: Thu, 09 Feb 2012 14:50:57 GMT
        Content-Type: text/html; charset=utf-8
        Transfer-Encoding: chunked
        Connection: close
        Vary: Cookie

        500

    UPDATE: Both locale.getpreferredencoding() and sys.getfilesystemencoding() output 'UTF-8', and locale.getdefaultlocale() outputs ('sv_SE', 'UTF8'). This seems correct to me, so I'm still not sure why I keep getting these errors.
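    Since the upload works when Gunicorn runs in the foreground but fails when it is daemonised, the difference is usually the environment the daemon starts with: Python 2 encodes unicode paths with the filesystem encoding, and that falls back to ASCII when no UTF-8 locale is set for the process. A hedged sketch of the usual fix; the /etc/default/gunicorn path and service name are assumptions, so use whatever actually launches your workers:

        # make the locale visible to the daemonised Gunicorn process
        echo 'export LANG=sv_SE.UTF-8'   | sudo tee -a /etc/default/gunicorn
        echo 'export LC_ALL=sv_SE.UTF-8' | sudo tee -a /etc/default/gunicorn
        sudo service gunicorn restart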

    Read the article

  • sudo/su command for Red Hat Server 5.4

    - by rednaxela
    Without going into too much detail, I need to execute one Linux command on Red Hat with root user access. Red Hat Server 5.4 does not recognise the sudo command. The command su can be used to switch to the root user on Red Hat, but su cannot be done in one line. For example, the command su ; cd opt/storage/RootAccessFolder will not work, because it only switches you to root and then executes the cd command once you have logged out of the root user. I guess what I'm looking for is something like sudo cd opt/storage/RootAccessFolder, but I say again, sudo doesn't work. Any ideas?
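    su can run a single command line with its -c option, which gives a sudo-like one-liner. A sketch; your_command is a placeholder, an absolute path is assumed, and su will prompt for the root password:

        # run the whole command line in a root shell, then return to your own user
        su - root -c 'cd /opt/storage/RootAccessFolder && ./your_command'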

    Read the article

  • Email Servers that Abstract Mailbox Concepts [on hold]

    - by David
    Lately I've been really interested in doing some unique things with email, most of which rely on an SMTP and POP or IMAP server that gives the administrator an API to create arbitrary methods for email storage, notifications, or delivery. What I'm looking for would be analogous to mod_php and Apache, where Apache handles the delivery protocol and PHP handles the content creation and storage. I've considered making my own, as those three protocols are quite simple, but I'm always nervous about putting my code public-facing, especially when it's at that low of a level. So are there any email servers that allow for this much arbitrary control over email delivery, fetching, and receiving?

    Read the article

  • selective backup script in bash

    - by Sake
    Hi, I've been using this simple command (that's all I can do :) to back up the whole tree of my user data on a NAS server for a year: cp -r /STORAGE /BACKUP-STORAGE/YYYY-MM-DD Unfortunately, after a year of service, my users started filling the space with lots of photos and clipart (jpg, gif, bmp), and that started to make my backup process much slower. Space is also a big issue: I no longer have enough room for a week-long daily backup set. I think I want to change from backing up everything to backing up only non-image data. How can I exclude jpg, gif, and bmp from the backup? It's quite easy with the DOS XCOPY command, but I really have no idea how to do that in bash. Thanks
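    A hedged sketch of two ways to do this, assuming GNU cp/find (or rsync) are available on the NAS; the paths follow the question:

        # rsync variant: copy everything except the image extensions
        rsync -a --exclude='*.jpg' --exclude='*.gif' --exclude='*.bmp' \
            /STORAGE/ "/BACKUP-STORAGE/$(date +%F)/"

        # plain cp/find variant, preserving the directory tree with --parents (GNU cp)
        mkdir -p "/BACKUP-STORAGE/$(date +%F)"
        find /STORAGE -type f ! \( -iname '*.jpg' -o -iname '*.gif' -o -iname '*.bmp' \) \
            -exec cp --parents {} "/BACKUP-STORAGE/$(date +%F)/" \;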

    Read the article

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a zfs-formatted partition using zfs-fuse for Linux (Ubuntu). I had used it for a while, and then enabled dedup and compression on it (zfs set compression=on/dedup=on). Now I think I have some files that are dedup'ed and compressed, and files that are not yet. It was OK, but sometimes I was confused. Let's see: the following command would consume almost 4GB of my zfs storage: cp oldfile.4GB newfile.4GB ...and this would consume almost zero: cp newfile.4GB newfile.4GB.2 This is because the old file is not yet compressed, so dedup has not happened, I think. My idea is: if I can find old files that are not yet dedup'ed/compressed, I can batch copy/rename/remove them to eliminate duplication and redundancy. But how can I check that? I know that re-copying the whole contents of my storage would work (even better with checking the time stamp of each file), but I'd be happier if I had a zfsstat-like tool that shows such file properties.
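    There is no per-file dedup flag exposed, but compression can be spotted by comparing a file's apparent size with the blocks it actually occupies. A rough sketch, assuming GNU stat and a hypothetical /tank/data path:

        for f in /tank/data/*; do
            apparent=$(stat -c %s "$f")                # size in bytes, as ls reports it
            ondisk=$(( $(stat -c %b "$f") * 512 ))     # 512-byte blocks actually allocated
            [ "$ondisk" -lt "$apparent" ] && echo "likely compressed: $f"
        done
        # Dedup is pool-wide and not visible this way; 'zpool list' and 'zdb -DD <pool>'
        # only show aggregate dedup ratios.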

    Read the article

  • Device Manager - does USB listing look right?

    - by Carl
    I obtained the drivers from the manufacturer for my HT-Link NEC USB 2.0 2-port CardBus card. When I plugged in the card before I got the drivers, 3 new entries showed up in the Device Manager - two "NEC PCI to USB Open Host Controller" and one "Standard Enhanced PCI to USB Host Controller." With the card plugged in, I uninstalled those two drivers. I then removed the card. I copied the new drivers to c:\windows\system32\drivers and the .inf file to c:\windows\inf. I also copied the drivers & inf to a new directory called c:\windows\drivers\ousb2. I reinserted the card. Windows automatically installed the same drivers as before. I selected 'update driver' on the "NEC PCI to USB..." entry and didn't see any other options. I then selected 'have disk' and pointed to c:\windows\drivers\ousb2 and got a message "The specified location does not contain information about your hardware." I then selected 'update driver' on the "Standard Enhanced PCI to USB..." entry and manually selected "USB 2.0 Enhanced Host Controller" (OWC 4/15/2003 2.1.3.1). Windows then automatically found a USB root hub, and I manually selected "USB 2.0 Root Hub Device" (OWC 4/15/2003 2.1.3.1). Now there are two sections in the Device Manager titled "Universal Serial Bus controllers." I plugged in my external USB hard disk adapter, and "USB Mass Storage Device" was added to the first set. Here's how it looks (with drivers from the properties):

        [Universal Serial Bus controllers]
        Intel(R) 82801DB/DBM USB 2.0 Enhanced Host Controller - 24CD (6/1/2002 5.1.2600.0)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C2 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C4 (7/1/2001 5.1.2600.5512)
        Intel(R) 82801DB/DBM USB Universal Host Controller - 24C7 (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        NEC PCI to USB Open Host Controller (7/1/2001 5.1.2600.5512)
        USB Mass Storage Device
        USB Root Hub (7/1/2001 5.1.2600.5512)
        (5 more USB Root Hubs - same driver)

        [Universal Serial Bus controllers]
        USB 2.0 Enhanced Host Controller (OWC 4/15/2003 2.1.3.1)
        USB 2.0 Root Hub Device (OWC 4/15/2003 2.1.3.1)

    When I unplug the card, the two "NEC PCI to USB..." entries in the first set disappear, and the whole second set disappears. (I unplugged the hard disk adapter first...) The hard disk adapter still doesn't work in that CardBus card with the new drivers. I don't think the above looks right - a second set of USB controllers listed in the Device Manager, the NEC entries still in the first set, and the USB mass storage device still in the first set. Any help appreciated. (Windows XP Pro SP3 with all current updates.)

    Read the article

  • NETAPP Fragmentation

    - by mdpc
    We all know that once a disk (or storage system, for that matter) gets introduced into use, performance degrades due to fragmentation of files. This seems to be why disk defragmenters are in fairly wide use on Windows boxes, and they do increase performance substantially. As an aside, I haven't heard of many defragmenters in the Unix/Linux area. Despite the claimed WAFL protections on the NetApp, file fragmentation will still occur, especially with all the sparsely created VMs. My question is: does anybody do any sort of defragmentation of such a storage system? Do you notice any measurable degradation/improvement from either ignoring or addressing this situation? Does anybody do anything about it? If so, what? Thanks
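    For what it's worth, ONTAP 7-mode shipped a reallocate facility for exactly this kind of layout measurement and optimisation. The commands below are from memory and the filer/volume names are placeholders, so verify them against the documentation for your NetApp release before relying on them:

        # run on the filer console, or over ssh: measure layout quality, then optimise it
        ssh admin@filer "reallocate measure /vol/vmstore"
        ssh admin@filer "reallocate start /vol/vmstore"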

    Read the article

  • (simple) linux HA with vmware vsphere?

    - by derhelge
    I hope my upcoming question is specific enough, and you are able and willing to support :-) We have several openSUSE VMs in an ESX cluster (three ESX servers) with an attached iSCSI SAN. All of those Linux VMs are configured as a single point of failure, which means, in the case of a web server: LAMP, storage, etc., everything on this one machine. This was very simple, and in case of a failure (in the last years: kernel panics or Apache crashes) a simple reboot triggered by a script did it. But the problem is: how do I upgrade/maintain the web application or the underlying OS without downtime? This wasn't really manageable, and I did it in the early morning ;) How can I achieve a "simple" high-availability cluster now? I thought of DRBD with Heartbeat across two VMs, and for the storage an RDM (raw device mapped) LUN, changing the read-write permissions for both VMs. Is this a good idea? Does anyone have a better solution?
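    If the DRBD-plus-Heartbeat route is taken, the storage half is just a small replicated block device inside each VM rather than a shared RDM. A rough sketch, assuming DRBD 8.3 with a resource r0 already described in /etc/drbd.conf on both VMs:

        # on both VMs
        drbdadm create-md r0        # write DRBD metadata on the backing disk
        drbdadm up r0               # attach the disk and connect to the peer
        # on one VM only, to take the primary role and kick off the initial sync
        drbdadm -- --overwrite-data-of-peer primary r0

    Heartbeat (or Pacemaker) then only has to move the primary role, the filesystem mount and the service IP between the two VMs.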

    Read the article

  • connecting to BSNL WLL with a Huawei fixed wireless modem connected to a USB port on Linux Mint

    - by Rakesh
    On Windows, when I plug in the USB connector, the device is recognized as a mass storage device and installs all its drivers to my computer; then the device automatically restarts and starts behaving like a modem. I'm writing this post using that same modem, through Windows... On Linux Mint (Debian kernel, similar to Ubuntu 8.10), the device is sometimes recognized as a mass storage device, but it carries no programs useful for Linux... When I use the modem on Windows, restart the computer and log in to Linux, the device shows up as "not yet configured" in the output of the terminal command "lsusb"... I googled a lot for solutions, tried many things, and at last configured it and ran the command "wvdial", but I get the error that the modem is not responding. :'( Please help me out... Many more people are facing this problem, as I discovered when I googled it. This is the website of Huawei, the maker of my modem: "www.huawei.com" Specification: model: ETS1220, FEQ: 800M. Thank you.

    Read the article

  • Feedback on Using ZFS and FreeBSD

    - by ToiletOverflow
    I need to create a server that will be used solely for backing up files. The server will have 2TB of storage to begin with but I may want to add additional storage later on. As such, I am currently considering using FreeBSD + ZFS as the OS and file system. Is ZFS a reliable, trusted file system? Should I use it in this scenario? I have read that ZFS should be used with OpenSolaris over FreeBSD as OpenSolaris is usually ahead of the curve with ZFS as far as version updates and stability. However, I am not interested in using OpenSolaris for this project. An alternative option that I am considering is to stick with ext3 and create multiple volumes if need be, because I know that I will not need a single, continuous volume larger than 2TB. Thanks in advance for your feedback.
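    ZFS on FreeBSD handles the "start at 2TB, grow later" requirement naturally, since a pool can be extended by adding another vdev at any time. A minimal sketch; the pool name and the da0-da3 device names are placeholders:

        # initial pool from two mirrored disks
        zpool create backup mirror da0 da1
        zfs set compression=on backup          # optional, usually cheap for backup data
        # later, grow the same pool by adding another mirrored pair
        zpool add backup mirror da2 da3
        zpool status backup

    Keep in mind that a vdev could not be removed from a pool again once added, at least on ZFS versions of that era, so plan the pool layout before adding disks.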

    Read the article

  • Best practice for assigning private IP ranges?

    - by Tauren
    Is it common practice to use certain private IP address ranges for certain purposes? I'm starting to look into setting up virtualization systems and storage servers. Each system has two NICs, one for public network access, and one for internal management and storage access. Is it common for businesses to use certain ranges for certain purposes? If so, what are these ranges and purposes? Or does everyone do it differently? I just don't want to do it completely differently from what is standard practice in order to simplify things for new hires, etc.

    Read the article

  • Are FC and SAS DAS devices standard enough?

    - by user222182
    Before I ask my questions, here is some background info that may or may not be useful: for the first time I find myself needing a DAS solution. My priority is data throughput in a single direction. I can write large blocks, and I don't need to read at the same time. The server (the data-producing device) is not really a typical server; it's a very powerful single-board computer. As such I have limited options when it comes to the add-in cards I can install, since it must use the fairly uncommon XMC interface. Currently I believe I am limited to PCIe x8 gen 1, which means the likely bottleneck for me will be this 16 Gbps connection. XMC boards I have found so far offer the following connections:

    a) Dual 10GbE Ethernet controller, total throughput 20 Gbps
    b) Dual quad SAS 2.0 connectors (SFF-8XXX) HBA (no RAID), total throughput 48 Gbps
    c) Dual FC 8Gb HBA (no RAID), total throughput 16 Gbps

    My questions for you guys are:

    1) Are SAS and/or FC, and by extension their HBAs, standard enough that I could purchase a Dell or Aberdeen storage server with a RAID controller that has external SAS or FC ports and expect that I can connect it to my SAS or FC HBA, and be presented with a single volume (if I so configured the storage server), all without having to check for HBA compatibility?

    2) On a device like a Dell PowerVault (either DAS or NAS), is there an OS on it to concern myself with, or is it meant to be remotely managed? Is there a local interface in case I can't remotely manage it (i.e. if my single-board computer uses an OS not supported by Dell OpenManage)? Would this be true of nearly any device which calls itself a DAS?

    3) If I purchase some sort of Supermicro storage chassis and install a RAID controller with external connections, is there a nice lightweight OS I can run just for management of the controller? Would I even need an OS, since the RAID card would be configured pre-boot anyway?

    4) It is much easier to buy XMC-based 10-gigabit Ethernet cards (generally dual port). In what ways would I be getting into trouble by using iSCSI as a DAS over direct cabling with SFP+ cables?

    Thanks in advance

    Read the article

  • New standalone ESXi 5 deployments - USB versus SD card?

    - by ewwhite
    Now that the old full VMware ESX with service console is no longer available, I'm redeploying some standalone ESXi servers. I'm using HP ProLiant ML and DL G6 and G7 servers. Does it make more sense to utilize the internal USB port for ESXi or the internal SD card slot? I'm using the HP ESXi 5 build, but am not sure what the recommended practice is. Any recommendations on cards/USB drives for this purpose? BTW - these will be all-in-one storage servers with the onboard disk storage presented via PCIe passthrough.

    Read the article

  • how to pipe data to sftp connection?

    - by JMW
    ftp supports the put "|..." "remote-file.name" command to pipe data to an ftp connection. Is there something similar available for sftp? In sftp I get the following error:

        sftp 'jmw@backupsrv:/uploads'
        sftp> put "| tar -cx /storage" "backup-2012-06-19--17-51.tgz"
        stat | tar -cv /storage: No such file or directory

    As shown above, the sftp client obviously doesn't execute the command. I want to use the pipe command to directly redirect the file stream to sftp (because there is not enough space left to create a backup file on the same disk before uploading it to the sftp server).
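    sftp only reads from local files, but the same OpenSSH connection can carry a pipe if ssh is used directly. A sketch; the host, user and upload path come from the question, and the timestamped filename is illustrative:

        # stream the tarball straight over ssh; nothing is written locally
        tar -czf - /storage | ssh jmw@backupsrv "cat > /uploads/backup-$(date +%F--%H-%M).tgz"
        # note: $(date ...) expands on the local machine before ssh runs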

    Read the article

  • Windows Server 2003 R2 sp2 and Exchange 2003 - missing pop-up menu "Exchange Tasks"

    - by Denis
    I need to recover a database. This is what I am doing, step by step:
    - In Exchange System Manager, check the server
    - Create a new Recovery Storage Group
    - Add the database to recover
    - Mount the store for that database (Mailbox Store) - all of this finishes successfully

    Next step: I need to check a user, and in the pop-up menu click on "Exchange Tasks...", but in the menu I see only "Help". The main question: why do I not have "Exchange Tasks", and how can I get it? I can see "Exchange Tasks" under "First Storage Group" - Mailbox - User. Sorry for my bad English. Thanks, Denis

    Read the article

  • Horrible performing RAID

    - by Philip
    I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs and 22 SATA disks for the actual data, striped together as a RAID 10 using a MegaRAID SAS 9280-4i4e with this configuration: http://pastebin.com/2xj4401J Connected to this cluster are a few other servers with the native client, running nginx to serve files stored on it in the order of 3-10MB. Right now a storage server has an outgoing bandwidth of 300Mbit/s and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the I/O latency skyrockets and no access is possible on the RAID for 10 seconds. The file system used is XFS and it has been tuned to match the RAID stripe size. Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID 10 should deliver way more throughput.
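    A few quick, non-invasive checks that usually help narrow this kind of problem down; the device name is a placeholder and iostat comes from the sysstat package:

        iostat -x 5                              # watch await and %util on the RAID device during a latency spike
        cat /sys/block/sdb/queue/scheduler       # I/O scheduler in use for the RAID block device
        blockdev --getra /dev/sdb                # current read-ahead in 512-byte sectors
        # If MegaCli is installed, the controller cache/BBU state is worth checking too:
        # /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL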

    Read the article

  • Distributed file systems

    - by Neeraj
    I need to implement a distributed storage system for a set of nodes (devices) connected in a mesh network. Basically, my design goals are:
    - The storage system should be capable of handling dynamic entry and exit of nodes.
    - Replication (for fault tolerance).
    For this I am thinking of using a distributed file system, so that every node can access data on the other nodes in a transparent manner. Are there some simple, easily pluggable open-source implementations? Thanks for your thoughts!

    Read the article

  • Can't access USB drive anymore

    - by marie
    I have a 32 GB Lacie Cookey USB flash disk that doesn't show in the Computer window but it's visible as a device.

        cmd > diskpart
        DISKPART> list disk

          Disk ###  Status     Size
          --------  ---------  ------
          Disk 0    Online     149 G
          Disk 1    No Media   0

        DISKPART> select disk 1
        Disk 1 is now the selected disk.
        DISKPART> clean
        Virtual Disk Service error: There is no media in the device.

    It also appears in the Disk Management tool, but the box is empty. Is there anything I can do or is it dead?

    Output from ChipGenius:

        Description: [F:]USB Mass Storage Device(LaCie CooKey)
        Device Type: Mass Storage Device
        Protocal Version: USB 2.00
        Current Speed: High Speed
        Max Current: 200mA
        USB Device ID: VID = 059F PID = 103B
        Serial Number: 070535924B170C18
        Device Vendor: LaCie
        Device Name: CooKey
        Device Revision: 0100
        Manufacturer: LaCie
        Product Model: CooKey
        Product Revision: PMAP
        Controller Vendor: Phison
        Controller Part-Number: PS2251-67(PS2267) - F/W 06.08.53 [2012-09-26]
        Flash ID code: 983AA892 - Toshiba [TLC]
        Tools on web: http://dl.mydigit.net/special/up/phison.html

    Read the article

  • Best practice? Using DPM to backup VMs within each VM or through the host?

    - by andrew
    We've got two Hyper-V hosts running multiple VMs (all flavors of Windows Server). One of the VMs is running MS Data Protection Manager 2010, which runs beautifully (most of the time) and is connected to a separate NAS via iSCSI for the DPM storage. I noticed that when I installed the DPM agent on the Hyper-V hosts, it enumerated the VMs in the DPM protection listing. I don't want to burn through my storage space too fast with duplicate protection, so I was wondering: is it recommended to back up VMs through the host, or is it better to install the DPM agent on each VM and back it up as I would any other machine? It would seem as though most people (currently including me) do it the second way, but is there any advantage to including the entries under Hyper-V (Backup Using Child Partition Snapshot)?

    Read the article

  • Printer options follow Office documents

    - by tkalve
    One person (John) creates an Office document and prints it to his HP printer, which uses the HP Universal Printing PS (v4.7) driver. He has Job Storage (Personal Job) enabled for this printer, with a custom username and a personal PIN. He later sends this document in an e-mail to his colleagues. Another person (Anne) opens the document and tries to print it to her HP printer (also using the HP Universal Printing driver), but is not able to fetch it on the printer. The Job Storage options from John's computer follow the Office Excel document, so Anne has to change them manually to her username and her PIN before she can print. What on earth is causing this, and how do we fix it?

    Read the article

  • Can Time Capsule backup its own network share or can Time Machine be setup to do so?

    - by Sam Brightman
    Apple's Time Capsule can act as both a backup drive and a network shared drive at the same time. Can it back up its own data with incremental backups? I can't find this information anywhere. I've not used Time Machine, so I don't know if it can be configured to back up shared drives (independently of whether Time Capsule has the option). I would like this to be a feasible way of freeing up some storage on an older MacBook Air by keeping large media such as photos archived on the drive, like you would with a USB external drive. But having the only copy accessible throughout the house and not backed up would make me nervous. I like the idea of having a good router, network storage and backup in one box, although obviously it's safer to back up the Capsule itself to USB occasionally too. Other solutions to this are of course welcome if this is not possible.

    Read the article

  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an active-active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT the use of Windows NLB. I'm aware that this may not be a best practice or a recommended setup; however, it is to be configured as follows: two web servers running IIS 7.5 (needing common storage for web content) for HA, and another set of two servers as a SQL cluster in active-passive mode for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I would appreciate a reply covering both the pros and cons of this setup.

    Read the article
