Search Results

Search found 5904 results on 237 pages for 'hybrid storage'.


  • Are the new Hewlett-Packard "Sandy Bridge"-based notebooks dispatched globally?

    - by leladax
    I'm currently trying to figure out why a European chain retailer is delaying a dv7 (the rest of the model number differs from the American one). It has been listed on their site for several days, and although I ordered it on Monday they still don't have it in their central storage. In an earlier call I was advised that since Intel is only announcing the processor now, HP may not start dispatching until the 6th. Is that true? Is HP obligated not to dispatch it before the 6th (legally, or as a gentlemen's agreement)? Does anyone know whether HP is dispatching dv7s normally now? Did they intentionally hold them back from retailers before CES?

    Read the article

  • OWA no longer accessing 1 backend exchange server

    - by Morchuboo
    We have IIS hosting OWA as the web front end to 3 backend Exchange servers. Yesterday we got a lot of event 9791 warnings: "Cleanup of the DeliveredTo table for database 'Second Storage Group\Mailbox Store EUROPE 2' was pre-empted because the database engine's version store was growing too large. 0 entries were purged." At this point the server was crawling. Our mail admin is currently away and not contactable, so we rebooted the server. Everything seems OK when reading mail from Outlook and evolution-mapi clients, but OWA and ActiveSync connections cannot get through. Users whose mailboxes are not on this backend server can log into OWA fine, but users on this server can reach the OWA front end and then, once they submit their credentials, the page returns a 503 Service Unavailable error. We have since restarted the affected Exchange server and the IIS server, as well as running iisreset /noforce, but the problem persists. Can anyone suggest what we should look at...

    Read the article

  • How to configure HA iSCSI for Solaris 10

    - by Noah
    BACKGROUND: We have a StarWind NAS that we are currently using for high-availability storage on our Windows network. StarWind has mirrored drives and multiple IP paths, which the Windows server combines into one HA disk store. QUESTION: How do I accomplish the same thing under Solaris 10? I've looked at ZFS, but the documentation seems to indicate that ZFS wants to do its own RAID/mirroring. I can also attach via iSCSI from Solaris and am presented with both drives served by the StarWind NAS. So how do I configure Solaris so that disks M1 and M2 are treated as a single fault-tolerant drive?
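    Letting ZFS do the mirroring across the two iSCSI LUNs is the usual answer here. A minimal sketch for Solaris 10, assuming the initiator can reach both StarWind targets; the discovery address and disk names below are placeholders:

        # point SendTargets discovery at the StarWind NAS and create device nodes
        iscsiadm modify discovery --sendtargets enable
        iscsiadm add discovery-address 192.168.1.10:3260
        devfsadm -i iscsi
        iscsiadm list target -S        # note the two OS device names, e.g. c2t1d0 / c3t1d0
        # mirror the two LUNs; ZFS now provides the fault tolerance Windows did
        zpool create hapool mirror c2t1d0 c3t1d0

    ZFS then handles resilvering if one path or LUN drops out, which is effectively what the Windows HA disk store was doing for you.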

    Read the article

  • What postgresql client version should I build against, if server is 8.x?

    - by Ben Voigt
    I'm planning updates to a system that is currently running an 8.x server on Windows, an 8.x client on Windows, and an 8.x client on Linux. Obviously that seems like a bad choice of platform in a mixed environment, but the Linux machine has no persistent writable storage (as an anti-rootkit measure). I'm concerned with compatibility between versions right now. Can a Linux PostgreSQL 9.0.x client connect to a Windows 8.x server? The server is using some third-party binary extensions, so upgrading it is a more involved task and will be done later. If combining a 9.0.x client with an 8.x server is discouraged, would the latest 8.x clients be able to continue to connect if I did upgrade the server first? META: What tag is appropriate for backward-compatibility questions?
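    For what it's worth, the client/server wire protocol (version 3) has been stable since PostgreSQL 7.4, so a 9.0.x libpq/psql client can generally talk to an 8.x server. A quick sanity check from the Linux client; hostname and credentials below are placeholders:

        psql "host=winserver port=5432 user=postgres dbname=postgres" -c "SHOW server_version;"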

    Read the article

  • Is it possible to install and use a printer in Azure Worker Roles?

    - by lurkerbelow
    I'd like to know if it's possible to create and use a printer in Azure Worker Roles. I do know that I can install printers with basic batch commands, so I could define a batch script to run as a startup task, something like: rundll32 printui.dll,PrintUIEntry /if /b "printer" /f %windir%\inf\ntprint.inf /r "file:" /m "printername". Question: can I use a printer to print to file, perhaps to local storage? I need to print to file, or at least to have a printer installed, because I have to capture the PCL output from different installed printers. Sadly I cannot test this myself; I don't have a credit card to join the 90-day trial.

    Read the article

  • API-based solutions for sending payments to people without bank accounts

    - by Tauren
    I'm looking for inexpensive ways to send payments to hundreds or thousands of individual contractors, even if they do not have bank accounts. Currently I only need to support payments within the USA, but this may eventually go international. Here's the scenario: I offer a service that allows an organization or manager-type person to coordinate contractors for very short-term jobs. These jobs are typically only an hour or two in length. A contractor may get only one job over an entire month, several jobs spread out over a month, multiple jobs on a single day, or any other combination. Thus a single contractor could earn as little as one job's payment or as much as payment for dozens; a month's pay could be as little as $10 or run into the thousands. Right now the system provides payroll reports to the manager, and it is the manager's responsibility to produce checks, stuff envelopes, and send mail via the US Postal Service. I'd like to remove this burden from the manager and have all the payments taken care of automatically by the system. I'm not sure where to start or what the best options would be. I'm starting to look into the following solutions, but don't know the specifics yet and would like some advice before pursuing them; I'd also like to hear other ideas or suggestions: PayPal (Send Money, Adaptive Payments, x.com, other?); Amazon (Flexible Payments System?); funding some sort of prepaid debit card; a web service with an API that mails checks for you; direct deposit via a bank API (for users with bank accounts). The problem is that many of these contractors may not be able to obtain bank accounts or credit cards within the USA. I don't mind doing a hybrid of solutions, but are there any that would work well with this constraint? I want the solution to be easy for the contractors, meaning they can get the money easily (via a check in the mail, debit card ATM withdrawal, etc.).

    Read the article

  • designing an ASP.NET MVC partial view - showing user choices within a large set of choices

    - by p.campbell
    Consider a partial view whose job is to render markup for a pizza order. The desire is to reuse this partial view in the Create, Details, and Update views. It will always be passed an IEnumerable<Topping> and output a multitude of checkboxes. There are lots... maybe 40 in all (yes, that might smell). A-OK so far. Problem: the question is how to include the user's choices on the Details and Update views. From the datastore we've got a List<ChosenTopping>. The goal is to have each checkbox set to true for each chosen topping. What's the easiest-to-read, or most maintainable, way to achieve this? Potential solutions: 1. Create a ViewModel with the List<Topping> and the List<ChosenTopping>. Write out the checkboxes as per normal; while writing each, check whether the ToppingID exists in the list of ChosenTopping. 2. Create a new ViewModel that's a hybrid of both. Perhaps call it DisplayTopping or similar. It would have properties ID, Name, and IsUserChosen. The respective controller methods for Create, Update, and Details would each build this new collection with respect to the user's choices as they see fit; the Create controller method would basically set all to false so that it appears to be a blank slate. The real application isn't pizza, and the organization is a bit different from this fake example, but the concept is the same. Is it wise to reuse the control for the 3 different scenarios? How better can you display the list of options plus the user's current choices? Would you use jQuery instead to show the user selections? Any other thoughts on the potential smell of splashing up a whole bunch of checkboxes?

    Read the article

  • HTML5 or Flash?

    - by lewiguez
    I have to write a web application for a client soon. Looking at the specs, there is no reason why the project couldn't be an HTML5/CSS/Javascript project, but the client is arguing that it has to be Flash. The project has a number of dynamic elements and is web-based. It'll only be used in-house by a small number of people and all of those people use either Google Chrome or Safari 4. They are all pretty tech-savvy to boot. My question is this: what are some of the reasons (preferably technical since this is Stack Overflow) I can present to my client as to why HTML5 is better than Flash (that's assuming I'm right and it is in this case)? Is it OK to use HTML5 even though it's still a draft spec (I'm assuming it is after checking out all those Apple HTML5 demos a few days ago)? Also, would a hybrid approach be preferable for now? Something that uses Flash wherever the canvas object would've been used in the HTML5 approach and that conforms to a normal XHTML approach. Help!

    Read the article

  • JFFS2 poor mount performance

    - by Marcin Polkowski
    I run multiple ARM boards with Debian Linux installed. Each board is equipped with 512 MB of NAND memory. I've observed that after ~3 months of continuous running, boot time increased significantly: it now takes over 3 minutes to mount the filesystem (JFFS2). The system was using about 35% of available storage, so I removed unnecessary files (getting down to ~18%), but this didn't change anything. Then I realized that my software produces directories that are left empty, so I removed ~500 empty and unnecessary dirs. This didn't help either. After the system starts I see the JFFS2 garbage collector (jffs2_gcd_mtd4) running and occupying over 90% of the CPU. Now my question: is there a way to "optimize" a JFFS2 filesystem for better performance, i.e. faster booting (my systems have a limited time to boot up)? It would be great if this optimization could be done remotely; I have no physical access to the boards.
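    One commonly suggested remedy (this assumes you can rebuild and redeploy the kernel and image remotely) is enabling JFFS2 summary support (CONFIG_JFFS2_SUMMARY) and writing images with sumtool from mtd-utils, which lets the mount skip the full media scan that dominates JFFS2 mount time. To measure where the time currently goes, a sketch with the MTD device name as a placeholder:

        # time the mount itself, then check whether the GC thread is still churning
        time mount -t jffs2 /dev/mtdblock4 /mnt/flash
        ps aux | grep jffs2_gcd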

    Read the article

  • Is there a way to tell if a file is done copying?

    - by Mike Cooper
    The scenario is this: Machine A has files I want to copy to Machine C. Machine A can't access C directly, but can access Machine B, which can access Machine C. I am using scp to copy from Machine A to B, and then from B to C. Machine B has limited storage space, so as files come in I need to copy them to C and delete them from B. The second copy is much faster, so bandwidth is not a problem. I could do this by hand, but I am lazy. What I would like is to run a script on B or C that will copy each file to C as each one finishes. The scp job is running from A. So what I need is a way to ask (preferably from a bash script) whether file X.avi is "done" copying. Each of these files is a different size, and I can't really predict size or time of completion. Edit: by the way, the file transfer times are roughly 1 hour from A to B and about 10 minutes from B to C, if the time scale matters at all.
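    A file that scp is still writing stays open, so fuser (or lsof) reporting no users of the file is a reasonable "done" signal. A minimal sketch of a script that could run on B from cron; the paths are hypothetical:

        #!/bin/sh
        SRC=/incoming                    # where the scp from A lands files
        DEST=user@machine-c:/data        # final destination on C
        for f in "$SRC"/*; do
            [ -f "$f" ] || continue
            # skip files the incoming scp still has open
            if ! fuser "$f" >/dev/null 2>&1; then
                scp -q "$f" "$DEST"/ && rm -- "$f"
            fi
        done

    Another common trick is to have A copy each file to a temporary name and rename it on completion; the rename is atomic, so B only forwards files matching the final naming pattern.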

    Read the article

  • Move the ESXi service console from eth0 to eth1.123

    - by Mircea Vutcovici
    I have a VMware ESXi 4.0.0 host with 2 physical network cards. The first, eth0, carries only the Service Console, and the other, eth1, is a trunk with all VLANs (including the management VLAN used by the Service Console). I would like to free the eth0 port so I can connect network storage, and move the management IP from eth0 to eth1/VLAN123. Can I do this remotely? Is it possible from the vSphere client? Should I do it from the ESXi console?
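    It can be done from the vSphere client, but there is a real risk of cutting off your own management connection mid-change, so the local console is safer. A hedged sketch of the console commands; the port group names, VLAN ID, and addresses are placeholders, and vSwitch1 is assumed to be the vSwitch on the trunked eth1:

        # create a management port group on the trunk, tagged with VLAN 123
        esxcfg-vswitch -A "Mgmt VLAN123" vSwitch1
        esxcfg-vswitch -v 123 -p "Mgmt VLAN123" vSwitch1
        # add a vmkernel management interface on the new port group
        esxcfg-vmknic -a -i 10.0.123.5 -n 255.255.255.0 "Mgmt VLAN123"
        # once the new address answers, remove the old vmknic from eth0's vSwitch
        esxcfg-vmknic -d "Management Network"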

    Read the article

  • Optimal Instance Size for EC2 Sharepoint Server

    - by Rob Wilkerson
    I'm surprised that I can't find any info about this, but I'm not a Windows admin and just a novice EC2 user. I have a client who wants to stand up a Sharepoint server on EC2 for internal use. The team is small (10-20) folks and traffic will be light. Mostly, the client is looking for one place to store documents (and revisions of documents) while making access easy for authenticated users anywhere in the world. They've settled on Sharepoint and have other EC2 instances so that seems like the natural fit, but I'm trying to figure out what to recommend for them. I'm currently thinking about a Medium instance. I'm afraid to go smaller because I think Windows would need a fair amount of memory just to run, but I'm very open to suggestions. Any advice would be much appreciated. I expect that the storage itself would happen in an EBS mount, but again, suggestions welcome. Thanks for your input.

    Read the article

  • ext4: error loading journal

    - by cloudyOutside
    I have an external hard drive with two partitions: a small FAT32 one, which is mostly empty and works fine, and a large ext4 one with tons of data, most of which isn't backed up. The ext4 partition is visible but can't be mounted; I get an "error loading journal" error. The drive is a Western Digital Caviar Blue 500GB: roughly 30GB of that is the FAT32 partition and the rest is the ext4. The light on the enclosure (made by Cavalry) turns red when reading from the bad partition. There wasn't any warning, but coincidentally I've been thinking lately that I should get two large-capacity drives for real backups. Is there anything that can be done? I'm not even sure I have enough storage to back everything up even if it is recoverable.
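    A cautious sketch of the usual recovery sequence, assuming the ext4 partition shows up as /dev/sdb2 (a placeholder; check dmesg for the real name). If you can scrape together space anywhere, image the partition first (for example with ddrescue) so repairs run against a copy rather than the failing disk:

        # first, try reading the data without replaying the damaged journal
        mount -o ro,noload /dev/sdb2 /mnt/rescue
        # if that fails, locate the backup superblocks...
        dumpe2fs /dev/sdb2 | grep -i superblock
        # ...and let e2fsck repair using one of them (32768 is typical for 4K-block filesystems)
        e2fsck -b 32768 /dev/sdb2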

    Read the article

  • How full is too full for mechanical hard drives?

    - by Sunny Molini
    I have heard many claim that it doesn't matter how full a drive is until it starts cutting into temp and virtual memory space. This doesn't make sense to me, given how data is laid out on a hard drive. The inside of the platter presents less data per revolution than the outside does, by a significant factor. The inner 40% of the radius of a full-size hard drive is used for the spindle, so only the outer 60% is used for data storage, but that still means the innermost track of a hard drive presents data 60% slower than the outermost track. By my calculation, a hard drive that is only 10% full should perform about 2.25 times faster than a hard drive that is 90% full, assuming the flow is not constrained by other factors. Am I wildly off base here? For all the drives I know, even the top speeds of the first 1% of the drive would be well within the bandwidth provided by a SATA 2 connection.
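    A back-of-envelope check of that figure, assuming constant areal density, data filled from the outer edge inward, and sequential speed proportional to track radius (data zone spanning 0.4R to R):

        \text{capacity between radii } r \text{ and } R \;\propto\; R^2 - r^2, \qquad R^2 - (0.4R)^2 = 0.84\,R^2
        10\%\text{ full: } R^2 - r^2 = 0.1 \times 0.84\,R^2 \;\Rightarrow\; r \approx 0.96R
        90\%\text{ full: } R^2 - r^2 = 0.9 \times 0.84\,R^2 \;\Rightarrow\; r \approx 0.49R
        \text{worst-case speed ratio} \approx 0.96R \,/\, 0.49R \approx 1.9

    So roughly a factor of 2 at worst; the 2.25x estimate is slightly optimistic but the right order of magnitude, and the exact figure depends on whether you compare the outermost track to the last written track or average over the whole span.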

    Read the article

  • How do you passthrough native SATA drives to a guest on ESXi?

    - by John
    I have ESXi 4.0 running on an Intel DX58S0 motherboard with an Intel Core i7 930 processor. VT-d is also enabled. I have three drives in the system; drive 0 is used for ESXi. Drives 1 and 2 contain data from an older machine and show up under the "Storage Adapters" section in configuration. I would like to allow a guest machine to access the data on these drives (as natively as possible). I have enabled passthrough of the motherboard's built-in SATA controller (Intel/Marvell 88SE6121). This controller shows up in my guest OS, but the guest shows no drives aside from the normal virtual drive. I have tried a Linux guest and Windows 7. I have also configured the host machine to try IDE/RAID/AHCI modes for the SATA controller. Any ideas how I can configure one of my guests to get at the raw data on these drives?
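    If controller passthrough keeps misbehaving, one commonly used alternative is a physical-mode raw device mapping (RDM), which hands a whole disk to a guest without passing the controller through. A sketch from the ESXi console; the disk identifier and datastore path below are placeholders:

        # list the physical disks to find the identifier of each data drive
        ls /vmfs/devices/disks/
        # create a physical-mode RDM pointer file for the disk,
        # then attach the resulting .vmdk to the guest as an existing disk
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_EXAMPLE_DISK /vmfs/volumes/datastore1/rdms/data1.vmdk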

    Read the article

  • Any limitations for putting an SSD in a Mini? How fast would an external HDD be via Firewire? Is Ser

    - by Cyrcle
    I'm considering getting a Mini for web programming. I do a lot of text searches, so I want to put an SSD in it. Does the Mini have any limitations that might affect the performance of an SSD? I'm also trying to decide whether I should get a Mini Server. I'd like to have two internal drives, so one can be an SSD for the OS and the code I'm working on, and the other can be my storage drive. However, I'm not sure I'd use the extra functionality of the server edition of OS X, so I'm reluctant to pay the $200 premium. In a "regular" Mini I could put the SSD internally and use a big external drive, but would the external drive be fast enough via FireWire? Thanks in advance for any info.

    Read the article

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently I keep all uploaded files (Flash video, images, etc.) inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k SCSI drive). I have another drive with much more capacity, but spinning at 10k rpm (so still fast, but not as good for random reads/writes as the main drive). That entire drive is mounted at /backup; right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
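    Generally yes: Apache follows the symlink as long as Options FollowSymLinks is in effect for that path, and PHP doesn't care where the files physically live. A minimal sketch using the paths from the question:

        # move the data to the big drive, then point the old path at it
        mv /home/account/public_html/files /backup/files
        ln -s /backup/files /home/account/public_html/files
        # make sure the vhost or .htaccess covering public_html includes:
        #     Options +FollowSymLinks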

    Read the article

  • How can I compare effective power usage of two CPUs / CPU+Mobo+Mem combinations?

    - by einpoklum
    I have this server which does mostly file sharing (with the associated storage). No serious number crunching, and it isn't the firewall. My current box has a Celeron D processor (Prescott 336, 2.8 GHz), and I'm considering replacing it with a Pentium D (Smithfield 805, 2.66 GHz), for reasons which do not involve performance. How can I know whether to expect higher or lower power consumption from the change? And how can I estimate the power consumption of each option?

    Read the article

  • ESXi5 - management services crash - VMs still running

    - by Frederik Nielsen
    I have a setup with two ESXi5 servers. We are (were) running with an iSCSI box to serve disks for the VMs; however, we are in the process of migrating away from it, because the disk holding the storage OS is bad. Now, one of the ESXi hosts has been running for ~20 hrs, and it seems the management services just crashed on that host. The VMs are still running, so it's not really serious. However, I want to fix it. Should I be worried? Will the VMs keep running? The host does respond to pings. I am running a vCenter to administer the hosts. Thanks in advance.
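    The VMs run independently of the management agents, so they should keep going. Restarting the agents usually brings the host back under vCenter control without touching the guests; a hedged sketch from the host's local console (or SSH, if enabled):

        # restarts hostd, vpxa and friends; running VMs are not affected
        /sbin/services.sh restart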

    Read the article

  • Virtual Windows Servers and Pagefile location [closed]

    - by Luke Puplett
    Considering that Windows makes heavy use of the pagefile even with huge amounts of RAM available, is it not best to have this pagefile on the fastest disk possible, as close to the virtual systems as possible? I'm thinking: RAM disk. Where I work, storage for VMs is out on a NAS/SAN. I'm worried that so much memory access has to go across the network! As an aside, I think it's about time MS got rid of paging and told us to buy more DIMMs. UPDATE: So this question has been downvoted?! Accessing a local spindle is roughly 40,000 times slower than a DIMM, so going over the network will be even slower for hard faults. I don't know why I got the downvote; I'm certain that this is an issue, unless there's some other mechanism in ESX/Hyper-V that manages this.

    Read the article

  • MySQL: migrating a huge DB from InnoDB to NDB Cluster fails with "The table is full"

    - by Nguyen Trong Nhan
    I'm trying to migrate an old database to a MySQL Cluster (4 data nodes) by using the command:

        ALTER TABLE sample ENGINE=NDBCLUSTER

    but I'm getting the following error: The table '#sql-7ff3_3' is full. There are approximately 300 million rows in this table. Here are my config files:

    /mysql-cluster/config.ini:

        [NDBD DEFAULT]
        NoOfReplicas=2
        DataDir=/data/mysql-cluster/ndb/
        BackupDataDir=/data/mysql-cluster/backup/
        DataMemory=10G
        IndexMemory=5G
        TimeBetweenLocalCheckpoints=6
        FragmentLogFileSize=256MB
        NoOfFragmentLogFiles=50
        MaxNoOfOrderedIndexes=8000
        MaxNoOfConcurrentOperations=100000
        MaxNoOfTables = 10000
        RedoBuffer=128M
        MaxNoOfAttributes=5000
        MaxNoOfUniqueHashIndexes=1024

    /etc/my.cnf:

        [mysqld]
        basedir=/usr/local/mysql
        datadir=/data/mysql-cluster/mysqld/
        event_scheduler=on
        default-storage-engine=ndbcluster
        ndbcluster
        ndb-connectstring=192.168.x.x,192.168.x.x
        innodb_file_per_table
        innodb_buffer_pool_size = 512MB
        key_buffer = 512M
        key_buffer_size = 512M
        sort_buffer_size = 512M
        table_cache = 1024
        read_buffer_size = 512M
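    One thing worth checking before tuning anything: NDB tables live entirely in DataMemory/IndexMemory, so with 300 million rows the copy may simply be exhausting memory on the data nodes partway through the ALTER. A quick hedged check from the management host while the migration runs:

        ndb_mgm -e "ALL REPORT MEMORYUSAGE"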

    Read the article

  • Firefox cannot recognize certificates for well-known sites

    - by RCola
    When trying to connect to well-known sites, for instance hotmail.com, Firefox shows that This Connection is Untrusted. In Options > Advanced > Certificates it's configured to select one matching certificate automatically. Why does Firefox not trust the current connection? Could it be a man-in-the-middle attack, or something like a broken certificate store on my computer? UPDATE 2: Solved. The problem is the antivirus's Web Access protection: it interferes with the HTTPS connection, similar to a man-in-the-middle. Why can't ESET do it correctly?

    Read the article

  • Are there any benefits to using a Distributed vSwitch for iSCSI?

    - by dunxd
    I am designing our vSphere farm; we'll be migrating from ESX 3.5 to 4.1. I plan to set up a new farm using ESXi 4.1 and move the virtual machines from the 3.5 farm into it by shutting them down, then importing. In ESX 3.5 there is no distributed networking, so each host has a vSwitch connected to my SAN NICs and a port group for the vmkernel. In vSphere (ESXi 4.1) I have the extra option of setting up a distributed vSwitch and distributed port groups for the vmkernel to access iSCSI storage. Is there any benefit to this, or should I stick to non-distributed networking for iSCSI?

    Read the article

  • Mount Docker container contents in host file system

    - by dflemstr
    I want to be able to inspect the contents of a Docker container (read-only). An elegant way of doing this would be to mount the container's contents in a directory. I'm talking about mounting the contents of a container on the host, not about mounting a folder on the host inside a container. I can see that there are two storage drivers in Docker right now: aufs and btrfs. My own Docker install uses btrfs, and browsing to /var/lib/docker/btrfs/subvolumes shows me one directory per Docker container on the system. This is however an implementation detail of Docker and it feels wrong to mount --bind these directories somewhere else. Is there a proper way of doing this, or do I need to patch Docker to support these kinds of mounts?
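    One driver-agnostic way to get a read-only look, short of patching Docker, is docker export, which streams a container's whole filesystem as a tar archive. It yields a snapshot rather than a live mount, but it avoids depending on aufs/btrfs internals. The container name and scratch directory below are placeholders:

        # list the container's files without extracting anything
        docker export mycontainer | tar -tv | less
        # or unpack a browsable snapshot
        mkdir -p /tmp/rootfs
        docker export mycontainer | tar -x -C /tmp/rootfs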

    Read the article

  • Why do Hyper-V and Windows Backup crash (BSOD) after a successful backup?

    - by Payson Welch
    Hello, I am running Server 2008 R2 with a handful of Hyper-V guest nodes. If Windows Backup runs without any of the Hyper-V nodes running, the server is fine. If a backup runs while the Hyper-V nodes are running, the server is fine until a few minutes after the backup completes, and then it BSODs. The storage location for the backup is iSCSI; I am wondering if anyone has any input on what might be causing this. I don't have the Hyper-V nodes set up on a VLAN, and there is only one NIC on the server. Is it possible this is a networking/driver issue, and if so, how would I reconfigure the networking to fix it?

    Read the article
