Search Results

Search found 5933 results on 238 pages for 'mass storage'.


  • REMINDER: SPARC T4 Servers and ZFS Storage Appliance Demo Equipment Purchase Opportunity

    - by Cinzia Mascanzoni
    Please mark your calendars for the SPARC T4 Servers and ZFS Storage Appliance Demo Program webcast on November 22nd at 12 noon GMT / 1pm CET, and learn how you can take maximum advantage of this unique opportunity. The objective of this call is to share the value, details, guidelines and rules of this demo program with you. Visit the EMEA VAD Resource Center for more info and the details on how to access the webcast.

    Read the article

  • Storage Technology for the Home User

    Linux Magazine: "Sometimes you just have to get excited about what you can buy, hold in your hand, and use in your home machines. Let's look at some cool storage technology that the average desktop user can tackle."

    Read the article

  • Oracle Automatic Storage Management (Japanese-language post; title partly garbled)

    - by Yusuke.Yamamoto
    Posted 2010/03/01. Most of the original Japanese text is garbled in this copy. What survives indicates it covers Automatic Storage Management (ASM), the storage-management feature available since Oracle Database 10g, its disk-management role alongside Oracle Database, and the ASM Storage Reclamation Utility (ASRU). Material: http://www.oracle.com/technetwork/jp/database/1005200-oracle-asm-and-tr-321865-ja.pdf

    Read the article

  • Mass-mailing without getting your domain banned

    - by Arpit
    I plan on sending an email to 10k+ addresses (mostly Gmail and Yahoo) to announce the launch of my startup's product. I'm planning on using PHPMailer or PHPList to send out the mails. I've never mass-mailed before and have a few basic questions. I've already browsed some of the other mass-mail threads on this forum, but the questions remain, hence a new thread.

    1. Are the newsletters sent by so many other organizations sent out in a similar manner, using programs such as PHPMailer or PHPList?
    2. When would Gmail or Yahoo blacklist my domain name? Are there any set parameters, e.g. would 1,000 emails in an hour result in a ban?
    3. If so, what sort of settings should I take note of when sending the emails: a particular format, or spreading the 10k+ emails over 2 days, etc.? (A rough throttling sketch follows below.)
    4. If Gmail or Yahoo blacklists your domain name, is there any way to get off the blacklist?
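    Neither provider publishes hard rate limits, so any concrete numbers here are assumptions. A minimal PHP sketch of the batching idea only: the recipients file, chunk size, and pause length are all made up, and in practice you would drive each chunk from a cron job rather than one long-running script.

        <?php
        // Hypothetical batching sketch: spread a 10k+ send over many hours
        // so the receiving MTAs never see a sudden burst.
        $recipients = file('recipients.txt', FILE_IGNORE_NEW_LINES); // assumed: one address per line
        $chunks = array_chunk($recipients, 500); // 500 per chunk is a guess, not a known limit

        foreach ($chunks as $chunk) {
            foreach ($chunk as $r) {
                // PHPMailer or PHPList would normally do the sending;
                // mail() keeps the sketch minimal
                mail($r, 'We have launched!', 'Product announcement body...',
                     'From: announce@example.com');
            }
            sleep(1800); // 30-minute pause between chunks; tune to taste
        }

    Whatever the send rate, correct SPF, DKIM, and reverse-DNS records for the sending domain matter at least as much for staying off blacklists.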

    Read the article

  • Mass Ball-to-Ball Collision Handling (as in, lots of balls)

    - by BlueThen
    Update: Found out that I was using the radius as the diameter, which was why the mtd was overcompensating.

    Hi, StackOverflow. I wrote a Processing program a while back simulating ball physics. Basically, I have a large number of balls (1,000) with gravity turned on. Detection works great, but my issue is that the balls start acting strangely when they bounce against other balls in all directions. I'm pretty confident this involves the collision handling. For the most part, I'm using Jay Conrod's code. One part that differs is

        if (distance > 1.0) return;

    which I've changed to

        if (distance < 1.0) return;

    because the collision wasn't being performed at all with the first version; I'm guessing that's a typo. The balls overlap when I use his code, which isn't what I was looking for. My attempt to fix it was to move the balls to the edge of each other:

        float angle = atan2(y - collider.y, x - collider.x);
        float distance = dist(x, y, balls[ID2].x, balls[ID2].y);
        x = collider.x + radius * cos(angle);
        y = collider.y + radius * sin(angle);

    This isn't correct, I'm pretty sure of that. I tried the correction algorithm from the previous ball-to-ball topic:

        // get the mtd
        Vector2d delta = (position.subtract(ball.position));
        float d = delta.getLength();
        // minimum translation distance to push balls apart after intersecting
        Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);
        // resolve intersection --
        // inverse mass quantities
        float im1 = 1 / getMass();
        float im2 = 1 / ball.getMass();
        // push-pull them apart based off their mass
        position = position.add(mtd.multiply(im1 / (im1 + im2)));
        ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

    except my version doesn't use vectors, and every ball's mass is 1. The resulting code I get is this:

        PVector delta = new PVector(collider.x - x, collider.y - y);
        float d = delta.mag();
        PVector mtd = new PVector(delta.x * ((radius + collider.radius - d) / d),
                                  delta.y * ((radius + collider.radius - d) / d));
        // push-pull apart based on mass
        x -= mtd.x * 0.5;
        y -= mtd.y * 0.5;
        collider.x += mtd.x * 0.5;
        collider.y += mtd.y * 0.5;

    This code seems to over-correct collisions, which doesn't make sense to me, because nothing else in my code modifies the x and y values of each ball. Some other part of my code could be wrong, but I don't know.

    Here's the snippet of the entire ball-to-ball collision handling I'm using:

        // if the ball has already collided with this one, we don't need to
        // reperform the collision algorithm
        if (alreadyCollided.contains(new Integer(ID2)))
            return;
        Ball collider = (Ball) objects.get(ID2);
        PVector collision = new PVector(x - collider.x, y - collider.y);
        float distance = collision.mag();
        if (distance == 0) {
            collision = new PVector(1, 0);
            distance = 1;
        }
        if (distance < 1)
            return;
        PVector velocity = new PVector(vx, vy);
        PVector velocity2 = new PVector(collider.vx, collider.vy);
        collision.div(distance); // normalize the collision vector
        float aci = velocity.dot(collision);
        float bci = velocity2.dot(collision);
        float acf = bci;
        float bcf = aci;
        vx += (acf - aci) * collision.x;
        vy += (acf - aci) * collision.y;
        collider.vx += (bcf - bci) * collision.x;
        collider.vy += (bcf - bci) * collision.y;
        alreadyCollided.add(new Integer(ID2));
        collider.alreadyCollided.add(new Integer(ID));
        PVector delta = new PVector(collider.x - x, collider.y - y);
        float d = delta.mag();
        PVector mtd = new PVector(delta.x * ((radius + collider.radius - d) / d),
                                  delta.y * ((radius + collider.radius - d) / d));
        // push-pull apart based on mass
        x -= mtd.x * 0.2;
        y -= mtd.y * 0.2;
        collider.x += mtd.x * 0.2;
        collider.y += mtd.y * 0.2;

    Thanks. (Apologies for the lack of sources; Stack Overflow thinks I'm a spammer.)

    Read the article

  • How to organise storage for media content such as video and music?

    - by thor
    Currently, we have a single server hosting all content: music, video and software, which users download over HTTP. Free space is now running out, and we are exploring ways to extend our storage capacity. We want to do it cheaply, simply and reliably (protected from disk/server faults). Currently, we see two ways:

    1. Add a couple of cheap servers with 4 disks each (RAID 1?) and run a distributed file system on top, like GlusterFS. Pros: hopefully we will see all our disks as a single flat file system; just dump content into it and be done. Cons: could be tricky to configure and to handle faults.
    2. Add a couple of cheap servers, all running HTTP servers. Each piece of content (be it a music file or a video) is placed on two randomly selected servers. Pros: no RAID to deal with, as content is duplicated; a single server failure does not take down any part of the content; doubled distribution capacity (any single file can be downloaded from either of the two servers hosting it). Cons: requires some scripting for distributing content and for adding/removing servers.

    Do we miss any other ways? Which of the aforementioned options seems best?
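    For the second option, the placement logic can be small. A rough PHP sketch under assumed names (the server pool, target path, scp transport, and function name are all placeholders, not a tested implementation):

        <?php
        // hypothetical storage pool
        $servers = ['stor1.example.lan', 'stor2.example.lan',
                    'stor3.example.lan', 'stor4.example.lan'];

        function placeFile($file, array $servers) {
            // pick two distinct servers at random
            $picked = array_rand($servers, 2); // returns two distinct keys
            $targets = [$servers[$picked[0]], $servers[$picked[1]]];
            foreach ($targets as $host) {
                // copy the file over; scp with SSH keys is one simple transport
                $cmd = sprintf('scp %s %s:/srv/content/',
                               escapeshellarg($file), $host);
                exec($cmd, $out, $status);
                if ($status !== 0) {
                    throw new RuntimeException("copy to $host failed");
                }
            }
            // record this mapping (e.g. in a database) so the HTTP
            // frontends know which two hosts serve the file
            return $targets;
        }

    Removing a server then means re-replicating every file whose mapping includes it, which is where most of the real scripting effort would go.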

    Read the article

  • setup lowcost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build (let's call it) a low-cost Ra*San which would host the images for our social site (many millions). We keep 5 sizes of every photo, at roughly 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for photo storage. To have HA I can add a second one and use DRBD. I'm looking to get above 150,000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think consumer MLC SSDs will do. I have read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think of my idea?

    - I'm not sure between RAID 6 and RAID 10 (more IOPS, but higher SSD cost).
    - Is ext4 OK for the filesystem?
    - Would you use 1 or 2 RAID controllers, with an expander backplane?

    If anyone has built something similar I would be happy to get real-world numbers.

    UPDATE: I have bought 12 (plus some spares) OCZ Talos 480 GB SAS SSD drives. They will be placed in a 12-bay DAS and attached to a PERC H800 controller (1 GB NV cache, manufactured by LSI, with FastPath). I plan to set up RAID 50 with ext4. If anyone is wondering about benchmarks, let me know what you would like to see; an example fio run is sketched below.
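    A common starting point for measuring the 4K random-read number is fio; the device path and job sizes below are placeholder assumptions (run it against the raw array or a test file, not a mounted production volume):

        fio --name=randread --filename=/dev/sdX --direct=1 --rw=randread \
            --bs=4k --ioengine=libaio --iodepth=32 --numjobs=4 \
            --runtime=60 --group_reporting

    The iodepth and numjobs values usually need tuning upward until IOPS plateau; a single shallow queue understates what an array of a dozen or more SSDs can deliver.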

    Read the article

  • Sending mass email using PHP

    - by Alan
    I am currently writing a music blog. The administrator posts a new article every 2-3 days. Once the administrator posts an article, a mass email will be sent to around 5,000 subscribers immediately. What is the best way to implement the mass-mail feature? Does the following function work?

        function massmail() {
            $content = '...';
            foreach ($recipients as $r) {
                $_content = $content . '<img src="http://xxx/trackOpenRate.php?id=' . $r . '">';
                mail($r, 'subject', $_content);
            }
        }

    Another question: if all 5,000 subscribers are using Yahoo Mail, will Yahoo treat it as a DDoS attack and block the IP address of my SMTP server?
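    A sketch of how the same loop might look with PHPMailer (the modern namespaced v6 API) over authenticated SMTP, reusing one connection and throttling between batches; the host, credentials, addresses, function name, and batch numbers are placeholders, not working values:

        <?php
        use PHPMailer\PHPMailer\PHPMailer;

        require 'vendor/autoload.php';

        function massmailSketch(array $recipients) {
            $mail = new PHPMailer(true);        // throw exceptions on failure
            $mail->isSMTP();
            $mail->Host = 'smtp.example.com';   // placeholder SMTP relay
            $mail->SMTPAuth = true;
            $mail->Username = 'user';           // placeholder credentials
            $mail->Password = 'secret';
            $mail->Port = 587;
            $mail->SMTPKeepAlive = true;        // reuse one SMTP connection for the batch
            $mail->isHTML(true);
            $mail->setFrom('news@example.com', 'Music Blog');
            $mail->Subject = 'New article';

            $content = '...';
            $sent = 0;
            foreach ($recipients as $r) {
                $mail->clearAddresses();
                $mail->addAddress($r);
                // per-recipient open-tracking pixel, as in the original sketch
                $mail->Body = $content
                    . '<img src="http://xxx/trackOpenRate.php?id=' . urlencode($r) . '">';
                $mail->send();
                if (++$sent % 100 === 0) {
                    sleep(60); // pause so receiving MTAs don't see a burst
                }
            }
            $mail->smtpClose();
        }

    Keeping one authenticated SMTP connection open (SMTPKeepAlive) avoids a connect/login round trip per recipient, which is usually the first bottleneck at this volume.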

    Read the article

  • Noob with git repository on Windows Storage Server 2008?

    - by HibbyHoo
    I have a Western Digital Sentinel at home running Windows Storage Server 2008 R2 Essentials. I have several git repositories on it for my own personal projects, and have no problem pushing and pulling over my local network. I want to be able to access those repos remotely from anywhere. I am able to log in and remotely access folders and files on it, but I cannot clone repos using the same address. It hangs for a REALLY long time before finally failing with an error:

        git.exe clone --progress -v "https://myIpAddressHere/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git" "D:\repo"
        Cloning into 'D:\repo'...
        error: Failed connect to myIpAddress:443; No error
        while accessing https://myIpAddress/Remote/fs/files.aspx?path=%5C%5Cmydevicename%5Cmyreposfolder%5Cmyrepo.git/info/refs
        fatal: HTTP request failed
        git did not exit cleanly (exit code 128)

    I'm not too privy to networking or web development, and I have only a rudimentary understanding of how to use git (with TortoiseGit). I'm having a hard time finding search results for this specific problem, and a hard time interpreting generic tutorials for the general scope of this problem. TortoiseGit version: 1.7.13.0. git version: 1.7.10.mysysgit.1.

    Read the article

  • New Project Starting. Got Gas?

    - by merrillaldrich
    “Storage is just like gasoline,” said a fellow DBA at the office the other day. This DBA, Mike is his name, is one of the smartest people I know, so I pressed him, in my subtle and erudite way, to elaborate. “Um, whut?” I said. “Yeah. Now that everything is shared – VMs or consolidated SQL Servers and shared storage – if you want to do a big project, like, say, drive to Vegas, you better fill the car with gas. Drive back and forth to work every day? Gas. Same for storage.” This was a light-bulb-above-my-head...(read more)

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage them to put files on a file server so that others (with appropriate permissions) can use them and so that the files are backed up properly. The result is that most users have large hard drives sitting mainly empty. It's 2010 now; surely there is a system out there that lets you turn that empty space into a virtual SAN or document library?

    What I envision is a client program that is pushed out to users' PCs and coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among the various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group, protecting against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages:

    - Files can often be transferred from a nearer machine.
    - Disk space grows automatically as your company does.
    - It should ultimately be cheaper, as you don't need to keep a separate set of disks.

    I can see a few downsides as well:

    - Occasional degradation of user PC performance, if a machine has to serve or accept a large file transfer during a busy period.
    - Writes have to be propagated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing).
    - You still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials.

    Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple of computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • Mounting a Mail Store that is in a Recovery Storage group On Exchange 2003

    - by Kyle Brandt
    If I have a production server with the mail store Foo in both the storage group companyName and the Recovery Storage Group, is it okay to mount Foo in the RSG while it is mounted in companyName, so I can extract some mailboxes from the recovery storage group? Basically, I am wondering whether it is okay to mount it in both the production and recovery storage groups while the mail server and that particular mail store are in production.

    Reference: "Once an RSG is restored into and mounted up you can connect to it with ExMerge and read out mailboxes into PST files for merging back into a 'live' store" -- http://serverfault.com/questions/49728/test-restore-of-exchange-dbs-with-the-ms-exchange-plugin-of-netbackup-6

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows:

    - D0: 500 GB (System, Boot)
    - D1: 1 TB
    - D2: 500 GB
    - D3: 250 GB

    There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows software RAID to mirror the OS. Partitions will be created as follows:

    - D0 P1: 100 MB: System-Reserved Boot
    - D0 P2: 50 GB: Virtual machine VMDKs for OS
    - D0 P3: 350 GB: Data
    - D1 P1: 100 MB: System-Reserved Boot
    - D1 P2: 50 GB: Virtual machine VMDKs for OS
    - D1 P3: 800 GB: Data
    - D2 P1: 450 GB: Data
    - D3 P1: 200 GB: Data

    This will result in:

    - Mirrored boot partition
    - Mirrored operating system
    - Mirrored virtual machine OS disks
    - Four partitions for data

    In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes). In effect, I'll end up with the following usable partitions:

    - SYSTEM, 100 MB: the small boot partition created by the Windows 7 installer (RAID 1)
    - HOST, 50 GB: the Windows 7 partition (RAID 1)
    - GUESTS, 50 GB: virtual machine guest VMDKs (RAID 1)
    - VG1, 900 GB: volume group consisting of a RAID 5 and two RAID 1s
    - VG2, 300 GB: volume group consisting of a single disk

    On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for data that is not critical and is easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Web application and remote storage of files

    - by Matt
    Hi, I have a web application that can store lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, its disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I NFS-mount a path from the storage server on the IBM server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network; only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server? A possible mount entry is sketched below.
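    If the NFS route wins, the web-host side is a single fstab line; the hostname, export, and mount point below are placeholder assumptions:

        # /etc/fstab on the web host -- names and paths are assumptions
        storage.internal:/export/uploads  /var/www/app/storage  nfs  rw,hard,noatime  0 0

    A hard mount makes the web server block rather than corrupt writes if the storage server disappears briefly; for a mostly-read workload on the same LAN, NFS overhead is typically modest.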

    Read the article

  • Implementing Circle Physics in Java

    - by Shijima
    I am working on a simple physics-based game where 2 balls bounce off each other. I am following a tutorial, 2-Dimensional Elastic Collisions Without Trigonometry, for the collision reactions. I am using Vector2 from the libGDX library to handle vectors. I was a bit confused about how to implement step 6 from the tutorial in Java. Below is my current code; please note that the code follows the tutorial closely and there are redundant pieces which I plan to refactor later. Note: references to this refer to ball 1, and ball refers to ball 2.

        /*
         * Step 1
         *
         * Find the normal, unit normal and unit tangential vectors
         */
        Vector2 n = new Vector2(this.position[0] - ball.position[0],
                                this.position[1] - ball.position[1]);
        Vector2 un = new Vector2(n).nor(); // nor() normalizes in place, so copy first
        Vector2 ut = new Vector2(-un.y, un.x);

        /*
         * Step 2
         *
         * Create the initial (before collision) velocity vectors
         */
        Vector2 v1 = this.velocity;
        Vector2 v2 = ball.velocity;

        /*
         * Step 3
         *
         * Resolve the velocity vectors into normal and tangential components
         */
        float v1n = un.dot(v1);
        float v1t = ut.dot(v1);
        float v2n = un.dot(v2);
        float v2t = ut.dot(v2);

        /*
         * Step 4
         *
         * Find the new tangential velocities after the collision
         * (unchanged in a frictionless collision)
         */
        float v1tPrime = v1t;
        float v2tPrime = v2t;

        /*
         * Step 5
         *
         * Find the new normal velocities; the whole numerator is
         * divided by the combined mass
         */
        float v1nPrime = (v1n * (this.mass - ball.mass) + 2 * ball.mass * v2n)
                         / (this.mass + ball.mass);
        float v2nPrime = (v2n * (ball.mass - this.mass) + 2 * this.mass * v1n)
                         / (this.mass + ball.mass);

        /*
         * Step 6
         *
         * Convert the scalar normal and tangential velocities back into
         * vectors: v' = v_n' * un + v_t' * ut. This is my sketch of what
         * the tutorial intends; scl() is mul() in older libGDX versions.
         */
        this.velocity = new Vector2(un).scl(v1nPrime).add(new Vector2(ut).scl(v1tPrime));
        ball.velocity = new Vector2(un).scl(v2nPrime).add(new Vector2(ut).scl(v2tPrime));

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I copied and adapted the existing configuration files from another client, but when I schedule a job for the new client I get these errors:

        20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
        20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
        20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
        20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant.)
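    Since jobs for the other clients work, the mismatch is usually in whichever resources name the storage for this one job. A sketch of the director-side Storage resource and the fields that have to line up (the password is a placeholder); note that the Address is what the director hands to the new FD, so it must resolve and be reachable from that client, not just from the director:

        # bacula-dir.conf -- the Storage resource this job uses
        Storage {
          Name = FileStorage            # must match "Storage =" in the Job/Pool
          Address = bacula.mylan.local  # handed to the FD; must resolve from the client
          SDPort = 9103
          Password = "sd-secret"        # placeholder; must match bacula-sd.conf
          Device = FileStorage          # must match a Device { Name } in bacula-sd.conf
          Media Type = File             # must match that Device's Media Type exactly
        }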

    Read the article

  • How can I set up a dual-site Storage Daemon in Bacula (mirror the backup)

    - by Andy
    On Site A, I have successfully set up a Bacula director on one host, several File Daemons on the hosts I want to back up, and finally one Storage Daemon where the backups are actually stored. In case disaster strikes the building at Site A, I want a second Storage Daemon on another site, Site B. The FileSets, Director, etc. would stay the same, except that jobs would be stored on the other Storage Daemon as well. Are there any best practices for this?
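    One common shape is to define the second SD as another Storage resource in the director and point a separate job at it; a minimal sketch with placeholder names and addresses:

        # bacula-dir.conf -- second storage daemon at Site B
        Storage {
          Name = siteB-sd
          Address = sd.siteb.example   # placeholder address
          SDPort = 9103
          Password = "siteb-secret"    # placeholder; must match Site B's bacula-sd.conf
          Device = FileStorageB
          Media Type = FileB           # distinct media type keeps the volumes separated
        }

        # a second job writing the same fileset to Site B
        Job {
          Name = "BackupHost1-SiteB"
          JobDefs = "DefaultJob"
          Client = host1-fd
          FileSet = "Full Set"
          Storage = siteB-sd
          Pool = SiteBPool
        }

    Bacula can also replicate finished jobs between storage daemons with Copy-type jobs, which avoids reading each client twice; whether that beats a second direct backup mostly depends on the WAN link between the sites.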

    Read the article
