Search Results

Search found 777 results on 32 pages for 'volumes'.

Page 12 of 32

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet, I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail. The more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons: admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
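    The "one large pool, carve it up with quotas" approach the poster describes can be illustrated with per-dataset quotas on ZFS. This is only a sketch of the idea, not something from the question; the pool layout, disk names, and sizes are made up:

        # one large pool instead of many small LUNs
        zpool create tank raidz2 da0 da1 da2 da3 da4 da5
        # one dataset per project, capped by a quota instead of a fixed-size LUN
        zfs create tank/project-a
        zfs set quota=2T tank/project-a
        zfs set reservation=500G tank/project-a   # optional: guarantee it space
        # raising the cap later is a metadata change, not a volume resize
        zfs set quota=4T tank/project-a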

    Read the article

  • Excel or OpenOffice Table Summary: how to reconstruct a table from another, with "missing" values

    - by Gilberto
    I have a table of values (partial) with 3 columns: month (from 1 to 12), code and value. E.g.:

        MONTH | CODE | VALUE
        1     | aaa  | 111
        1     | bbb  | 222
        1     | ccc  | 333
        2     | aaa  | 1111
        2     | ccc  | 2222

    The codes are clients and the values are sales volumes. Each row represents the sales for one month for one client. So I have three clients, namely aaa, bbb, and ccc. For month=1 their sales volumes are: aaa-111, bbb-222, and ccc-333. A client may or may not have sales for every month; for example, for month 2 the client bbb has no sales. I have to construct a completed summary table for all the MONTH / CODE pairs with their corresponding VALUE (using the value from the "partial" table if present, otherwise printing the string "missing"):

        MONTH | CODE | VALUE
        1     | aaa  | 111
        1     | bbb  | 222
        1     | ccc  | 333
        2     | aaa  | 1111
        2     | bbb  | missing
        2     | ccc  | 2222

    Or, to put it another way, the table is a linear representation of a month-by-client matrix, and I want to identify the cells for which no value was provided. How can I do that?

    Read the article

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be still unclear, I figured I'd try to get my stuff migrated into BitLocker at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it's not even appearing to use that space? BitLocker FAQ on TechNet

    Read the article

  • P2V Wouldn't Boot, Rebuilt initrd, Need to Clean Up

    - by Mike Soule
    We have a CentOS 5.4 server (build 2.6.18-164.el5xen). We went to P2V this server so we can have redundancy; the physical box only has one PSU. The P2V only completed 99% of the way. We have a VMware ticket opened, but they marked it as low priority. I was able to boot into a rescue disc of Red Hat 5.4 and rebuild the initrd with the help of this blog post. Now the only issue is that the original server had a modified initrd, which was also from a different OS build and made by an outside provider. We do not have a document outlining the modifications. My question is: is it at all possible to copy the initrd off of the physical server, replace it on the virtual one, and somehow have the virtual machine boot? Thanks for any input.

    Edit: I copied the initrd image from the physical server and it recreated the original issue. Here is a screen capture of the error: http://i.imgur.com/MqC73.jpg

    Edit2: the relevant part of the init script inside the initrd is:

        echo Scanning logical volumes
        lvm vgscan --ignorelockingfailure
        echo Activating logical volumes
        lvm vgchange -ay --ignorelockingfailure VolGroup00
        resume /dev/VolGroup00/LogVol01
        echo Creating root device.
        mkrootdev -t ext3 -o defaults,ro /dev/VolGroup00/LogVol00
        echo Mounting root filesystem.
        mount /sysroot
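    Rather than carrying over the physical server's customized initrd, one option is to regenerate an initrd inside the VM from the rescue environment so that it picks up drivers for the virtual hardware. A rough sketch, assuming the VM's root filesystem is mounted under /mnt/sysimage and the kernel version matches the one in the question; the --with modules are a guess at what a VMware guest typically needs, not something from the question:

        # from the rescue environment
        chroot /mnt/sysimage
        cp /boot/initrd-2.6.18-164.el5xen.img /boot/initrd-2.6.18-164.el5xen.img.bak
        mkinitrd -f -v --with=mptspi --with=mptscsih \
            /boot/initrd-2.6.18-164.el5xen.img 2.6.18-164.el5xen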

    Read the article

  • Bacula stops writing to disk volume after 2GB

    - by m.list
    Bacula Version: 5.2.5. I have configured Bacula to write volumes to disk, however Bacula stops writing to the volume as soon as it reaches 2 GB. The file system is not the issue, as I have stored files larger than 2 GB on it.

        06-Dec 17:22 backup-sd JobId 8421: End of Volume "Full-Monthly-0005" at 0:2147475577 on device "FileStorage" (/nfs/backup-pool). Write of 64512 bytes got 8069.
        06-Dec 17:22 backup-sd JobId 8421: End of medium on Volume "Full-Monthly-0005" Bytes=2,147,475,578 Blocks=33,288 at 06-Dec-2012 17:22.

        backup1@backup:/nfs/backup-pool$ ls -alh Full-Monthly-0005
        -rw-r----- 1 bacula tape 2.0G Dec 3 16:14 Full-Monthly-0005

    bacula-dir.conf:

        Pool {
          Name = Full-Monthly
          Pool Type = Backup
          Recycle = yes
          Volume Retention = 5 months
          Volume Use Duration = 1 day
          Maximum Volumes = 5
          Maximum Volume Bytes = 12gb
        }

    bacula-sd.conf:

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /nfs/backup-pool
          LabelMedia = yes   # lets Bacula label unlabeled media
          Random Access = Yes
          RemovableMedia = no
          AlwaysOpen = no
          Label media = yes
          Maximum Volume Size = 12gb
        }

    In my original configuration Maximum Volume Bytes and Maximum Volume Size were not set at all and so should have defaulted to no maximum, but that did not work either.
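    The volume stops at almost exactly 2^31 bytes, so one hedged check is whether a file larger than 2 GB can be written over the same NFS mount as the user the storage daemon runs as (the poster has large files on the filesystem, but an NFSv2 mount, for example, still caps individual files at 2 GB). The paths come from the question; everything else is illustrative:

        # try to push a file past 2 GB as the bacula user, on the same mount
        su -s /bin/sh bacula -c 'dd if=/dev/zero of=/nfs/backup-pool/bigfile.test bs=1M count=3000'
        ls -lh /nfs/backup-pool/bigfile.test && rm /nfs/backup-pool/bigfile.test
        # check the NFS version and mount options in use
        mount | grep backup-pool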

    Read the article

  • Understanding where an Amazon EC2 instance runs

    - by kenzo450D
    I am currently using the AWS API tools from my local desktop. I can successfully take backups of my Amazon volumes, and even create an AMI from them. Now, when I run an instance built from this AMI, where does the instance run: in Amazon's Elastic Compute Cloud, or on the computer from which the command was issued? Suppose I want to create the new instance in a different region (one of the locations listed by ec2-describe-regions); how would I do that? It seems I have a poor understanding of the relationship between Amazon volumes and instances, so please explain it. I am only allowed to use the CLI tools for all of my work. I made a new snapshot of the existing instance, made an AMI using ec2-register, made a keypair, and then followed these steps: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-an-instance.html#launching-an-instance-cli but I got this error:

        Client.InvalidParameterValue: The requested instance type's architecture (i386) does not match the architecture in the manifest for aki-fc37bacc (x86_64)

    My local computer is 32-bit, but I do not want to run the instance on my local computer; I want it to run on Amazon's servers.
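    For what it's worth, ec2-run-instances always launches the instance on Amazon's side, regardless of which machine the CLI is run from; the error above only means the requested instance type is a 32-bit (i386) type while the registered AMI/AKI is x86_64. A hedged sketch of launching into a chosen region with a 64-bit instance type; the AMI ID, keypair name, and region are placeholders, not values from the question:

        ec2-describe-regions
        ec2-run-instances ami-xxxxxxxx --region eu-west-1 -t m1.large -k my-keypair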

    Read the article

  • Where's my free space gone? (on my mac) [closed]

    - by Cawas
    Possible Duplicate: Something's slowly eating my HD space

    Somehow part of the files from my USB disk, 40 GB of them (exactly the space I was missing), were copied to /Volumes/, and when I mounted the disk it was called "600GB Disk 2", while "600GB Disk" was filled with the duplicated data. All of that happened at once, slowly, just this morning when I turned on my MacBook. I noticed it thanks to Disk Inventory X. I could actually see those files in GrandPerspective, but I thought it was just scanning my USB disk limited to that folder for whatever reason. In Disk Inventory I could see /Volumes/ listing one entry as a folder and the second one as a link, like it should be. Looking at that folder, I quickly associated what was in it with my scheduled backup in Carbon Copy Cloner. I'm still not sure why the USB disk was mounted with the wrong name, but what happened was that CCC stored the full path of the source and destination, so when it tried to do the scheduled backup it created the path that didn't exist and copied everything there, while it should have been copying into the mounted volume. While this is solved this time, what else could I have done to diagnose this kind of issue for the next time?
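    For the "next time" question: when a destination under /Volumes/ silently turns into a plain folder on the boot drive instead of a mount point, a couple of quick checks expose it. A hedged sketch using the volume names from the question:

        # is the path really a mount point, or just a directory on the boot disk?
        mount | grep "600GB"
        df -h "/Volumes/600GB Disk" "/Volumes/600GB Disk 2"
        # du -x stays on one filesystem, so a rogue plain folder shows its real size
        # while a genuine mount point stays tiny
        sudo du -shx /Volumes/*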

    Read the article

  • mdadm lvm and ext4 slowness - How can I speed it up?

    - by beatbreaker
    I can't figure out why I'm getting such terrible times out of my mdadm array, and in particular the LVM volumes on it. I made the RAID:

        mdadm --create --verbose /dev/md0 --level=5 --chunk=1024 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # cat /proc/mdstat
        Personalities : [raid6] [raid5] [raid4]
        md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
              2930279424 blocks level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]

    I then created the physical volume, volume group, and logical volumes, and formatted the logical volumes as ext4 using the following command, which I got from http://busybox.net/~aldot/mkfs_stride.html :

        mkfs.ext3 -b 4096 -E stride=256,stripe-width=768 /dev/datavg/blah

    Now I'm confused. These LVs were running really quick before in mdadm, but now that I've 'optimized' everything it's slower. E.g., before:

        /dev/datavg/lv_audio:
         Timing buffered disk reads: 598 MB in 3.01 seconds = 198.85 MB/sec

    but now, after:

        /dev/datavg/audio:
         Timing buffered disk reads: 198 MB in 3.00 seconds = 65.96 MB/sec

    That's pitiful! What's happened here? Did I not follow the instructions correctly? Can I reshape the ext4 partitions back to the defaults? (I used defaults before and they were fine!)
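    Those figures look like hdparm -t output, which mostly exercises sequential reads and read-ahead rather than the ext4 stride settings, so it is worth comparing layers before blaming the filesystem. A hedged set of checks; the device names come from the question, and the read-ahead value is only an example, not a recommendation:

        # compare the raw md device with the LV on top of it
        hdparm -t /dev/md0
        hdparm -t /dev/datavg/audio
        # read-ahead often differs between the md device and the LV
        blockdev --getra /dev/md0
        blockdev --getra /dev/datavg/audio
        blockdev --setra 4096 /dev/datavg/audio
        # confirm the stride/stripe-width the filesystem was actually created with
        tune2fs -l /dev/datavg/audio | grep -i 'stride\|stripe'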

    Read the article

  • Resque Runtime Error at /workers: wrong number of arguments for 'exists' command

    - by Superflux
    I'm having a runtime errror when i'm looking at the "workers" tab on resque-web (localhost). Everything else works. Edit: when this error occurs, i also have some (3 or 4) unknown workers 'not working'. I think they are responsible for the error but i don't understand how they got here Can you help me on this ? Did i do something wrong ? Config: Resque 1.8.5 as a gem in a rails 2.3.8 app on Snow Leopard redis 1.0.7 / rack 1.1 / sinatra 1.0 / vegas 0.1.7 file: client.rb location: format_error_reply line: 558 BACKTRACE: * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_error_reply * 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end 556. 557. def format_error_reply(line) 558. raise "-" + line.strip 559. end 560. 561. def format_status_reply(line) 562. line.strip 563. end 564. 565. def format_integer_reply(line) * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in format_reply * 541. 542. def reconnect 543. disconnect && connect_to_server 544. end 545. 546. def format_reply(reply_type, line) 547. case reply_type 548. when MINUS then format_error_reply(line) 549. when PLUS then format_status_reply(line) 550. when COLON then format_integer_reply(line) 551. when DOLLAR then format_bulk_reply(line) 552. when ASTERISK then format_multi_bulk_reply(line) 553. else raise ProtocolError.new(reply_type) 554. end 555. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in read_reply * 478. disconnect 479. 480. raise Errno::EAGAIN, "Timeout reading from the socket" 481. end 482. 483. raise Errno::ECONNRESET, "Connection lost" unless reply_type 484. 485. format_reply(reply_type, @sock.gets) 486. end 487. 488. 489. if "".respond_to?(:bytesize) 490. def get_size(string) 491. string.bytesize 492. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in map * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in process_command * 446. end 447. 448. return pipeline ? results : results[0] 449. end 450. 451. def process_command(command, argvv) 452. @sock.write(command) 453. argvv.map do |argv| 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. 
set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end 447. 448. return pipeline ? results : results[0] 449. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in synchronize * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in maybe_lock * 454. processor = REPLY_PROCESSOR[argv[0].to_s] 455. processor ? processor.call(read_reply) : read_reply 456. end 457. end 458. 459. def maybe_lock(&block) 460. if @thread_safe 461. @mutex.synchronize(&block) 462. else 463. block.call 464. end 465. end 466. 467. def read_reply 468. * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in raw_call_command * 432. end 433. # When in Pub/Sub mode we don't read replies synchronously. 434. if @pubsub 435. @sock.write(command) 436. return true 437. end 438. # The normal command execution is reading and processing the reply. 439. results = maybe_lock do 440. begin 441. set_socket_timeout!(0) if requires_timeout_reset?(argvv[0][0].to_s) 442. process_command(command, argvv) 443. ensure 444. set_socket_timeout!(@timeout) if requires_timeout_reset?(argvv[0][0].to_s) 445. end 446. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in call_command * 336. # try to reconnect just one time, otherwise let the error araise. 337. def call_command(argv) 338. log(argv.inspect, :debug) 339. 340. connect_to_server unless connected? 341. 342. begin 343. raw_call_command(argv.dup) 344. rescue Errno::ECONNRESET, Errno::EPIPE, Errno::ECONNABORTED 345. if reconnect 346. raw_call_command(argv.dup) 347. else 348. raise Errno::ECONNRESET 349. end 350. end * /Library/Ruby/Gems/1.8/gems/redis-1.0.7/lib/redis/client.rb in method_missing * 385. connect_to(@host, @port) 386. call_command([:auth, @password]) if @password 387. call_command([:select, @db]) if @db != 0 388. @sock 389. end 390. 391. def method_missing(*argv) 392. call_command(argv) 393. end 394. 395. def raw_call_command(argvp) 396. if argvp[0].is_a?(Array) 397. argvv = argvp 398. pipeline = true 399. else * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in send * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/redis-namespace-0.4.4/lib/redis/namespace.rb in method_missing * 159. args = add_namespace(args) 160. args.push(last) if last 161. when :alternate 162. args = [ add_namespace(Hash[*args]) ] 163. end 164. 165. # Dispatch the command to Redis and store the result. 166. result = @redis.send(command, *args, &block) 167. 168. # Remove the namespace from results that are keys. 169. result = rem_namespace(result) if after == :all 170. 171. result 172. end 173. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/worker.rb in state * 416. def idle? 417. state == :idle 418. end 419. 420. # Returns a symbol representing the current worker state, 421. # which can be either :working or :idle 422. def state 423. 
redis.exists("worker:#{self}") ? :working : :idle 424. end 425. 426. # Is this worker the same as another worker? 427. def ==(other) 428. to_s == other.to_s 429. end 430. * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * 46. <tr> 47. <th>&nbsp;</th> 48. <th>Where</th> 49. <th>Queues</th> 50. <th>Processing</th> 51. </tr> 52. <% for worker in (workers = resque.workers.sort_by { |w| w.to_s }) %> 53. <tr class="<%=state = worker.state%>"> 54. <td class='icon'><img src="<%=u state %>.png" alt="<%= state %>" title="<%= state %>"></td> 55. 56. <% host, pid, queues = worker.to_s.split(':') %> 57. <td class='where'><a href="<%=u "workers/#{worker}"%>"><%= host %>:<%= pid %></a></td> 58. <td class='queues'><%= queues.split(',').map { |q| '<a class="queue-tag" href="' + u("/queues/#{q}") + '">' + q + '</a>'}.join('') %></td> 59. 60. <td class='process'> * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in each * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server/views/workers.erb in __tilt_a2112543c5200dbe0635da5124b47311 * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in send * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in evaluate * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/tilt.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in render * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in erb * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in show * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/lib/resque/server.rb in GET /workers * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in each * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in route! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in dispatch! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in instance_eval * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in catch * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in invoke * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call! 
* /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/showexceptions.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in synchronize * /Library/Ruby/Gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/content_length.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/chunked.rb in call * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in process * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in each * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in process_client * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in initialize * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in new * /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.1.5/lib/mongrel.rb in run * /Volumes/Donnees/Users/**/.gem/ruby/1.8/gems/rack-1.1.0/lib/rack/handler/mongrel.rb in run * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in run! * /Library/Ruby/Gems/1.8/gems/vegas-0.1.7/lib/vegas/runner.rb in start * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in new * /Library/Ruby/Gems/1.8/gems/resque-1.8.5/bin/resque-web in nil * /usr/bin/resque-web in load

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers   We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers.  Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence, preconcieved and implemented inside an applications.  Read on for more information! TM Forum Management World, Nice, France - 18 May 2010 News Facts To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Product Details Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using: More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity. Embedded OLAP cubes for extremely fast dimensional analysis of business information. Embedded data mining models for sophisticated trending and predictive analysis. Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements. With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more. Supporting Quotes "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation. "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. 
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • PENGUIN IS GETTING READY FOR ORACLE OPENWORLD 2012

    - by Zeynep Koch
    Are you looking for reasons to attend Oracle Openworld, how about below Oracle Linux sessions and hands-on-labs.  1. General Session: Oracle Linux Strategy and Roadmap  In this session, Oracle executives will discuss Linux strategy; the roadmap; contributions to the Linux mainline kernel; and what's in store for upcoming releases of Oracle Linux and the Unbreakable Enterprise Kernel. Don’t miss this session. 2. New Features in Oracle Linux- A Technical Deep Dive Collaborating with the Linux community, Oracle engineers contribute to advancing Linux for mission-critical deployments. In this technical session, attendees will learn about the recent developments in Oracle Linux and the Unbreakable Enterprise Kernel 3. Why Switch to Oracle Linux?  Oracle is the only company that provides a complete Linux solution from applications to disk, fully optimized for Oracle hardware and software, with one-stop support. In this session you will hear from two customers that have successfully implemented Oracle Linux and saved 50 to 90 percent on Linux support costs as well as the reasons to switch to Oracle Linux. 4. Debugging and Configuration Best Practices for Oracle Linux This is one of our best attended sessions and most informative. In this best practices session, learn how to save time and money while preventing headaches and hassles. Discover expert secrets to get your Linux systems up and running (and keep them running), avoid common pitfalls, prevent problems, and circumvent known issues. 5. Top Technical Tips for Automatic and Secure Oracle Linux Deployments In this session, attendees will learn about how to easily deploy and install Oracle Linux systems using various technologies like Kickstart, Oracle Enterprise Manager OpsCenter, and Oracle VM Templates for applications on Linux. Additionally, the session will share useful Linux security tips and introduce utilities to help with hardening and securely operating an Oracle Linux system. We also have a great session in Oracle Develop track: 6. DTrace for Oracle Linux Initially announced at last year's Oracle Openworld, DTrace for Oracle Linux is now available for the Unbreakable Enterprise Kernel R.2. In this session held by one of the engineers working on the DTrace for Linux port, you will learn how you can use this powerful and flexible framework in your development environment. If you prefer to really have practical experience, don’t miss our two Hands-on-Labs where we will cover: HOL-1 : Oracle Linux Package Management: Configuring and Enabling Services In this session you will be Installing and configuring Oracle VM VirtualBox, importing the Oracle Linux virtual appliance. You will then use the package management on Oracle Linux using RPM and yum. You will also be able to review Ksplice, zero downtime kernel updates that enable you to apply security updates, patches and critical bug fixes without rebooting. HOL-2: Oracle Linux Storage Management with LVM and Device Mapper In this session you will learn about storage management with LVM2, the Linux Logical Volume Manager, Btrfs, preparing block devices, creating physical and logical volumes, creating file systems on top of logical volumes, and resizing file systems dynamically. You will also practice setting up software RAID devices, configuring encrypted block devices. You will also see Oracle Linux and Kpslice in the three demopods we will feature at Exhibition demogrounds. One in MySQL Connect and two in Oracle Openworld. What more do you need to come to San Francisco? 
Oh, I forgot to mention we also have great weather in fall.. Check out the Content Catalog and register to attend Oracle Linux sessions.
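    For readers curious what HOL-2 covers in practice, the LVM workflow it describes boils down to a short command sequence. This is only a rough sketch of that kind of lab, not the official lab script; the device, volume group, and mount names are made up:

        # prepare block devices and build a volume group
        pvcreate /dev/sdb1 /dev/sdc1
        vgcreate datavg /dev/sdb1 /dev/sdc1
        # create a logical volume, put a filesystem on it, and mount it
        lvcreate -L 20G -n appdata datavg
        mkfs.ext4 /dev/datavg/appdata
        mkdir -p /srv/appdata
        mount /dev/datavg/appdata /srv/appdata
        # grow the volume and resize the filesystem dynamically
        lvextend -L +10G /dev/datavg/appdata
        resize2fs /dev/datavg/appdata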

    Read the article

  • PHPUnit installed but class PHPUnit_TestCase not found

    - by Greg K
    Talk about falling at the first hurdle. My test script:

        <?php
        require_once('PHPUnit/Framework.php');

        class TransferResponseTest extends PHPUnit_TestCase {
            ...
        }

    Running my test case:

        $ phpunit TransferResponseTest
        Fatal error: Class 'PHPUnit_TestCase' not found in /Volumes/Data/greg/code/syndicate/tests/TransferResponseTest.php on line 5

        $ php -i | grep include_path
        include_path => .:/usr/lib/php => .:/usr/lib/php

        $ ls -l /usr/lib/php/PHPUnit/
        total 8
        drwxr-xr-x 16 root wheel  544 27 Mar 19:03 Extensions
        drwxr-xr-x 28 root wheel  952 27 Mar 19:03 Framework
        -rw-r--r--  1 root wheel 3193 27 Mar 19:03 Framework.php
        drwxr-xr-x  8 root wheel  272 27 Mar 19:03 Runner
        drwxr-xr-x  5 root wheel  170 27 Mar 19:03 TextUI
        drwxr-xr-x 32 root wheel 1088 27 Mar 19:03 Util

    I copied /etc/php.ini-default to /etc/php.ini and explicitly specified the include path as /usr/lib/php/ with a trailing / but still no success.

        $ php -i | grep include_path
        include_path => .:/usr/lib/php/ => .:/usr/lib/php/

        $ phpunit TransferResponseTest.php
        PHP Fatal error: Class 'PHPUnit_TestCase' not found in /Volumes/Data/greg/code/syndicate/tests/TransferResponseTest.php on line 5

        $ phpunit --version
        PHPUnit 3.4.11 by Sebastian Bergmann.

    Any ideas?

    Read the article

  • Bash scripting problem

    - by komidore64
    I'm writing a bash script to sync my iTunes music directory to a directory on a removable hard drive. The script works fine when there is absolutely nothing in the folder on the external hard drive. Once all files have been copied to the external drive, the script begins to act strange: even though I just synced everything over, it proceeds to recopy certain files. After the initial sync, it chooses the same files to resync each consecutive time the script is executed, without any changes being made to the source directory.

        #!/bin/bash
        # shell script to sync music with gigabeat and/or firewire drive

        musicdir="/Users/komidore64/Music/iTunes/iTunes Media/Music"
        gigadir="/Volumes/GIGABEAT/music"
        # fwdir="/Volumes/"

        remove() {
            find "$1" \
                ! \( -name "*.wav" \
                -o -name "*.ogg" \
                -o -name "*.flac" \
                -o -name "*.aac" \
                -o -name "*.mp3" \
                -o -name "*.m4a" \
                -o -name "*.wma" \
                -o -name "*.m4p" \
                -o -name "*.ape" \
                -o -type d \) \
                -exec rm -i {} \;
        }

        if [ $# == 0 ]; then
            echo "no device argument present"
            echo "specify '-g' for gigabeat"
            echo "or '-f' for firewire drive"
        else
            remove "$musicdir"
            while [ $1 ]; do
                case $1 in
                    -g | --gigabeat )
                        rsync --archive --verbose --delete "$musicdir/" "$gigadir" ;;
                    -f | --firewire )
                        rsync --archive --verbose --delete "$musicdir/" "$fwdir"
                esac
                shift
            done
            echo "music synced"
        fi
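    A common reason rsync keeps re-copying the same files to a removable drive is that the destination filesystem (often FAT32) stores modification times at a coarser resolution, so --archive's timestamp comparison never matches for those files. A hedged way to confirm and work around it, reusing the variables from the script above:

        # dry run that itemizes why each file would be transferred again
        rsync --archive --verbose --dry-run --itemize-changes "$musicdir/" "$gigadir"
        # allow a 2-second timestamp slop (typical for FAT) so unchanged files are skipped
        rsync --archive --verbose --delete --modify-window=2 "$musicdir/" "$gigadir"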

    Read the article

  • Comparing values from a string in a MySQL query

    - by bellesebastien
    I'm having some trouble comparing values found in VARCHAR fields. I have a table with products and each product has a volume. I store the volume in a VARCHAR field and it's usually a single number (30, 40, 200, ...), but there are products that have multiple volumes, and their data is stored separated by semicolons, like so: 30;60;80. I know that storing multiple volumes like that is not recommended, but I have to work with it as it is. I'm trying to implement a search-by-volume function for the products, and I also want to display the products whose volume is greater than or equal to the one searched for. This is not a problem for the products that have a single volume, but it is a problem for the multiple-volume products. Maybe an example will make things clearer: let's say I have a product with this in its volume field: 30;40;70;80. If someone searches for a volume, let's say 50, I want that product to be displayed. To do this I was thinking of writing my own custom MySQL function (I've never done this before), but maybe someone can offer a different solution. I apologize for my poor English, but I hope I made my question clear. Thanks.

    Read the article

  • Getting entitlement warning while building an Ad Hoc Distribution Bundle for an iPhone App.

    - by nefsu
    I followed Apple's instructions on how to create an Ad Hoc Distribution bundle, but I keep getting what appears to be a fatal warning during the build process. As per the instructions, I set the signing identity to my distribution profile at the target (instead of the project), created my Entitlement.plist file and unchecked get-task-allow, linked this file to my target, and ran the build in distribution-for-device mode. When I do that, the build completes successfully, but only after giving the following warning:

        [WARN]CodeSign warning: entitlements are not applicable for product type 'Application' in SDK 'Device - iPhone OS 3.1.2'; ignoring...

    The last step in the build is the CodeSign step, and I've noticed that although it ran without errors, it's missing the --entitlement command line option that is given in the official Apple instruction guide. Here is my CodeSign line:

        /usr/bin/codesign -f -s "iPhone Distribution: My Name" --resource-rules=/Volumes/Data/projects/xcode/MyAppName/build/Distribution-iphoneos/MyAppName.app/ResourceRules.plist /Volumes/Data/projects/xcode/MyAppName/build/Distribution-iphoneos/MyAppName.app

    And here is Apple's screenshot of what's expected. Can someone please help me figure out if this is something I'm doing wrong? Much to my dismay, even the Apple dev forum has very little information on this CodeSign warning.
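    For comparison, a CodeSign invocation that does apply the entitlements file would pass it explicitly with --entitlements. This is only an illustrative sketch built from the command quoted above; the entitlements path is an assumption (it depends on where the file lives and on the target's Code Signing Entitlements build setting), not something taken from the question:

        /usr/bin/codesign -f -s "iPhone Distribution: My Name" \
            --entitlements /Volumes/Data/projects/xcode/MyAppName/Entitlement.plist \
            --resource-rules=/Volumes/Data/projects/xcode/MyAppName/build/Distribution-iphoneos/MyAppName.app/ResourceRules.plist \
            /Volumes/Data/projects/xcode/MyAppName/build/Distribution-iphoneos/MyAppName.app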

    Read the article

  • Send HTTPService request in Flex 3 with '-' in the URL parameters to get Google Feeds

    - by Goysar
    I am developing an application in Flex 3 which interacts with the Google feeds to produce my results. The URL to which I want to send the request is something like this: http://books.google.com/books/feeds/volumes?q=football+-soccer&start-index=11&max-results=10 I can send and receive results with the q parameter, but the next two parameters have a '-' in their names (start-index and max-results). I am using HTTPService to send the request like this, where SearchService is the HTTPService:

        SearchService.url = "http://books.google.com/books/feeds/volumes";
        SearchService.method = "GET";
        SearchService.contentType = "application/x-www-form-urlencoded";

        var params:Object = new Object();
        params.q = searchText;
        params.start-index = 11;
        params.max-results = 100;
        service.SearchService.send(params);

    My Flex IDE throws an error stating 'Cannot assign a non-reference value'. Only if I send the request with these parameters can I add pagination to my application. So how can I send an HTTPService request with '-' in the URL parameters?

    Read the article

  • Snow Leopard mounted directory changes permissions sporadically

    - by Galen
    I have a smb mounted directory: /Volumes/myshare This was mounted via Finder "Connect to Server..." with smb://myservername/myshare Everything good so far. However, when I try to access the directory via PHP (running under Apache), it fails with permission denied about 10% of the time. By this I mean that repeated accesses to my page sometimes result in a failure. My PHP page looks like: <?php $cmd = "ls -la /Volumes/ 2>&1"; exec($cmd, $execOut, $exitCode); echo "<PRE>EXIT CODE = $exitCode<BR/>"; foreach($execOut as $line) { echo "$line <BR/>"; } echo "</PRE>"; ?> When it succeeds it looks like: EXIT CODE = 0 total 40 drwxrwxrwt@ 4 root admin 136 Jun 14 12:34 . drwxrwxr-t 30 root admin 1088 Jun 4 13:09 .. drwx------ 1 galen staff 16384 Jun 14 09:28 myshare lrwxr-xr-x 1 root admin 1 Jun 11 16:05 galenhd - / When it fails it looks like: EXIT CODE = 1 ls: myshare: Permission denied total 8 drwxrwxrwt@ 4 root admin 136 Jun 14 12:34 . drwxrwxr-t 30 root admin 1088 Jun 4 13:09 .. lrwxr-xr-x 1 root admin 1 Jun 11 16:05 galenhd - / OTHER INFO: I'm working with the PHP (5.3.1), and Apache server that comes out of the box with Snow Leopard. Also, if I write a PHP script that loops and retries the "ls -la.." from the command-line, it doesn't seem to fail. Nothing is changing about the code and/or filesystem between succeeds and fails, so this appears to be a truly intermittent failure. This is driving me crazy. Anyone have any idea what might be going on? Thanks, Galen
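    One thing worth checking: shares mounted through the Finder are owned by the logged-in user with 0700 permissions (note the drwx------ galen entry in the listing above), so Apache's _www user cannot traverse them. A hedged sketch for confirming that and for mounting the share at a fixed path outside the Finder session; the server and share names are from the question, the user name is a placeholder:

        # who owns the mount point, and which user is Apache running as?
        ls -ld /Volumes/myshare
        ps aux | grep httpd | head
        # mount the share somewhere Apache can reach instead of relying on the Finder mount
        sudo mkdir -p /Volumes/myshare_www
        sudo mount_smbfs //myuser@myservername/myshare /Volumes/myshare_www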

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed. Using Coda 1.6.10. I'm writing PHP files and then previewing them from file, not from a server. This was working fine until I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like include_once "common/header.php"; without specifying the entire file path, like so: include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php"; where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work as the variables were not set. Is there any easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include path shows .: and when I include this line at the start of the script, it works: set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0'); However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the Unix include_path in my php.ini, it doesn't seem to work.
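    If editing php.ini never seems to take effect, it is often because the CLI and Apache load different ini files, or because Apache was not restarted after the change. A hedged set of checks for the stock PHP on Mac OS X 10.5:

        # which ini file does the command-line PHP actually load?
        php --ini
        # confirm the value PHP sees after the edit
        php -r 'echo get_include_path(), PHP_EOL;'
        # Apache's mod_php only reads php.ini at startup
        sudo apachectl restart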

    Read the article

  • Total Average Week using a Parameter

    - by Jose
    I have a crystal report that shows sales volumes called week to date volume. It shows current week, previous week, and average week. The report prompts for a date parameter and I extract the week number to get current week and previous week volumes. Did it this way because Mngmt wants to be able to run report whenever. My problem is for Average Week I cant figure out how to get the number of weeks to divide by for my average. Report originates from June 1st, 2010. Right now I have: DATEPART("ww", {?date}) - DATEPART("ww", DATE(2010, 6, 1)) This returns 2 right now which is perfect, so i divide my total by 2. This code will work until the end of the year then I'm hooped. Any idea how I can make this a little more dynamic. I was thinking a counter somehow, just can't get the logic down because the date parameter will keep changing, meaning I cant increase my counter by 1 after each week??? Cheers.

    Read the article

  • Can I set up a separate RAID 5 device?

    - by GregH
    I have a GIGABYTE GA-P55M-UD2 motherboard in my system (Windows 7 64 bit) and currently have a Western Digital SATA 10k RPM drive in it. Is it possible to add three more SATA drives to make a single RAID 5 device that is separate from the current 10k rpm drive that I have in it? Once I enable RAID for the SATA drives, I am not required to make all of the drives part of the RAID volume am I? I can have separate RAID'ed and non-RAID'ed volumes can't I?

    Read the article

  • ESXi 5.1 ghettoVCB stuck at Clone: 10% done

    - by stormdrain
    Trying to run ghettoVCB for the first time here. I am using a NAS that is set up as a datastore on the host. I did a dry run and it completed without error. The VM is ~500GB and there is only one on the host that I'm trying to backup. I proceeded to start the actual backup: ./ghettoVCB.sh -m vmname -g ghettoVCB.conf It goes though the config and looks like it's taking off: 2013-10-24 11:43:19 -- info: CONFIG - USING GLOBAL GHETTOVCB CONFIGURATION FILE = ghettoVCB.conf 2013-10-24 11:43:19 -- info: CONFIG - VERSION = 2013_01_11_0 2013-10-24 11:43:19 -- info: CONFIG - GHETTOVCB_PID = 17398616 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nas2tb-001/esxi4 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2013-10-24 11:43:19 -- info: CONFIG - VM_BACKUP_DIR_NAMING_CONVENTION = 2013-10-24_11-43-18 2013-10-24 11:43:19 -- info: CONFIG - DISK_BACKUP_FORMAT = thin 2013-10-24 11:43:19 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2013-10-24 11:43:19 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 4 2013-10-24 11:43:19 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2013-10-24 11:43:19 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2013-10-24 11:43:19 -- info: CONFIG - LOG_LEVEL = info 2013-10-24 11:43:19 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB-2013-10-24_11-43-18-17398616.log 2013-10-24 11:43:19 -- info: CONFIG - ENABLE_COMPRESSION = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2013-10-24 11:43:19 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2013-10-24 11:43:19 -- info: CONFIG - ALLOW_VMS_WITH_SNAPSHOTS_TO_BE_BACKEDUP = 0 2013-10-24 11:43:19 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all 2013-10-24 11:43:19 -- info: CONFIG - VM_SHUTDOWN_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - VM_STARTUP_ORDER = 2013-10-24 11:43:19 -- info: CONFIG - EMAIL_LOG = 0 2013-10-24 11:43:19 -- info: 2013-10-24 11:43:22 -- info: Initiate backup for vmname 2013-10-24 11:43:22 -- info: Creating Snapshot "ghettoVCB-snapshot-2013-10-24" for serv2 Destination disk format: VMFS thin-provisioned Cloning disk '/vmfs/volumes/esxi4-storage/vmname/vmname_1.vmdk'... Clone: 10% done. and it's been that way for over an hour now. Stuck at Clone: 10% done.. Thing is: I can see the vmdk on the NAS. And it looks like almost the whole thing is there. On the NAS it's showing ~430GB but on vSphere Client Summary is shows as 507GB. I don't see the vmdk on the NAS growing any more. The logfile mimics some of the above and is sitting at "Creating Snapshot..." and nothing else is coming in. Is the vmdk on the NAS showing all those GB because of the provisioning or something? i.e. is the size of the file not necessarily indicative of the amount of actual data that has been copied? Is there are reason it might be "Stuck" at 10%? i.e. could it really be taking this long? Any other tips? Thanks. Edit: as soon as I hit the Submit button, I glance over to see that it has incremented to 11% done. Good to know it'll be complete sometime around when the sun explodes.
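    From a second SSH session to the host, a couple of hedged checks can show whether the clone is actually progressing. The datastore path comes from the configuration output above, and the sleep interval is arbitrary. Note that on an NFS datastore a thin-provisioned vmdk is typically sparse, so ls can report the full provisioned size while du shows how much data has really been written so far:

        # ESXi's busybox has no watch, so loop by hand
        while true; do du -sh /vmfs/volumes/nas2tb-001/esxi4/*/; sleep 60; done
        # look for NFS or latency complaints while the copy runs
        tail -f /var/log/vmkernel.log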

    Read the article

  • Disk2vhd and partitions question

    - by dotnetdev
    Hi, I am using Disk2vhd to make a VHD of my drive. Does the tool support partitions? The partitions of my C:\ drive are listed both as drives and as volumes. In short, is there any issue if I select both partitions that make up one physical drive? Thanks

    Read the article

  • ZFS/Btrfs/LVM2-like storage with advanced features on Linux?

    - by Easter Sunshine
    I have 3 identical internal 7200 RPM SATA hard disk drives on a Linux machine. I'm looking for a storage set-up that will give me all of this: Different data sets (filesystems or subtrees) can have different RAID levels so I can choose performance, space overhead, and risk trade-offs differently for different data sets while having a few number of physical disks (very important data can be 3xRAID1, important data can be 3xRAID5, unimportant reproducible data can be 3xRAID0). If each data set has an explicit size or size limit, then the ability to grow and shrink the size limit (offline if need be) Avoid out-of-kernel modules R/W or read-only COW snapshots. If it's a block-level snapshots, the filesystem should be synced and quiesced during a snapshot. Ability to add physical disks and then grow/redistribute RAID1, RAID5, and RAID0 volumes to take advantage of the new spindle and make sure no spindle is hotter than the rest (e.g., in NetApp, growing a RAID-DP raid group by a few disks will not balance the I/O across them without an explicit redistribution) Not required but nice-to-haves: Transparent compression, per-file or subtree. Even better if, like NetApps, analyzes the data first for compressibility and only compresses compressible data Deduplication that doesn't have huge performance penalties or require obscene amounts of memory (NetApp does scheduled deduplication on weekends, which is good) Resistance to silent data corruption like ZFS (this is not required because I have never seen ZFS report any data corruption on these specific disks) Storage tiering, either automatic (based on caching rules) or user-defined rules (yes, I have all-identical disks now but this will let me add a read/write SSD cache in the future). If it's user-defined rules, these rules should have the ability to promote to SSD on a file level and not a block level. Space-efficient packing of small files I tried ZFS on Linux but the limitations were: Upgrading is additional work because the package is in an external repository and is tied to specific kernel versions; it is not integrated with the package manager Write IOPS does not scale with number of devices in a raidz vdev. Cannot add disks to raidz vdevs Cannot have select data on RAID0 to reduce overhead and improve performance without additional physical disks or giving ZFS a single partition of the disks ext4 on LVM2 looks like an option except I can't tell whether I can shrink, extend, and redistribute onto new spindles RAID-type logical volumes (of course, I can experiment with LVM on a bunch of files). As far as I can tell, it doesn't have any of the nice-to-haves so I was wondering if there is something better out there. I did look at LVM dangers and caveats but then again, no system is perfect.
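    On the first requirement (different RAID levels for different data sets on a small, shared set of disks), newer LVM2 can set the RAID level per logical volume inside one volume group, which sidesteps the raidz limitations listed above. A hedged sketch with made-up names and sizes, assuming an lvm2 build recent enough to support the MD-backed raid segment types:

        # one volume group across all three disks
        pvcreate /dev/sda /dev/sdb /dev/sdc
        vgcreate vg0 /dev/sda /dev/sdb /dev/sdc
        # per-LV redundancy: 3-way mirror, RAID5, and plain striping
        lvcreate --type raid1 -m 2 -L 50G -n critical vg0
        lvcreate --type raid5 -i 2 -L 200G -n important vg0
        lvcreate --type striped -i 3 -L 100G -n scratch vg0
        # grow a volume and its filesystem later
        lvextend -r -L +50G vg0/important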

    Read the article

  • Booting Fedora guest VBox on /dev/mapper/vg0-fc17-root

    - by NevilleDNZ
    I already have the following logical volumes: host:/dev/mapper/vg0-fc17-boot (guestOS:/dev/hdb) formatted as ext4 (no partition table) host:/dev/mapper/vg0-fc17-root (guestOS:/dev/hdc) formatted as ext4 (no partition table) Do I have to create the following grub partition to boot a guest VM under VirtualBox? host:/dev/mapper/vg-fc17-mbr (guestOS:/dev/hda) with a partition table and install grub MBR here? Or is there a better way? (Maybe grub on vg0-fc17-boot?)
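    However the boot-loader question is settled, the logical volumes still have to be handed to VirtualBox somehow, and the usual mechanism is a raw-disk VMDK wrapper per LV. A hedged sketch; the VMDK file names and VM name are arbitrary, and the commands need to run as a user allowed to read the LV device nodes:

        # wrap each logical volume in a VMDK that VirtualBox can attach
        VBoxManage internalcommands createrawvmdk \
            -filename ~/VirtualBox/fc17-boot.vmdk -rawdisk /dev/mapper/vg0-fc17-boot
        VBoxManage internalcommands createrawvmdk \
            -filename ~/VirtualBox/fc17-root.vmdk -rawdisk /dev/mapper/vg0-fc17-root
        # attach one of them to the guest, e.g.:
        VBoxManage storageattach fc17 --storagectl "IDE" --port 0 --device 1 \
            --type hdd --medium ~/VirtualBox/fc17-boot.vmdk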

    Read the article

  • Bacula backup on Linux, restore on FreeBSD

    - by Martin Orda
    I'd like to migrate my storage server from Linux (Debian) to FreeBSD. Biggest part of this is restoring backups saved on Linux to tape volumes on the newly installed FreeBSD. Unfortunately I don't have anywhere to test this on at the moment but I'd like to ask you if you're aware of any incompatiblities or Bacula specific settings that should be used to make this as smooth of a process as possible.
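    Bacula's volume format is intended to be machine-independent, so the usual gotchas are mismatched Bacula versions and the catalog rather than Linux vs. FreeBSD. A hedged sanity check before committing is to see whether the FreeBSD-side tools can read an existing volume at all; the config path, volume name, and device name below are placeholders:

        # list the contents of a volume using the FreeBSD storage daemon's config
        bls -c /usr/local/etc/bacula/bacula-sd.conf -V Full-0001 FileStorage
        # bscan(8) can rebuild the catalog from the volumes if it is not migrated directly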

    Read the article
