Search Results

Search found 5528 results on 222 pages for 'offsite storage'.

  • Bit/Byte addressing - Little/Big-endian

    - by code8230
    Consider the 16-bit data packet below, which is sent through the network in network byte order, i.e. big-endian:

        Byte num:  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
        Value:    34 67 89 45 90 AB FF 23 65 37 56 C6 56 B7 00 00

    Let's say 89 45 is a 16-bit value; all the others are 8-bit data bytes. On my system, which is little-endian, how would the data be received and stored? Say we are configured to receive 8 bytes at a time: RxBuff is the Rx buffer where data is received, and Buff is the storage buffer where data is stored. Please point out which case is correct for data storage after reading 8 bytes at a time:

        1) Buff[] = {0x34, 0x67, 0x45, 0x89, 0x90, 0xAB, ... 0x00};
        2) Buff[] = {0x00, 0x00, ... 0x67, 0x89, 0x45, 0x34};

    Would the whole 16 bytes of data be reversed, or only the 2-byte value contained in this packet?
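
    For reference: neither case reverses the whole packet. The bytes arrive and are stored in wire order - Buff[2] is still 0x89 and Buff[3] is still 0x45 - and endianness only matters when two bytes are interpreted as one 16-bit integer. A minimal C sketch of that distinction, assuming a POSIX host:

        #include <stdio.h>
        #include <string.h>
        #include <stdint.h>
        #include <arpa/inet.h> /* ntohs() */

        int main(void)
        {
            /* The 16 bytes exactly as they arrive off the wire. */
            uint8_t buff[16] = {0x34, 0x67, 0x89, 0x45, 0x90, 0xAB, 0xFF, 0x23,
                                0x65, 0x37, 0x56, 0xC6, 0x56, 0xB7, 0x00, 0x00};

            uint16_t raw;
            memcpy(&raw, &buff[2], sizeof raw); /* a little-endian host reads 0x4589 */
            uint16_t value = ntohs(raw);        /* 0x8945 regardless of host order */

            printf("raw = 0x%04X, value = 0x%04X\n", raw, value);
            return 0;
        }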

  • Several C# Language Questions

    - by Water Cooler v2
    1) What is int? Is it any different from the struct System.Int32? I understand that the former is a C# alias (typedef or #define equivalent) for the CLR type System.Int32. Is this understanding correct?

    2) When we say:

        IComparable x = 10;

    is that like saying:

        IComparable x = new System.Int32();

    But we can't new a struct, right? Or, in C-like syntax:

        struct System.Int32 *x; x->someThing = 10;

    3) What is String, with a capitalized S? I see in Reflector that it is the sealed String class, which, of course, is a reference type, unlike the System.Int32 above, which is a value type. What is string, with an uncapitalized s, though? Is that also the C# alias for this class? Why can I not see the alias definitions in Reflector?

    4) Try to follow me down this subtle train of thought, if you please. We know that a storage location of a particular type can only access properties and members on its interface. That means:

        Person p = new Customer();
        p.Name = "Water Cooler v2"; // legal, as Name is defined on Person

        // illegal without an explicit cast: even though the backing store
        // is a Customer, the storage location is of type Person, which
        // doesn't support the member/method being accessed/called.
        p.GetTotalValueOfOrdersMade();

    Now, with that inference, consider this scenario:

        int i = 10;
        // System.Object defines no member to store an integer value, or any
        // other value, in. So my question really is: when the integer is
        // boxed, what is the *type* it is actually boxed to? In other words,
        // what is the type that forms the backing store on the heap for
        // this operation?
        object x = i;

    Update: Thank you for your answers, Eric Gunnerson and Aaronought. I'm afraid I haven't been able to articulate my questions well enough to attract very satisfying answers. The trouble is, I do know the answers to my questions on the surface, and I am by no means a newbie programmer. But I have to admit, a deeper understanding of the intricacies of how a language and its underlying platform/runtime handle the storage of types has eluded me for as long as I've been a programmer, even though I write correct code.
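
    A short C# sketch bearing on questions 1, 3 and 4: the keywords are compile-time aliases (which is why Reflector shows no alias definitions - none exist in the IL), and the heap object created by boxing carries the runtime type System.Int32 itself; there is no separate visible "box type":

        using System;

        class BoxingDemo
        {
            static void Main()
            {
                // 'int' and 'string' are compile-time aliases; no alias
                // survives into the compiled IL.
                Console.WriteLine(typeof(int) == typeof(Int32));     // True
                Console.WriteLine(typeof(string) == typeof(String)); // True

                int i = 10;
                object x = i; // boxing: the value is copied into a heap-allocated box

                // The box's type handle is System.Int32 itself.
                Console.WriteLine(x.GetType());             // System.Int32
                Console.WriteLine(x.GetType().IsValueType); // True
            }
        }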

  • How to Create a Virtual Windows Drive

    - by HyLian
    Hello, I'm trying to create a Windows virtual drive (like C:\) to map a remote storage. The main purpose is to make it transparent to the user, so the user wouldn't know that he is writing to/reading from another site. I was searching for available products, and I found that FUSE is not an option on Windows, and that WebDAV maps the drive directly, whereas I would like to build a middle layer between Windows and the remote storage to implement some kinds of services. Other alternatives exist, such as Dokan, which is very expensive, and the System.IO.IsolatedStorage namespace, which doesn't seem to explicitly create a new Windows drive. Pismo (http://www.pismotechnic.com/) is probably the thing that most closely matches my requirements, but I would like to know if there is another alternative, including a native Windows (C++ or .NET) API to do that. Thanks for reading :)
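
    One native building block worth knowing about: the Win32 DefineDosDevice API - the same mechanism the subst command uses - can surface a directory as a drive letter. It only aliases a path, so a true middle layer that intercepts reads/writes still needs a file-system driver such as Dokan or Pismo. A minimal C sketch, with the target path as a placeholder:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Map Z: onto a local folder (placeholder path); a middle layer
               could keep this folder in sync with the remote storage. */
            if (!DefineDosDeviceA(0, "Z:", "C:\\RemoteCache")) {
                fprintf(stderr, "DefineDosDevice failed: %lu\n", GetLastError());
                return 1;
            }
            printf("Z: now points at C:\\RemoteCache\n");

            /* Remove the mapping again when done. */
            DefineDosDeviceA(DDD_REMOVE_DEFINITION, "Z:", "C:\\RemoteCache");
            return 0;
        }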

  • C: stdin and std* errs

    - by user355926
    I want to manipulate stdin, and then the other std* streams, but I get some errors:

        $ gcc testFd.c
        testFd.c:9: error: initializer element is not constant
        testFd.c:9: warning: data definition has no type or storage class
        testFd.c:10: error: redefinition of `fd'
        testFd.c:9: error: `fd' previously defined here
        testFd.c:10: error: `mode' undeclared here (not in a function)
        testFd.c:10: error: initializer element is not constant
        testFd.c:10: warning: data definition has no type or storage class
        testFd.c:12: error: syntax error before string constant

        $ cat testFd.c
        #include <stdio.h>
        #include <sys/ioctl.h>

        int STDIN_FILENO = 1; // I want to access typed
        // Shell commands, dunno about the value:
        unsigned long F_DUPFD;
        fd = fcntl(STDIN_FILENO, F_DUPFD, 0);
        fd = open("/dev/fd/0", mode);
        printf("STDIN = %s", fd);
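
    For reference, a corrected sketch: the statements must live inside a function, STDIN_FILENO and F_DUPFD already come from <unistd.h> and <fcntl.h> (and stdin is descriptor 0, not 1), open() needs a real flag for its second argument, and a file descriptor prints with %d, not %s:

        #include <stdio.h>
        #include <unistd.h> /* STDIN_FILENO */
        #include <fcntl.h>  /* fcntl(), F_DUPFD, open(), O_RDONLY */

        int main(void)
        {
            /* Duplicate stdin onto the lowest free descriptor >= 0. */
            int fd = fcntl(STDIN_FILENO, F_DUPFD, 0);
            if (fd == -1) {
                perror("fcntl");
                return 1;
            }
            printf("duplicated stdin is fd %d\n", fd);

            /* Alternatively, reopen stdin through /dev/fd/0. */
            int fd2 = open("/dev/fd/0", O_RDONLY);
            if (fd2 != -1)
                printf("reopened stdin is fd %d\n", fd2);
            return 0;
        }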

  • Is there a workaround for the 1,000-file limit in a directory on Windows Mobile 5?

    - by nateday76
    I need to download more than 1,000 files into a Windows Mobile 5 directory located on the storage card. If I copy the files onto the storage card via my desktop, there is no problem. But when I try to download the files from the handheld device, I get a disk-full error due to the 1,000-file limit, even though there is plenty of room. Has anyone run into this and found a workaround? I'm going to try zipping all of the files and then decompressing on the device, but I'm not sure that this will work.
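
    One workaround sketch, assuming the limit is per directory: spread the downloads across subfolders so that no single folder crosses the threshold. A hypothetical .NET Compact Framework fragment (paths and bucket size are placeholders):

        using System.IO;

        static class BucketedStore
        {
            // Return a path for file number 'index', placing at most
            // 'perBucket' files in any one subdirectory.
            public static string PathFor(string root, int index, int perBucket)
            {
                string bucket = Path.Combine(root, "part" + (index / perBucket));
                if (!Directory.Exists(bucket))
                    Directory.CreateDirectory(bucket);
                return Path.Combine(bucket, "file" + index + ".dat");
            }
        }

        // e.g. PathFor(@"\Storage Card\Backup", 1234, 500)
        //      -> \Storage Card\Backup\part2\file1234.dat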

  • Copying to /system - Android

    - by user1675783
    I have been trying to copy an APK from the assets of another APK to /system. Here is what I have done; it was working in my previous app, but not in this one. I have added the permission for writing to external storage. It successfully copies to internal storage, but not to /system. Is there any way to copy directly to /system?

        copyStream("y.apk", "/sdcard/x.apk");
        Process mSuProcess;
        mSuProcess = Runtime.getRuntime().exec("su");
        new DataOutputStream(mSuProcess.getOutputStream()).writeBytes("mount -o remount rw /system");
        DataOutputStream mSuDataOutputStream = new DataOutputStream(mSuProcess.getOutputStream());
        mSuDataOutputStream.writeBytes("cp /sdcard/x.apk /system/app/x.apk");
        mSuDataOutputStream.writeBytes("exit\n");
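
    A hedged rework of the snippet above: every command written to the su shell needs a trailing "\n" (the mount line above is never executed without one), one stream should be reused and flushed, and the remount options are comma-separated on most builds. A rooted device is assumed; without root, /system cannot be written to at all:

        import java.io.DataOutputStream;
        import java.io.IOException;

        public class SystemCopy {
            /** Copy /sdcard/x.apk into /system/app through a root shell. */
            public static void copyToSystem() throws IOException, InterruptedException {
                Process su = Runtime.getRuntime().exec("su");
                DataOutputStream os = new DataOutputStream(su.getOutputStream());
                os.writeBytes("mount -o remount,rw /system\n");
                os.writeBytes("cp /sdcard/x.apk /system/app/x.apk\n");
                os.writeBytes("mount -o remount,ro /system\n"); // restore read-only
                os.writeBytes("exit\n");
                os.flush();
                su.waitFor(); // wait for the shell to finish
            }
        }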

  • What's causing Remote Access error 807 using rasdial.exe to connect to a PPTP VPN?

    - by Dylan Beattie
    I'm using rasdial.exe to connect an offsite server to our VPN. The remote box is a Windows 2008 x64 server; the VPN host at this end is a WatchGuard Firebox X750e running Fireware 10.2. It connects fine about 20-30% of the time. The rest of the time I get:

        Remote Access error 807 - The network connection between your computer and
        the VPN server was interrupted. This can be caused by a problem in the VPN
        transmission and is commonly the result of internet latency or simply that
        your VPN server has reached capacity. Please try to reconnect to the VPN
        server. If this problem persists, contact the VPN administrator and analyze
        quality of network connectivity.
        For more help on this error:
        Type 'hh netcfg.chm'
        In help, click Troubleshooting, then Error Messages, then 807

    The VPN isn't full, and it's 100Mb dedicated fibre on both ends, so I can't believe it's a connectivity issue - especially since I'm RDP'ed into the remote box whilst trying to do this! Any bright ideas as to what might be causing the problem? Thanks, Dylan

  • Cisco ASA 5510 ASDM: Setting up multiple public static IP addresses on a single interface and routing

    - by ssjaken
    Hi, I have a Cisco ASA 5510 using ASDM version 6.3. We have a webserver that has been written very specifically, and I was given super-direct "DO NOT DEVIATE" directions. This server has to get traffic on 4 different ports from 3 different public IPs that we own (our ISP gave us a block of 12 static addresses). Here are the directions I was given:

        externalIP1:22  -> 172.17.5.50:22    - SSH
        externalIP1:443 -> 172.17.5.50:23040 - SIT
        externalIP2:443 -> 172.17.5.50:33040 - STAGE
        externalIP3:443 -> 172.17.5.50:43040 - PROD

    My first question is: using ASDM (my contract employer demands I use ASDM over the CLI), how do I get three public addresses to work on one interface? We are authenticating via PPPoE. I know to create a virtual interface with the static address, but when I do, I cannot ping the address from another offsite machine. Secondly, where would I put the traffic redirect in? Should I go ahead and create ACLs, or just make NAT rules? Thanks.
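
    For reference, the CLI that ASDM's NAT and ACL panes generate would look roughly like the sketch below, assuming ASA 8.3-style object NAT (ASDM 6.3 pairs with ASA 8.3, where ACLs reference the real, untranslated address). The external IPs are placeholders:

        object network web-ssh
         host 172.17.5.50
         nat (inside,outside) static <externalIP1> service tcp 22 22
        object network web-sit
         host 172.17.5.50
         nat (inside,outside) static <externalIP1> service tcp 23040 443
        object network web-stage
         host 172.17.5.50
         nat (inside,outside) static <externalIP2> service tcp 33040 443
        object network web-prod
         host 172.17.5.50
         nat (inside,outside) static <externalIP3> service tcp 43040 443
        !
        access-list outside_in extended permit tcp any host 172.17.5.50 eq 22
        access-list outside_in extended permit tcp any host 172.17.5.50 eq 23040
        access-list outside_in extended permit tcp any host 172.17.5.50 eq 33040
        access-list outside_in extended permit tcp any host 172.17.5.50 eq 43040
        access-group outside_in in interface outside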

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach?

    We currently have a single live server, WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server, WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMware ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites"; a few have moderate traffic, but none are massive.

    We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers; it will detect when a server goes down and send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing.

    But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us, so if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach:

    - We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well.
    - We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share.
    - Sites will no longer be able to access their content if there's ever an Active Directory problem.
    - In general, it just seems a lot more complicated, with more moving parts that could break.

    Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMware environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared-content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale.

    I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
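
    On the mechanics of repointing 50-60 sites at a share, a hedged PowerShell sketch using the WebAdministration module; the UNC path and account are placeholders. IIS can also store connect-as credentials on a site or virtual directory, so the application pool identity would not have to change:

        Import-Module WebAdministration

        $unc = '\\storage\webcontent'   # placeholder UNC share

        # Repoint every site's root application at a per-site folder on the share.
        Get-Website | ForEach-Object {
            $newPath = Join-Path $unc $_.Name
            Set-ItemProperty "IIS:\Sites\$($_.Name)" -Name physicalPath -Value $newPath
        }

        # A site or virtual directory can carry its own credentials for the share:
        # Set-ItemProperty 'IIS:\Sites\Example' -Name userName -Value 'DOMAIN\webcontent'
        # Set-ItemProperty 'IIS:\Sites\Example' -Name password -Value '...'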

  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically, it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64-bit on a Linode.com server, with the latest versions of MySQL and PHP installed. I've edited the my.cnf file, commented out "skip-innodb", and set InnoDB to be the default storage engine using:

        default-storage-engine = innodb

    and then restarted MySQL. But when I do SHOW ENGINES, MyISAM still comes up as the default engine, and none of the InnoDB settings I've added to the my.cnf file are being read. For example, I have this in my.cnf:

        innodb_buffer_pool_size = 4G

    but in phpMyAdmin, InnoDB shows a buffer pool size of 8,192 KiB. Similarly, I have this in the my.cnf:

        innodb_data_file_path = ibdata1:500M:autoextend

    but it reads as ibdata1:10M:autoextend. It doesn't look like the MyISAM info is being read from the my.cnf file either: the my.cnf file has skip-external-locking commented out, but it shows as "on" in phpMyAdmin. So - yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works; I'm running a Drupal site on it and it seems to operate fine. So MySQL seems to be drawing default settings from... some mysterious secret location. Any idea how I can make MySQL see and use this my.cnf file?

    Actually, wait - it looks like it may be being read after all. I checked the error log and found this:

        101128  4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr
        from the internal data dictionary of InnoDB though the .frm file for the table
        exists. Maybe you have deleted and recreated InnoDB data files but have
        forgotten to delete the corresponding .frm files of InnoDB tables, or you have
        moved .frm files to another database? or, the table contains indexes that this
        version of the engine doesn't support. See
        http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html
        how you can resolve the problem.
        InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
        InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file:
        InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages!
        InnoDB: Could not open or create data files.
        InnoDB: If you tried to add new data files, and it failed here,
        InnoDB: you should now edit innodb_data_file_path in my.cnf back
        InnoDB: to what it was, and remove the new ibdata files InnoDB created
        InnoDB: in this failed attempt. InnoDB only wrote those files full of
        InnoDB: zeros, but did not yet use them in any way. But be careful: do not
        InnoDB: remove old data files which contain your precious data!
        101128  4:28:52 [ERROR] Plugin 'InnoDB' init function returned error.
        101128  4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        101128  4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50'
        101128  4:28:52 [ERROR] Aborting
        101128  4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
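
    The log actually pins down two concrete failures: the misspelled variable innodb_lock_wait_timout makes mysqld abort reading the new settings, and the enlarged innodb_data_file_path no longer matches the ibdata1 file already on disk. A hedged corrected fragment, keeping the existing 10M first segment:

        [mysqld]
        default-storage-engine = innodb
        innodb_buffer_pool_size = 4G
        # correct spelling: "timeout", not "timout"
        innodb_lock_wait_timeout = 50
        # the first segment must match the size of the existing ibdata1 file;
        # InnoDB refuses to start if it differs
        innodb_data_file_path = ibdata1:10M:autoextend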

  • Juniper Networks SRX240 as an office router?

    - by Jordan Mendelson
    We're a small (7-person), fast-growing startup who just got our new office, and we're having a 100 Mbps line installed from Cogent. I'm not familiar with Juniper devices, and the equivalent Cisco appears to be rather expensive. Features we'd like:

    - Offsite VPN access (PPTP or L2TP/IPsec) - something Mac-compatible
    - IPv6 support
    - NAT - ideally supporting multiple outside addresses mapped to VLANs
    - DHCP
    - DNS forwarding would be nice
    - QoS to keep our SIP phones happy (managed through RingCentral)
    - VLANs for guest/internal

    The device is going to be connected to a set of SIP phones as well as two Ruckus 7962s for wireless access. Eventually I'd like to connect it to a Juniper EX switch as we grow. Would a Juniper SRX240 handle this OK?

  • ESXi guests not using available CPU resources

    - by Alan M
    I have a VMware ESXi 5.0.0 host (it's a bit old, I know) with three guest VMs on it. For reasons unknown, the guests will not use much of the available CPU resources. I have all three guests in a single pool, with the guests all configured to use the same number of resource shares, so they're basically 33% each. The three guests are configured essentially identically as far as their VM resources go. So the problem is: even when the guests are performing what should be very busy activities, such as at bootup, the actual host CPU consumed is something tiny, like 33 MHz, as seen via the vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings - cranking up the reservation, etc. No matter: the guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions?

    Update after reading various comments below: Per the suggestions below, I did remove the guests from the resource pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to run a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (the guest is a Windows 2008 R2 server). Host graphs for CPU, memory, and disk are basically flatlining, showing very little demand. The same is true for the guest stats: while the guest itself seems to be crawling, the guest resource graphs show very little activity across CPU, memory, and disk. The host is a Dell PowerEdge 2900 with 2 physical CPUs and 20 GB RAM (it's a test/dev environment using surplus gear).

    - Guest 1: VM version 7, 2 vCPUs, 4 GB RAM, 140 GB storage on a RAID-5 array on the host.
    - Guest 2: VM version 7, 2 vCPUs, 4 GB RAM, 140 GB storage on a RAID-5 array on the host.
    - Guest 3: VM version 7, 1 vCPU, 2 GB RAM, 2 TB storage on a RAID-5 iSCSI NAS box.

    Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host will supply the guest with more CPU (and memory, and disk) on demand.

    Another update: After checking the stats, it would appear that the host is indeed not busy at all, and neither is the guest. I believe I have a good idea of the issue, though: a messed-up VMware Tools install. The guest has VMware Tools on it, but the host says it does not. VMware Tools refuses to uninstall, refuses to be upgraded, and refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigating. I do not know the origin of the guest itself, nor the specifics of the original VMware Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to do so since so many folks answered. My suspicion right now is that the problem truly is the guest: the guest is not making a demand on the host, and as a natural result, the host is treating the guest accordingly.

    My final update: I am 99% certain the guest VM had something fundamentally wrong with it regarding VMware Tools. I created a clone of a different VM with a near-identical OS config, but a properly working install of VMware Tools. That guest runs just great, and takes up its allotment of resources when it needs to; e.g. it eats up about 850 MHz of CPU during startup, then ticks down to idle once the guest OS is stable.

  • Disk redundancy across different servers

    - by Mascarpone
    I have 3 servers, all with the same specs:

    - Intel CPU
    - 8 GB RAM
    - Linux or BSD
    - A single 2 TB desktop SATA drive with more than 10K hours of operation, and less than 300 GB used

    My provider cannot install a second hard drive, but can guarantee me that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to guarantee some sort of redundancy; since bandwidth inside the DC is free, it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
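
    A hedged sketch of that scheme as an hourly cron job on server1; hostnames and paths are placeholders, and the same script runs on the other two servers with the names rotated:

        #!/bin/sh
        # /data           - this machine's live data
        # /mirrors/<host> - copies of the other machines' data
        rsync -a --delete /data/ server2:/mirrors/server1/
        rsync -a --delete /data/ server3:/mirrors/server1/

        # crontab entry to run it at the top of every hour:
        # 0 * * * * /usr/local/bin/push-mirrors.sh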

  • How do I set up failover for a single web server using two ISPs?

    - by Travis
    I have one web server and two WAN connections (1 cable, 1 DSL). DNS is run offsite, and points to the IP address assigned by one of the ISPs. How can I have the second connection take over when the primary one fails? I have seen that it is possible to have two A records, each pointing to a different IP, but it has several problems. What's the real solution to this? I imagine this is a very common issue. Thanks in advance!

  • Can I set up a linked SQL Server connection between servers on different networks?

    - by Glenn Slaven
    We have a production SQL Server hosted offsite at a hosting company, and we have a staging environment within our own network. We want to be able to set up a SQL job that copies content from a table on the staging server to prod on a regular basis, and I think we need to set up a linked server connection to do this. What do I need to get the hosting company to do to allow us to set this up? We have RDP access to the production servers; I just need to know what network and security configuration needs to happen from the hosting company's perspective, so I can ask them to do it.
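
    For reference, a minimal T-SQL sketch of the linked-server side, run on the staging server; the alias, hostname, and login are placeholders, and the hosting company would still need to open the SQL port from your network to the production box:

        EXEC sp_addlinkedserver
            @server     = N'PRODLINK',              -- local alias, placeholder
            @srvproduct = N'',
            @provider   = N'SQLNCLI',
            @datasrc    = N'prod.example.com,1433'; -- host,port the hoster exposes

        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'PRODLINK',
            @useself     = N'False',
            @locallogin  = NULL,
            @rmtuser     = N'stage_sync',           -- SQL login created on prod
            @rmtpassword = N'...';

        -- The copy job can then target the prod table directly:
        -- INSERT INTO PRODLINK.ProdDb.dbo.Content SELECT * FROM StageDb.dbo.Content;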

  • Online backup solution

    - by Petah
    I am looking for a backup solution to back up all my data (about 3-4 TB). I have looked at many services out there, such as:

    http://www.backblaze.com/
    http://www.crashplan.com/

    Those services look very good, and are a reasonable price, but I am worried about them because of incidents like this: http://jeffreydonenfeld.com/blog/2011/12/crashplan-online-backup-lost-my-entire-backup-archive/

    I am wondering if there is any online backup solution that offers a service level agreement (SLA) with compensation for data loss at a reasonable price (under $30 per month). Or is there a good solution that offers a high enough level of redundancy to mitigate the risk? Required:

    - Offsite backup to prevent data loss in the event of fire/theft.
    - Redundancy to protect the backup from corruption.
    - A reasonable cost (< $30 per month).
    - An SLA in case the service provider defaults on its agreements.

  • Auto-copy newest folders and images in the My Pictures folder to a USB drive

    - by TVersmet
    I drop lots of photos in a day into the My Documents/My Pictures folder, and I archive at the end of each day to USB drives, to be stored offsite. What I would like is some way to automate the process - for example, a small app or script I can simply double-click that will scan the My Pictures folder for the newest folders and images and copy them to the USB drive until the drive is full. It doesn't matter if I get redundancy from the previous days' saved images, so long as the newest images and folders are always the first to get copied. Well, that's my request. Thanks for reading.
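
    A hedged, double-clickable batch sketch using robocopy (built into Vista/7; available for XP in the Resource Kit). The drive letter and source path are placeholders; it won't stop exactly when the drive fills, but the age filter keeps each run small and newest-first:

        @echo off
        rem archive-photos.bat - copy the newest pictures to the USB drive
        set SRC=%USERPROFILE%\My Documents\My Pictures
        set DST=E:\PhotoArchive

        rem /E         copy subfolders, including empty ones
        rem /MAXAGE:2  only copy files modified within the last 2 days
        rem /XO        skip files the destination already has a same-or-newer copy of
        robocopy "%SRC%" "%DST%" /E /MAXAGE:2 /XO
        pause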

  • Rsync backup - detect new directory and backup only from that directory

    - by Pracovek
    The new cPanel daily backup creates a separate directory for each daily backup. This creates a problem when I try to use rsync to do an offsite backup, since I would like to rsync only the latest data. E.g., on the backup server I have a directory "backup", and on the server from which we are pulling backups I get directories 2013-11-07, 2013-11-08, etc. in the backup directory. If I back up the whole /backup directory on the server, it will use a lot more space, so I would like to back up only the latest directory in the backup directory, e.g. 2013-11-08. Is there a way to detect the latest directory in the backup directory and pass that directory name to rsync for backup?
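
    A hedged shell sketch: because the directories are named YYYY-MM-DD, a plain lexical sort puts the latest one last. Hostname and paths are placeholders:

        #!/bin/sh
        # Pull only the newest dated directory from the remote backup dir.
        REMOTE=user@server
        LATEST=$(ssh "$REMOTE" 'ls -1 /backup | sort | tail -n 1')
        rsync -a --delete "$REMOTE:/backup/$LATEST/" "/offsite/backup/$LATEST/"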

  • HDD Carrier, like a soda carrier available at McDonalds?

    - by Jason Taylor
    We use external USB drives for backups, and they have to be stored offsite at the end of the week. Right now we have your standard external USB drive inside an enclosure. We were thinking about moving to a USB dock, and dock a bare HDD for backups, rather than having various sized and types of enclosures. If we were to do this, the drives need protection while being transported to/from the safety deposit box. Is there any kind of hard drive carrier that would let us slide two drives into it, and it would provide protection while the drives are carried around by non-technical people? I'm afraid such a product doesn't exist, but perhaps someone knows of something?

  • Password expiration notice for Active Directory

    - by keithosu
    Are there any tools/apps/scripts out there that will do password-expiry notification for Windows 2008 Active Directory credentials? This is needed for our web apps that use Active Directory for LDAP authentication; the problem is that those apps do not notify you on login that your password is going to expire. We have many offsite users who do not have machines bound to the AD, so there is no way to let them know to reset their password. I'd like the user to be notified 30, 7, and 1 days before it expires. I'd also like our help desk to get an email listing the expiring passwords for the week and recently expired passwords. I've looked at oldcmp.exe from link text, and that gets me my reports, but it does not do the automation that I'm looking for on the individual users.
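
    A hedged PowerShell sketch of the user-notification half, assuming the RSAT ActiveDirectory module is available on a machine that can reach the domain; the SMTP host and addresses are placeholders, and msDS-UserPasswordExpiryTimeComputed is the attribute AD derives from the password policy:

        Import-Module ActiveDirectory

        $notifyDays = 30, 7, 1
        $smtp = 'mail.example.com'   # placeholder

        $users = Get-ADUser -Filter {Enabled -eq $true -and PasswordNeverExpires -eq $false} `
            -Properties mail, 'msDS-UserPasswordExpiryTimeComputed'

        foreach ($u in $users) {
            $expires  = [datetime]::FromFileTime($u.'msDS-UserPasswordExpiryTimeComputed')
            $daysLeft = [int]($expires - (Get-Date)).TotalDays
            if (($notifyDays -contains $daysLeft) -and $u.mail) {
                Send-MailMessage -SmtpServer $smtp -From 'helpdesk@example.com' `
                    -To $u.mail -Subject "Your password expires in $daysLeft day(s)" `
                    -Body "Please change your Active Directory password before $expires."
            }
        }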

  • How to find a good data center?

    - by drewda
    At my start-up, we're getting to the point where we should be hosting our servers at a data center. I'd appreciate any tips and tricks y'all can offer on finding a reputable place to colocate our racks. Are there any Web sites with customer reviews of data centers or should I just be asking around at techie events? Are unlimited bandwidth plans a gimmick or becoming the norm? Is it worth establishing a redundant set of machines at a second data center from Day One? Or just do offsite back-ups? Thanks for your suggestions.

  • How to sync a folder on a remote computer to a server on a domain

    - by Pierre-Alain Vigeant
    We have a small remote office that often shares data with us. I learned that the data is shared as email attachments, but that obviously leads to versioning hell and overwriting. I am looking for a way for them to synchronize a folder directly onto our main office domain controller. I personally use LiveMesh, but I would like a tool that synchronizes to our server directly, without a 3rd party hosting the data, since we already have an online backup service taking care of the offsite backup. What enterprise-class tool would let us synchronize a folder from a remote computer that is outside our domain into the file server of our domain? The synchronization has to be two-way, e.g.: someone from the remote office creates an invoice; someone from our office reviews it and makes modifications to it; the remote office needs to see the changes. Our server is on Windows 2003.

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to back up my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to back up my entire C:\ drive (the only partition) in a way that preserves all of my installed software and configuration, but avoids backing up log files and other ephemeral/temporary files that are regenerated during normal operation of the OS. Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we could all contribute towards a definitive list.

    Here's a list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default, listed at http://support.crashplan.com/doku.php/articles/admin_excludes:

        .*/(?:42|\d{8,})/(?:cp|~).*
        (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
        .*\.part
        .*/iPhoto Library/iPod Photo Cache/.*
        .*\.cprestoretmp.*
        *\.rbf
        :/Config\\.Msi.*
        .*/Google/Chrome/.*cache.*
        .*/Mozilla/Firefox/.*cache.*
        .*\$RECYCLE\.BIN/.*
        .*/System Volume Information/.*
        .*/RECYCLER/.*
        .*/I386.*
        .*/pagefile.sys
        .*/MSOCache.*
        .*UsrClass\.dat\.LOG
        .*UsrClass\.dat
        .*/Temporary Internet Files/.*
        (?i).*/ntuser.dat.*
        .*/Local Settings/Temp.*
        .*/AppData/Local/Temp.*
        .*/AppData/Temp.*
        .*/Windows/Temp.*
        (?i).*/Microsoft.*/Windows/.*\.log
        .*/Microsoft.*/Windows/Cookies.*
        .*/Microsoft.*/RecoveryStore.*
        (?i).:/Config\\.Msi.*
        (?i).*\\.rbf
        .*/Windows/Installer.*

    Other excludes:

        .*\.(class|obj)
        .*/hiberfil.sys
        (?i).*\.tmp
        (?i).*/temp/
        (?i).*/tmp/
        .*Thumbs\.db
        .*/Local Settings/History/
        .*/NetHood/
        .*/PrintHood/
        .*/Cookies/
        .*/Recent/
        .*/SendTo/

  • Lightweight, low cost enterprise backup solution

    - by Scott
    I'm looking for a backup solution, primarily for Windows clients (XP/7), that will either back up to 2 different servers (1 on site, 1 offsite over the internet - it can be our own server), or back up to 1 server which we would then somehow back up offsite/over the internet. By lightweight, I mean the backup client software should not eat up much memory or processor time, since some of the client machines are older. I am used to using CrashPlan for home use - the pricing is nice for the amount of backup I get, and it works great and is easy to install - but the price is going to be a little steep for enterprise-level backup of 1500+ machines. Possibly Zmanda and Bacula are good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

  • Windows Server 2008 SOMETIMES blocks connection to an FTP Site

    - by ssilver2k2
    Hello everyone. Recently, our company has been having issues connecting to a special FTP server that we use offsite to upload small files. We can connect to the test server perfectly fine, but when we use the production FTP, it only connects intermittently. If I plug my laptop directly into the modem and pull one of our other static IP addresses, I can connect just fine, but if I use our internal network, it is hit or miss. Also, turning OFF the firewall does NOT change the status of our connection problems; it will still time out. The weird thing is that it sometimes works, and when it does, it instantly asks for credentials and logs us on for that session; but if we disconnect and try again, it may or may not work. This also, coincidentally, started after uninstalling Hamachi, which blew away the internet settings on our Windows Server 2008 box - everything else is working fine, and it was working fine before Hamachi was installed.
