Search Results

Search found 6879 results on 276 pages for 'azure storage blobs'.

Page 232/276 | < Previous Page | 228 229 230 231 232 233 234 235 236 237 238 239  | Next Page >

  • I need an admin toolset for Windows 2003 and 2008

    - by eugeneK
    I know this is a very general question, but here goes. I need a few tools; I'll list my sysadmin tasks, and if you know of anything that can automate them I'd be glad to hear it. I don't mind paying for software unless it is far too expensive. First, I back up all files on the server to local/office storage. I 7-Zip all the SQL backup files, move them over the network to a centralized location, and then FTP them from an office PC that has no FTP server installed and cannot have one. Backups run at 4 AM, so I need to schedule both the compression and the FTP transfer that follows it. I also FTP all IIS web applications as a differential backup, and the same goes for the VOD movies. The second tool I need is a system monitor that watches every server, both from the machine itself and from an external location, for CPU/memory/hard disk and other basic failures. It should be able to call a website URL with parameters so that I get an email with a full report on failure. The third tool I need is a way to collect the event logs from 10 Windows-based servers without logging in to each of them manually. If you know of any solutions, thanks in advance.
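
    One way to script the first task: a minimal sketch that compresses the SQL backup folder with 7-Zip and FTPs the archive, assuming 7z.exe is on the PATH and using placeholder paths, host, and credentials (schedule it for after 4 AM with Task Scheduler):

        # Minimal sketch: compress SQL backups with 7-Zip, then FTP the archive.
        # Assumes 7z.exe is on PATH; paths, host, and credentials are placeholders.
        import subprocess
        from ftplib import FTP
        from pathlib import Path

        BACKUP_DIR = Path(r"D:\SQLBackups")           # hypothetical source folder
        ARCHIVE = Path(r"D:\Staging\sql_backups.7z")

        # 7z does its own wildcard expansion, so the * is passed through as-is.
        subprocess.run(["7z", "a", str(ARCHIVE), str(BACKUP_DIR / "*")], check=True)

        with FTP("ftp.example.com") as ftp:           # placeholder host
            ftp.login("backupuser", "secret")         # placeholder credentials
            with open(ARCHIVE, "rb") as f:
                ftp.storbinary(f"STOR {ARCHIVE.name}", f)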

    Read the article

  • SQL Server 2008 R2 mirroring failing

    - by andriusn
    I have two Windows 2008 R2 (Amazon EC2) instances running SQL Server 2008 R2. I use 9TB striped disks (9x1TB EBS volumes) for storage. One server runs as principal and the second as mirror. Both started from the same image, with database and tlog files located on the striped disk. The mirror server has failed 3 times in the last 2 months with these errors:

    EventID 823: The operating system returned error 2 (The system cannot find the file specified.) to SQL Server during a write at offset 0x00000048058a00 in file 'D:\TLogs***_log.ldf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

    and EventID 1454: Database mirroring will be suspended. Server instance 'xxxxxxxxxx' encountered error 823, state 6, severity 24 when it was acting as a mirroring partner for database '***'. The database mirroring partners might try to recover automatically from the error and resume the mirroring session. For more information, view the error log for additional error messages.

    followed by EventID 19019: The MSSQLSERVER service terminated unexpectedly.

    After this, rebooting the instance is necessary to restore mirroring. The first two times I thought it was hardware-related (striped disk failure) and relaunched the instance on new hardware, but the issue came back after a few weeks. It only affects mirror instances. Any help would be really appreciated. Thanks.
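
    Since the 823 message itself prescribes a consistency check, one low-effort first step is scripting DBCC CHECKDB against the mirror after it recovers; a minimal sketch with pyodbc (driver, server, and database names are placeholders):

        # Minimal sketch: run the DBCC CHECKDB the 823 error recommends.
        # Assumes pyodbc plus a SQL Server ODBC driver; names are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
            autocommit=True,  # DBCC CHECKDB should not run inside a user transaction
        )
        cur = conn.cursor()
        cur.execute("DBCC CHECKDB ('MyMirroredDb') WITH NO_INFOMSGS")
        # With NO_INFOMSGS a clean database returns nothing; corruption is
        # raised as SQL errors through the driver.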

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of trimming my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and ZManda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month it has cost me about $90 to back up a little over 500GB... That amount of data will only increase over time, since I am backing up photos (24MB RAW images + 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again) and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems:

    Mozy seems overly expensive per gig at 50c.
    CrashPlan won't sell to me since I am outside the US (they hide it on their site... buried in the FAQ section!).
    Backblaze doesn't support Windows Server.
    Carbonite's business pricing is $600 up front for 500GB of storage, and for $229 they will not back up Windows Servers.

    So, other than those, JungleDisk (at 15c per gig) or ZManda (also at 15c per gig), what other options are there? What are other people using?
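
    For what it's worth, the per-gigabyte arithmetic behind those numbers, using only the figures quoted above:

        # Per-GB cost comparison from the figures quoted in the question.
        current_cost, current_gb = 90.0, 500             # ~$90 for ~500 GB this month
        print(f"current setup: {100 * current_cost / current_gb:.0f}c per GB")  # ~18c
        for name, cents_per_gb in [("Mozy", 50), ("JungleDisk", 15), ("ZManda ZCB", 15)]:
            monthly = cents_per_gb / 100 * current_gb
            print(f"{name}: {cents_per_gb}c per GB -> ${monthly:.0f} for 500 GB")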

    Read the article

  • Symbolic links not working in MySQL

    - by Eno
    I'm having an issue; I searched a lot, but I'm not sure if it's related to a previous security patch. On the latest version of MySQL on Debian Lenny (5.0.51a-24) I need to share one table between two databases that live in the same path (/var/lib/mysql/db1 and db2). I created symbolic links in db2 pointing to the table files in db1. When I query that table from db2 I get this:

        ERROR 1030 (HY000): Got error 140 from storage engine

    This is how it looks:

        test-lan:/var/lib/mysql/test3# ls -alh
        drwx------ 2 mysql mysql 4.0K 2010-08-30 13:28 .
        drwxr-xr-x 6 mysql mysql 4.0K 2010-08-30 13:29 ..
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.frm -> /var/lib/mysql/test/blbl.frm
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYD -> /var/lib/mysql/test/blbl.MYD
        lrwxrwxrwx 1 mysql mysql   28 2010-08-30 13:28 blbl.MYI -> /var/lib/mysql/test/blbl.MYI
        -rw-rw---- 1 mysql mysql   65 2010-08-30 13:24 db.opt

    I really need those symlinks. Is there a way to make them work like they did before? (The old MySQL server is fine.) Thanks,
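
    If a security patch is indeed the suspect, one quick check is whether the server still has symlink support enabled at all; a minimal sketch using the pymysql driver (connection details are placeholders):

        # Quick check of MySQL symlink support; connection details are placeholders.
        import pymysql

        conn = pymysql.connect(host="localhost", user="root", password="secret")
        with conn.cursor() as cur:
            cur.execute("SHOW VARIABLES LIKE 'have_symlink'")
            print(cur.fetchone())  # ('have_symlink', 'DISABLED') means the server
                                   # refuses symlinked tables
        conn.close()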

    Read the article

  • Regular issue with keys on temp tables

    - by Christian
    We run a large forum with lots of reads and writes, particularly to the posts and topics tables, which are both InnoDB. Last week I started doing 12-hourly backups with innobackupex because mysqldump just takes forever (7+ million rows in the posts table). Something doesn't seem to like these backups, because I now have a recurring problem every other day. The symptoms:

    The front page of the site starts throwing errors.
    The logs start showing errors like: Error: 126 - Incorrect key file for table '/tmp/mysql/#sql_4e87_14.MYI'; try to repair it
    The /tmp dir fills up and we start getting Error: 1030 - Got error 28 from storage engine in the logs.

    The only way to fix it is to run OPTIMIZE TABLE on each of the posts and topics tables. I'm trying all I can to stop MySQL using disk for temp tables, but I'd have more problems than this if it used all my memory as well. My my.cnf is here: https://gist.github.com/cbiggins/0aa26f6defb7a14541d7 The box has 32GB of memory and I don't come near that usually; currently at 15GB in use. Thanks in advance. Update 1: Despite the conf looking like there is replication, there isn't. This is a standalone instance.
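
    Error 28 from the storage engine is the OS "no space left on device" error (errno 28, ENOSPC), so logging free space in MySQL's tmpdir can confirm whether the fill-ups line up with the backup window; a minimal sketch (the path is a placeholder):

        # Minimal sketch: log free space in MySQL's tmpdir to correlate the
        # "error 28" (disk full) events with the backup window.
        import shutil, time

        TMPDIR = "/tmp/mysql"   # placeholder for the configured tmpdir
        while True:
            usage = shutil.disk_usage(TMPDIR)
            print(time.strftime("%H:%M:%S"), f"free: {usage.free / 2**30:.2f} GiB")
            time.sleep(60)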

    Read the article

  • How much memory will a Windows file-server be able to use effectively?

    - by Zoredache
    In the near future we will be moving our fileserver to a newer box running Windows 2008 R2. I want to know how much memory Windows will be able to use on a system that is just a file server. In searching around I found an old document for Windows 2000 that mentions a maximum file-system cache size of 960MB. I suspect this limit no longer applies, but is there a new one? The file server will be just a standard Windows fileserver with 1TB of attached storage. The large majority of the files accessed during the day are typical Office documents, and 80-100 people usually use the fileserver on a typical day. This system will only be used as a file server; it doesn't have any other roles. In Windows 2008 R2, are there any hard limits on the filesystem cache? What are they? The server we will be re-using for this purpose currently has 4GB of memory, but it can be maxed out at 16GB. Is there any value in doing this for a Windows file server? And are there any performance counters I can look at on the existing 2003 fileserver that will tell me whether adding more memory would be worthwhile?
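
    On the counter question, a few memory/cache counters on the existing 2003 box are a reasonable starting point; a minimal sketch that samples them with the built-in typeperf tool (these counter names exist on Server 2003 and later):

        # Minimal sketch: sample cache-related counters with Windows'
        # built-in typeperf utility, 10 samples at the default interval.
        import subprocess

        counters = [
            r"\Memory\Available MBytes",
            r"\Memory\Cache Bytes",
            r"\Memory\Pages/sec",
        ]
        subprocess.run(["typeperf", *counters, "-sc", "10"])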

    Read the article

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this:

    Windows cannot access the file for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Setup.exe because of this error.

    Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

        @ECHO OFF
        IF DEFINED ProgramFiles(x86) (
            ECHO DEBUG: 64-bit platform
            SET _path="C:\Program Files (x86)\Canam"
        ) ELSE (
            ECHO DEBUG: 32-bit platform
            SET _path="C:\Program Files\Canam"
        )
        IF NOT EXIST %_path% (
            ECHO DEBUG: Folder does not exist
            PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
            START "" "Setup.exe" "/q"
            POPD
        ) ELSE (
            ECHO DEBUG: Folder exists
        )

    Running the script manually as administrator results in the same error. Setting up a shortcut with the same target and parameters works perfectly, and manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller; I don't know what version though. I'd appreciate any suggestions on fixing this. Thanks in advance! UPDATE: I highly doubt this matters, but the network share the script is hosted on is a shared drive, while the share the script references for the executable is a shared folder. Both shares have Domain Computers listed with full access on the sharing and security tabs. And PUSHD works without wrapping the path in quotes.

    Read the article

  • How to use Zune software to listen to podcasts with generic MP3 player?

    - by Bevan
    I listen to a bunch of podcasts - a great way to fill the otherwise mindless space of a daily commute. My MP3 player is a Transonic brand that appears on my computer as a generic storage device. I've been using iTunes to download the podcasts and manually moving the files out of the disk folder onto my player, but this is pretty tedious, and iTunes also fails to recognise that the files are gone and leaves them in the list. (Actually, iTunes for Windows is pretty much a dog, but that's a different rant.) The Zune software is 99% of what I want in a podcast downloader - it performs well, looks nice, and downloads reliably. Some features - like only downloading the next five unheard episodes of a podcast - are superb. However, if I manually move the files across to the MP3 player, the Zune software concludes that the file has never been downloaded and downloads it again. This leads me to my question: what is a good way to use the Zune software to download podcasts for listening on a generic MP3 player? Are there any add-ons for the Zune software to make this easier? Registry hacks? Can I configure the Zune software not to download the same episode multiple times? Is there a way for the Zune software to populate my MP3 player directly, instead of having to copy files?
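
    One workaround for the double-download problem is to copy episodes to the player instead of moving them, so Zune still sees its local file; a minimal sketch (both folder paths are placeholders):

        # Minimal sketch: copy new episodes to the MP3 player rather than
        # moving them, so Zune keeps its local copy and doesn't re-download.
        # Both paths are placeholders for the real locations.
        import shutil
        from pathlib import Path

        ZUNE_PODCASTS = Path(r"C:\Users\Bevan\Music\Zune Podcasts")
        PLAYER = Path(r"E:\Podcasts")   # the player mounts as a removable drive

        for episode in ZUNE_PODCASTS.rglob("*.mp3"):
            target = PLAYER / episode.name
            if not target.exists():
                shutil.copy2(episode, target)
                print("copied", episode.name)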

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregularly updated) 'core model' and one for every user (about 20 people). The users can access the core and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration etc., and is running quite well. These days an upgrade of the DB storage hard disks arrives - all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems a much better idea to move the WAL files (the pg_xlog directory) to their own partition: http://www.postgresql.org/docs/8.4/static/wal-internals.html What are your opinions? Are there any tablespace- or partitioning-related performance documents / benchmarks?
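
    If the core data does get split out, the tablespace mechanics are simple; a minimal sketch with psycopg2 (the mount point, schema, and table names are placeholders, and the new directory must be owned by the postgres user):

        # Minimal sketch: put a tablespace on the new disk and move one
        # heavily-used core table onto it. Names and paths are placeholders.
        import psycopg2

        conn = psycopg2.connect("dbname=research user=postgres")
        conn.autocommit = True  # CREATE TABLESPACE can't run inside a transaction
        cur = conn.cursor()
        cur.execute("CREATE TABLESPACE core_space LOCATION '/mnt/newdisk/pgdata'")
        cur.execute("ALTER TABLE core_model.measurements SET TABLESPACE core_space")
        conn.close()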

    Read the article

  • Windows Home Server installation fails because it can't find the DVD drive anymore

    - by BBlake
    I've got an old Dell Dimension 8300 desktop I decided to convert into a WHS box. I popped in a pair of 1 TB SATA drives, which were recognized fine by the BIOS and the currently installed OS (XP), so I decided to go ahead and install WHS. Near the end of the installation, WHS acts like it can no longer find the DVD drives (either of them; the box has a DVD-ROM and a DVD-RW). The specific error it gives is the "Can't configure storage" error. I've found several forums where people say they get this error if they remove the boot DVD during the installation (at the time of the first reboot). However, I never removed the DVD. After the error, it continues into WHS, so it did mostly install and I can work with it. However, it refuses to recognize the network card and the video card, and while it shows the two DVD drives, the system claims any CD/DVD I insert in either drive is corrupted and unreadable, even though none of them are. I've tried several reinstalls, both removing and not removing the DVD, but the result is the same regardless. Any other tricks anyone has found? If I can't figure this out, maybe I'll just install SBS 2008 and fake it up to be similar to WHS with some add-in tools. Shouldn't be too hard to create something, since WHS is based on SBS 2003 anyway.

    Read the article

  • Windows XP corrupts registry every several hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It is installed on RAID (Intel Matrix Storage), which is built into the chipset south bridge. The RAID has two 150 GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on). A week ago the PC changed its video mode to 256 colors (like a VESA mode) and after several moments I got a BSOD: c000021a: 0xc0000005. Dr. Watson did not create a dump although it is installed as the default debugger. After a reboot it said that the config file is missing or corrupted. So I booted to the recovery console and found that the registry file (config) was suspiciously small. I replaced it with one from a recovery point and Windows booted successfully. But after about 3 hours it crashed again in the same way! I looked in the event viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj and found that it is a device. I set a "deny from everyone" ACL on this device, and after several hours my Windows crashed again. I restored the registry, booted again, and there was no error about DLIAFS. I did a full chkdsk and it did not find anything bad. But I found an event about an error paging to \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I can't use SMART tools on the RAID, but I downloaded the latest software from Intel (it can do the same things as the RAID BIOS, but from Windows). It verified my disks, found some errors, fixed them, and then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks

    Read the article

  • Need Suggestions on Backup Strategies and Alternatives?

    - by Leejo
    I'm not sure where else to post this question, since it is not exactly code- or development-related... but I know Stack Overflow is very responsive to questions... Currently I use Mozy Home to perform an online backup of my laptop. So far this works well, since I only have one laptop that needs to be backed up. But this may soon change, and I want to explore alternatives to running an online backup on every machine. Ideally, I want to set up a network computer (laptop/desktop) with enough storage to hold the backups for all the other machines I would have. Each machine should be responsible for performing its own backup (to the network computer). This would require something like Mozy's incremental backup strategy, but done locally to the network computer instead of online. Can you recommend local backup software (backup to a networked PC, incremental backup, good restore options)? I'm also looking for any ideas on a local backup strategy, even if it's different from what I've stated. What works and what doesn't? Thanks in advance for your help!
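
    A minimal sketch of the incremental idea - each machine copies to the network box only what changed since its last run (the share path is a placeholder, and dedicated tools like rsync or robocopy do this more robustly):

        # Minimal sketch of a local incremental backup: copy files modified
        # since the last run to a share on the network PC. Paths are placeholders.
        import shutil
        from pathlib import Path

        SOURCE = Path.home() / "Documents"
        DEST = Path(r"\\BACKUP-PC\backups\laptop")   # hypothetical SMB share
        STAMP = DEST / ".last_backup"

        last = STAMP.stat().st_mtime if STAMP.exists() else 0.0
        for src in SOURCE.rglob("*"):
            if src.is_file() and src.stat().st_mtime > last:
                target = DEST / src.relative_to(SOURCE)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, target)
        STAMP.touch()   # record this run for the next increment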

    Read the article

  • Load Sharing Regarding Large Websites

    - by JHarley1
    Hello, I have a question regarding load sharing for large websites. My understanding: if you have a website that gets millions of hits a day, you need an architecture that can support that sort of pressure. You can do one of two things: invest in a single large server with huge amounts of processing power, memory and storage (such as Microsoft's TerraServer), or spread the load of your website across a number of machines. Let me take the second approach: you have a collection of machines, all running web server software and all with access to identical copies of the website's pages. You can spread the load across these machines either using a cyclic pattern in the DNS or using a load-balancing switch. The advantages of this approach are: redundancy - servers can fail and the others "pick up the slack" - and incremental growth - the ability to easily add new machines to the set-up. My questions: Is there a virtual approach to load balancing now? If the website runs from a database, is there still only a single copy of the database? If a user had a session running on one server (e.g. they had gone to www.example.org, been assigned to Server 2, and created a session there), would they still have their session if they refreshed the website and were allocated Server 3? What are the other disadvantages associated with load balancing? Many thanks, J
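
    On the session question: plain round-robin, which is what a cyclic DNS gives you, keeps no memory of who went where, so the session either has to live somewhere shared (e.g. the database) or the balancer has to be "sticky". A minimal sketch of the round-robin behaviour (server names are hypothetical):

        # Minimal sketch of round-robin allocation, as with a cyclic DNS:
        # a returning visitor can land on a different server each time, so
        # server-local sessions are lost unless sessions are shared or sticky.
        import itertools

        servers = itertools.cycle(["server1", "server2", "server3"])  # hypothetical pool
        for request_no in range(6):
            print(f"request {request_no} -> {next(servers)}")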

    Read the article

  • Ubuntu Device-mapper seems to be invincible!

    - by Andrew Bolster
    I'm working on a hopefully unrelated question and I've gotten into a strange situation. First: I know very little about the very low-level hardware/kernel storage driver magic, so I'm hoping a) someone can help and b) someone can explain it to me better. I've been trying a dozen different configurations of my 2x500GB SATA drives over the past few hours, involving switching between AHCI/IDE/RAID in my BIOS; after each attempt I've reset the BIOS option, booted into a live CD, and deleted the partitions and rewritten the partition tables left on the drives. Now, however, I'm sitting with a /dev/mapper/nvidia_XXXXXXX1 that seems to be impossible to kill! It's the only 'partition' that I see in the Ubuntu installer (though I can see the others in parted), but it is only the size of one of the drives, and I know I did not set any RAID level other than RAID0. Anyone have any ideas how I can kill this and get back to just two independent IDE drives? Or can anyone convince me of a reason to go the AHCI route? Many thanks in advance.

    Read the article

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated, but the backup strategy must have a simple restore procedure, so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time they take; I'm hoping something else is faster. The NAS drive is a DLink DNS-313 with a 1TB drive installed, connected to the router via an Ethernet cable. The USB drive is a Seagate 1TB and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during backup to speed up the process.

    Read the article

  • Abysmal transfer speeds on gigabit network

    - by Vegard Larsen
    I am having trouble getting my gigabit network to work properly between my desktop computer and my Windows Home Server. When copying files to my server (connected through my switch), I am seeing file transfer speeds below 10MB/s, sometimes even below 1MB/s. The machine configurations are:

    Desktop: Intel Core 2 Quad Q6600, Windows 7 Ultimate x64, 2x WD Green 1TB drives in striped RAID, 4GB RAM, AB9 QuadGT motherboard, Realtek RTL8810SC network adapter.
    Windows Home Server: AMD Athlon 64 X2, 4GB RAM, 6x WD Green 1.5TB drives in the storage pool, Gigabyte GA-MA78GM-S2H motherboard, Realtek 8111C network adapter.
    Switch: dLink Green DGS-1008D 8-port.

    Both machines report being connected at 1Gbps, and the switch lights up green for those two ports, indicating 1Gbps. When connecting the machines through the switch, I am seeing insanely low speeds from the WHS to the desktop measured with iperf: 10Kbits/sec (WHS running iperf -c, desktop running iperf -s). Using iperf the other way (WHS as iperf -s, desktop as iperf -c), speeds are also bad (~20Mbits/sec). Connecting the machines directly with a patch cable, I see much higher speeds from desktop to WHS (~300 Mbits/sec), but still around 10Kbits/sec from the WHS to the desktop. File transfer speeds are also much quicker (in both directions).

    Log from the desktop for the iperf connection from the WHS (through the switch):

        C:\temp>iperf -s
        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [248] local 192.168.1.32 port 5001 connected with 192.168.1.20 port 3227
        [ ID] Interval       Transfer     Bandwidth
        [248]  0.0-18.5 sec  24.0 KBytes  10.6 Kbits/sec

    Log from the desktop for the iperf connection to the WHS (through the switch):

        C:\temp>iperf -c 192.168.1.20
        ------------------------------------------------------------
        Client connecting to 192.168.1.20, TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [148] local 192.168.1.32 port 57012 connected with 192.168.1.20 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [148]  0.0-10.3 sec  28.5 MBytes  23.3 Mbits/sec

    What is going on here? Unfortunately I don't have any other gigabit-capable devices to try with.

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

    Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble?
    Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    and the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME       FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate  0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count     0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours       0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count    0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count     0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius  0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires: ASP.NET MVC / C# / .NET Database or NoSQL data store (must be accessible from C#) Here's the challenge to you server gods: What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules? In what ways does this solution differ from the "standard" Microsoft deployment scenario? Where does this solution's performance break down once the app begins to scale? I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production. Especially if they are unique alternatives. For ideas, consider some of the possible variations, a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level. An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early stage applications. An example of b): running the Mono Framework on Linux. An example of differing from the "standard" stack: running Mono on Linux will require a completely different server OS familiarity. None of the Windows-based knowledge really transfers. An example of breaking down under scale: SQL Server Express will only scale to 1GB of memory and 4GB of disk storage. After that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is: You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance. The last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question Does Amazon RDS support multiple databases per instance?, but the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone with experience using Amazon RDS who can attest to what the situation actually is.

    Read the article

  • Somewhat powerful server needed for computationally expensive stuff

    - by Dane Larsen
    So here's my problem. My dad runs a company that does some rather computationally expensive stuff - not supercomputer-level stuff, but the average job takes several hours to run on his Core i7 desktop. He asked me to look into a way to let his customers use the code on an hourly basis, namely via a server. Ideally he'd be able to buy a box for about $1000 and hook it right up to our home connection. Unfortunately, the data that needs to be both sent and received is on the order of several hundred megs, we live in a rural area, and the fastest connection offered is 1.5Mbit/s down and about 0.3Mbit/s up. Not workable. What are the options for this kind of thing? Ideally, we'd have about 2GB of RAM, 300-500GB of storage, and a nice dual core, and it has to run some flavor of Linux. Any suggestions? Thanks in advance. EDIT: Also, ideally the monthly price would be < $100 per month.
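
    The bandwidth arithmetic that rules out the home connection, assuming ~300 MB in each direction per job:

        # Transfer-time arithmetic for one job, assuming ~300 MB each way
        # ("several hundred megs") over the 1.5/0.3 Mbit/s home connection.
        job_mb = 300
        down_mbit, up_mbit = 1.5, 0.3
        down_secs = job_mb * 8 / down_mbit   # receiving a job's input data
        up_secs = job_mb * 8 / up_mbit       # sending the results back
        print(f"download: {down_secs / 60:.0f} min, upload: {up_secs / 3600:.1f} h")
        # roughly 27 minutes down and over 2 hours up -- per job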

    Read the article

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
    The boss asked me to check whether I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIS Mailbox Store, category General, Event 9690. The message said (edited to generalize it): Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this database (the logical size equals the physical size of the .edb file and the .stm file minus the logical free space in each) is 22GB. This database size has exceeded the size limit of 22 GB. This database will be dismounted immediately. Hmm... it happened at five in the morning, and I'm thinking this is a pretty good hint at the culprit. Thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard edition, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.
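
    If this is the Exchange 2003 SP2 configurable database size cap, it can reportedly be raised (Standard edition supports up to 75 GB) with a registry DWORD. This is a hedged sketch only - verify the exact key path and value name against Microsoft's documentation before touching the registry, and note that SERVERNAME and the Private-<GUID> segment are placeholders for the real server name and mailbox store GUID:

        # Hedged sketch: raise the Exchange 2003 SP2 mailbox store size cap.
        # The key path and value name are from memory and MUST be verified
        # against Microsoft's KB; SERVERNAME and Private-<GUID> are placeholders.
        import winreg

        key_path = (r"SYSTEM\CurrentControlSet\Services\MSExchangeIS"
                    r"\SERVERNAME\Private-<GUID>")
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                            winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Database Size Limit in GB", 0,
                              winreg.REG_DWORD, 30)   # new cap in GB
        # Restart the Information Store service for the change to take effect.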

    Read the article

  • USB drive dead after stopping copying process on Snow Leopard Server

    - by Anriëtte Combrink
    Hi there. I was copying to a flash drive from our Snow Leopard server when I stopped the copying process half way through. The device then disappeared from the desktop, so I unplugged it and plugged it right back in, but it just didn't show up. I then unplugged it and plugged it into a Windows XP machine as well as a Windows 7 machine. On both machines, I right-clicked "My Computer" and selected "Manage…". On both PCs the device was located under Removable Storage, but had no size and no drive letter. It shows up in "My Computer", but when I choose "Format…" from the right-click (context) menu, it says the drive could not be formatted. Can someone please advise me? The flash drive is about 5 minutes old and should have no reason to be dead. I really can't lose this drive (I don't need the data on it, I just need it to work again). Any help would be appreciated. Thanks in advance.

    Read the article

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL/DR version: Mono + Duplicati.commandline.exe restore etc. etc. spits this out for several files regardless of what I try. I am able to list sets, list files in those sets, and even do a verify, but every time I do a restore of any kind I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file. Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, though occasionally a Windows box is added too. The solution as-is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files for apparently no good reason; I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no useful log of the reasons for these errors, and even the "--debug-output=true" flag does nothing. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this? Or possibly an alternative solution that offers the same benefits as Duplicati/Duplicity but actually works across platforms?

    Read the article

  • SQL server agent job to execute SSIS package fails, package succeds if run manually

    - by growse
    I've got an SSIS package installed on a SQL server (SQL Server 2012). It's fairly simple and just fetches data from a remote data source and adds it to a local table. The remote connection string uses SQL Server authentication, while the local connection uses Windows auth. The remote connection password is protected, and the package was imported with the protection level set to "Rely on server storage and roles for access control". If I run the SSIS package manually, it works. If I run it from the command line using dtexec, it works. If I use runas to switch to the domain account that the SQL Server Agent runs under and then run the package using dtexec, it works. But if I create a SQL Agent job with a single step to run the package, it fails, providing very little detail as to what's going on. I'm guessing it's not able to get the password to log into the remote SQL server, because it fails very quickly. Also, if I tick 'log to table' and view the resulting file, I get the following: Description: ADO NET Source has failed to acquire the connection {0D8F2CD4-A763-4AEB-8B52-B8FAE0621ED3} with the following error message: "Login failed for user 'username'.". If I try to add the password to the connection string manually under Data Sources in the job step dialog, it refuses to save it, always seeming to strip the 'password' part of the connection string. I thought SQL Server Agent jobs always ran under the context of the account the SQL Server Agent runs under. That account is a sysadmin on the local SQL server, and the package works using dtexec under that account, so why would it fail when run as an agent job?

    Read the article
