Search Results

Search found 30724 results on 1229 pages for 'backup solution'.


  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs after the message "Running /scripts/init-bottom ... done" during boot.

    I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; it was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and on the VirtualBox VM, the idea being to get the same version kernel running on both:

        sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade
        sudo reboot

    Then I rsynced the entire Proxmox container to a temporary directory in the VirtualBox VM:

        cd /
        mkdir /tmp/backup
        rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup

    Next I shut down the virtual machine and booted the VM with a bootable Linux image (the desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso), dropped to a root prompt, and mounted the VM root filesystem:

        sudo mount /dev/sda1 /mnt

    I removed files from most of /mnt:

        cd /mnt
        sudo rm -rf bin etc home lib opt sbin root usr var

    moved all of the files from /mnt/tmp/backup into /mnt:

        sudo mv /mnt/tmp/backup/* /mnt

    and rebooted. For me, at this point the system freezes after starting, right after the message:

        Running /scripts/init-bottom ... done

    I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.
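    For what it's worth, one classic cause of a hang at exactly this point after cloning a container into a VM is a root-filesystem reference that no longer matches the new disk: the rsync brought over the container's /etc/fstab while /boot and GRUB stayed native to the VM. A minimal cross-check, assuming the VM's disk is still mounted at /mnt from the live CD (the device name is from the post; the rest is a suggestion, not something confirmed in the thread):

        sudo blkid /dev/sda1                      # actual UUID of the VM's root filesystem
        grep -v '^#' /mnt/etc/fstab               # what the copied fstab expects to mount
        grep -i 'root=' /mnt/boot/grub/grub.cfg   # what GRUB passes to the kernel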

    Read the article

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error that is being provided is:

        DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)

    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it, and received the following error: "The update is not applicable to your computer." I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

    Read the article

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted."

    Server: Exchange 2003 running on a Small Business Server
    Latest full backup: one week old
    Backup program: Backup Exec 9.0

    This is what I have done:
    1. Copied every file in the MDBDATA folder (edb, stm, log).
    2. Ran Eseutil /d for priv1.edb.
    3. Ran Eseutil /p for priv1.edb (took seven hours).
    4. Ran Isinteg -fix -test alltests; now it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that there is no log file created.
    5. Giving up on this route, I decided to do a restore from the backup. It fails with the error "Unable to read the header of logfile E00.log. Error -501", and the error "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5 The log file is damaged."

    My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?
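    Worth checking before further repair attempts (a suggestion, not from the thread: these are standard eseutil switches, and the MDBDATA path below is the Exchange 2003 default, which may differ on SBS):

        eseutil /mh "C:\Program Files\Exchsrvr\MDBDATA\priv1.edb"   (dumps the header; reports Clean vs Dirty Shutdown state)
        eseutil /ml "C:\Program Files\Exchsrvr\MDBDATA\E00"         (verifies the integrity of the E00* transaction log set)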

    Read the article

  • Ubuntu rm not deleting files

    - by ILMV
    My colleague and I have been struggling with deleting a directory and its contents. We are working on a new version of our website's source code on Ubuntu 8.04 (dir: /var/www/websites); what we want to do is delete the websites directory and recreate it from a .tar backup we created a couple of weeks ago. The purpose of this is so we can run our deployment procedure in a local environment before we do so on our live/public environment. We use this command:

        rm -r websites

    This deletes the directory and the files within it. The problem occurs when we un-tar our backup file and view the website: we are getting files that don't exist in the .tar backup; in fact these files were only created a few days ago and should have been deleted. We delete the directory once more in the manner stated above, then create a new websites directory using the mkdir command. Strangely, at this stage the 'deleted files' do not come back, but if we unpack our .tar file the 'deleted files' appear again. Is there a way to ensure these files are deleted, or at least the pointers that associate them with said directory?

    - Our .tar backup does not include these files
    - We do not want to use the shred command
    - We do not want to use 3rd-party applications
    - The solution should be functional via terminal (SSH)

    Many thanks!

    EDIT: Er... we fixed it. Turns out the files that were reappearing were because of a link we have to another directory (outside /var/www/websites); we were restoring the link but not deleting the files on the other end. D'oh! Many thanks for your help guys... Friday afternoon syndrome :-)
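    (A quick generic check for that failure mode — spotting any links in a tree before deleting it; nothing here is from the original thread:)

        find /var/www/websites -type l -ls    # list symlinks and where they point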

    Read the article

  • Correcting owner/permissions on damaged directory tree in linux

    - by mcs130
    I inadvertently made a backup copy of a directory recursively and forgot the -a (--preserve) switch when doing so. This damaged my backup directory (which contains data we need to access). The directory and all of its child folders and files comprise an installation of an application, including a Postgres DB and Solr files. The original copy was used for a failed re-config attempt. Now I need to use the backup copy to start over, only the ownership of the backup copy is now root across everything and it is no longer usable (processes won't run due to the ownership problems I created when I forgot the -a on the cp -r). I've re-installed a clean copy of the application into a third location now (which has the correct owner/perms) and need to copy the owner/perms from this good directory tree onto the damaged one. What is the best way (if it is even possible) to do this? (I've Googled and seen everything from Perl scripting to setfacl/getfacl suggested, but am unfortunately still confused.) Apologies if this seems a dumb question. Thanks.
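    One possibility using plain GNU coreutils — walk the clean tree and stamp each file's owner and mode onto the matching path in the damaged tree. A minimal sketch; both paths are placeholders, paths that exist only in the damaged tree are left untouched, and it is worth dry-running on a copy first:

        cd /path/to/clean-install
        find . -print0 | while IFS= read -r -d '' f; do
            chown --reference="$f" "/path/to/damaged-backup/$f" 2>/dev/null
            chmod --reference="$f" "/path/to/damaged-backup/$f" 2>/dev/null
        done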

    Read the article

  • How to achieve the following RTO & RPO with log shipping only, using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log-shipping solution for achieving the following:

    - 15-minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
    - 5-minute Recovery Time Objective (must be able to get the db up and running again within 5 minutes)

    Considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

    - Using a 40 Mbit/sec fiber link between the primary and disaster recovery (DRC) sites
    - The sites are about 600 km apart
    - At close of business, the amount of data generated is predicted to be about 150 MB/sec
    - Log backup is planned for every 5 min

    Doing some rough calculation I came up with the following numbers: 40 Mbit/sec = 5 MB/sec @ 100% network efficiency, and 5 MB/sec = 300 MB/min. At 300 MB/min, the total amount of data that can be transferred within the 5-minute RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. So if we cut it down to a 3-minute log-shipping window, which equals ~900 MB over 3 minutes at 100% network efficiency, that leaves about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 minute, but assume it can.

    For the close-of-business scenario: at 150 MB/sec, and considering the 3-minute log-shipping window, that should equal about 27 GB of data over 3 minutes...??? I think this is where the SLA will break, since there is no way to transfer 27 GB of data over a 40 Mbit/sec line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
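    Working through those numbers explicitly (same assumptions as above, nothing new added):

        40 Mbit/s ÷ 8 bits/byte     = 5 MB/s at 100% efficiency
        5 MB/s × 60 s               = 300 MB/min
        300 MB/min × 3 min          = 900 MB movable in the 3-minute shipping window
        150 MB/s × 180 s            = 27,000 MB ≈ 27 GB generated at close of business

    so the close-of-business volume exceeds the window's capacity by a factor of about 30, which is exactly the gap the poster is pointing at.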

    Read the article

  • Robocopy failure with Windows Server 2008 Scheduled Task

    - by CC
    So I have a batch script for robocopy. Running this from the command line does exactly what I want:

        robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copyall /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np

    Then I create a Scheduled Task in Windows Server 2008. If I set up the task to use my Domain Admin account, great. But I'm trying to get it to run as a separate domain account for Scheduled Tasks. If I use that account, folders get created, but files aren't copied. I get the following error:

        2011/02/17 15:41:48 ERROR 1307 (0x0000051B) Copying NTFS Security to Destination Directory D:\SQL Backup\folder\
        This security ID may not be assigned as the owner of this object.

    I've verified my domain\Scheduled Tasks account has Full Control NTFS permissions on both the source and destination, and Full Control sharing on my hidden \\server1\Backup$ share. Just for giggles, I've tried adding the domain account to the local Administrators group on both servers. This works fine, but that seems like a lot of privileges just to copy files. Any ideas on what I'm missing?
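    One note on the error itself: /copyall is shorthand for /copy:DATSOU, and the O (owner) and U (auditing) parts are what require backup/restore-level privileges, which Domain Admins hold implicitly. If the destination doesn't actually need owner and audit info, narrowing the copy spec avoids the privileged operation entirely — a sketch using the same paths as the post:

        robocopy "D:\SQL Backup" \\server1\Backup$\daily /mir /s /copy:DATS /log:\\lmcrfs4g\NavBackup$\robocopyLog.txt /np

    Otherwise, granting the task account the "Back up files and directories" and "Restore files and directories" user rights is the narrower privilege that ownership copying needs, short of full local admin.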

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry list of reading material to help me straighten this out so I can go back to .NET programming.

    Current storage system: We're running RAID5 + hotspare (8x500 GB spindles) on a PERC6i in a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2TB + 1x800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.

    Our applications: We have an SBS server as well as a minor (2x50 GB, but growing at 10 GB/month) database server... Our application that lives on the database VM is CPU- and I/O-intensive; it's a database-churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...

    Performance issue: When I do a backup, restore, or worse (copy a backup from one VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you have PCI-X bandwidth, but the system-wide slowdown is killing productivity.

    Questions:
    - What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD?
    - Are there sweet spots/ugly spots in the storage setup?
    - Can you use an SSD PCI-X card in ESX for the tempdb? Good/bad idea?
    - Is there any way to "share" some image in a copy-on-write fashion? Most of the "backup-copy-restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week... [runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box]

    Read the article

  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job. These working-set jobs run abysmally slowly. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and, over the course of the backup job, slowly drop so low that the BE job-rate display in "current jobs" goes blank. Since we usually are only doing changed files for one day, the job is usually small and finishes overnight, and we don't worry about the slowness. But we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, sent them that log and a VXgather from the BE host, and they had no fix/workaround. To give an idea: I have the mentioned working-set job running for the last 3 1/2 hours and it's backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless...

    Read the article

  • "Safely remove hardware"...doesn't.

    - by Kev
    I have an external USB hard disk that I have scripted to safely shut down after a backup, so the backup operator can unplug it, and knows not to if the lights are still on for some reason. It's always worked fine using the DevEject command-line utility. This week it failed for some reason:

        DevEject 1.0
        2003 c't/Matthias Withopf
        Ejecting 'USB Mass Storage Device' [USB\VID_0411&PID_002A\00000704C8D2]...FAILED (23,5)
        Error ejecting device USB Mass Storage Device, vetoed (15,5)!

    Worse yet, using the Safely Remove Hardware tray icon, I click Stop, click OK, it pauses about 5 seconds with OK and Cancel greyed out, closes the sub-window, and then the main window with the Stop button still shows the device, and Stop is still available. I can keep doing that and it never gets rid of the device. I can still access it in Explorer. LockHunter reports that nothing is locking the drive. I've made no changes to the backup configuration or anything to do with the drive this week. Why the sudden flake-out? And short of a restart, which I can't do today before the backup operator goes home, how do I fix it?

    Read the article

  • tar Cannot stat: No such file or directory

    - by VVP
    Hi all, I got this error during my mail server backup:

        2010-09-16 06:24:20 ERROR backup of /var/mail/vhosts failed: tar: Removing leading `/' from member names
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588471.Vfd00I16e0223M187263.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284587441.Vfd00I16e0220M85965.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588863.Vfd00I16e0225M370937.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284602404.Vfd00I16e022aM416444.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284594551.Vfd00I16e0228M678444.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284588944.Vfd00I16e0226M622591.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284587271.Vfd00I16e021fM96119.server.host-name\:2,: Cannot stat: No such file or directory
        tar: /var/mail/vhosts/host-name/0/user-name/.maildir/cur/1284599458.Vfd00I16e0229M181400.server.host-name\:2,: Cannot stat: No such file or directory
        tar: Error exit delayed from previous errors

    Did this happen because a user deleted his messages? Is there any way to prevent this? I am assuming it can happen with more than just e-mail backups. Can I rely on tar & gzip as a mail backup system?
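    A general Maildir property is visible in the ":2," suffixes above (though not confirmed as the cause in this thread): message filenames encode flags, so deliveries and flag changes rename files between tar's directory scan and its read, which produces exactly this kind of "Cannot stat" noise on a live spool. Two common mitigations, sketched with placeholder paths:

        # Option 1: rsync to a quiet staging copy first, then tar the copy
        rsync -a --delete /var/mail/vhosts/ /backup/staging/vhosts/
        tar -czf /backup/mail-$(date +%F).tar.gz -C /backup/staging vhosts

        # Option 2: tell GNU tar to carry on past files that vanish mid-run
        tar --ignore-failed-read -czf /backup/mail.tar.gz /var/mail/vhosts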

    Read the article

  • How to save contacts from iPhone to computer?

    - by goodm
    Step 1: Download the Tansee iPhone Transfer Contact free trial version here: http://softseeking.com/prodail.aspx?proid=47 (also at http://www.softseeking.com/prodail.aspx?proid=47), then install the software (skip if already done).

    Step 2: Connect the iPhone to your computer.

    Step 3: Run Tansee iPhone Transfer Contact; the contacts in iPhone memory will display automatically, as shown on your iPhone screen (fig 1). Click a single name and all of his or her information will display (fig 2).

    Step 4-a: In the fig 1 situation, you can click the "Copy" button to copy all contacts from your iPhone to your computer, then select options: 1. Choose File Type: back up to a TXT, ANTC, or CSV file. 2. Choose File Path: you can change the backup path if you do not want the default path. 3. Advanced Option: if you chose the ANTC format in step 1, you can add a password to protect the file (note: we do not know the password, so please do remember it). Click the OK button to finish the copy (fig 3). Note: you can only copy the first 5 contacts with the trial version.

    Step 4-b: In the fig 2 situation, click the "Copy Contact From <name>" button to copy the contacts of a single person, then select options: 1. Choose File Type: back up contacts to TXT and CSV files in single-contact transfer. 2. Choose File Path: you can change the backup path if you do not want the default path. 3. Advanced Option: disabled in single-contact transfer. Click the OK button to finish the copy (fig 4). Note: you can only copy the first 5 contacts in the trial version.

    Read the article

  • Tools to manage sql 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and management of those 20 DBs in this new mirrored environment easily. There are many steps in setting each DB up and I want to automate as much as possible.

    Edit: Here are the steps I've been doing manually:

    1. Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server. Then sync those users/passwords onto the other SQL 2008 server with the same SIDs so that when we do the DB backup and restore they match up.
    2. Take a backup of each SQL 2000 DB.
    3. Copy them to server A.
    4. Restore the backup to server A.
    5. Backup from server A, copy to server B, restore there.
    6. Run the mirror "Configure Security" wizard.
    7. Start mirroring.

    I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
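    Steps 2 through 5 script fairly cleanly with sqlcmd and plain BACKUP/RESTORE statements. A minimal sketch of the idea — the server names, database list, and share are all placeholders, the mirror-side restore must be WITH NORECOVERY or mirroring won't start, and the endpoint/security wizard still has to run afterwards:

        @echo off
        rem Placeholder names: OLD2000 (source), SRVA (principal), SRVB (mirror)
        for %%D in (Sales HR Inventory) do (
          sqlcmd -S OLD2000 -Q "BACKUP DATABASE [%%D] TO DISK='\\staging\bak\%%D.bak' WITH INIT"
          sqlcmd -S SRVA    -Q "RESTORE DATABASE [%%D] FROM DISK='\\staging\bak\%%D.bak' WITH RECOVERY"
          sqlcmd -S SRVA    -Q "BACKUP DATABASE [%%D] TO DISK='\\staging\bak\%%D_A.bak' WITH INIT"
          sqlcmd -S SRVB    -Q "RESTORE DATABASE [%%D] FROM DISK='\\staging\bak\%%D_A.bak' WITH NORECOVERY"
        )
        rem Any log backups taken on SRVA after its full backup must also be restored on SRVB WITH NORECOVERY.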

    Read the article

  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my Domain Controllers and Time Service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time service with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone?

    Registry for PDC has NTP
    Registry for Backup has NT5DS

    Results of w32tm /monitor:

        Getting AD DC list for default domain...
        Analyzing:
        Warning: Reverse name resolution is best effort. It may not be correct since RefID field in time packets differs across NTP implementations and may not be using IP addresses.
        DC2.local..com[192.168.1.8:123]:
            ICMP: 1ms delay
            NTP: -0.6349491s offset from DC1.local..com
                RefID: DC1.local..com [192.168.1.9]
                Stratum: 4
        DC1.local..com *** PDC *** [192.168.1.9:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.local..com
                RefID: wwwco1test12.microsoft.com [65.55.21.20]
                Stratum: 3
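    If the backup DC really is clinging to a manual time configuration, the standard reset (a suggestion using stock w32tm switches, run on the backup DC) is to point it back at the domain hierarchy, where it should chase the PDC:

        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /query /status    (confirm the source is now the PDC)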

    Read the article

  • NTbackup doesn't complete on system state

    - by Joe Majsterski
    I have a Windows 2003 server that is running a semi-custom backup task. The scheduled task calls NTbackup with a few switches depending on whether it is a full or incremental backup. Most of the time, the NTbackup completes fine, and the wrapper then appends the NTbackup log into its own log before adding a few final comments and completing. The problem I am having is that sometimes, NTbackup seems to just... blank out. It always completes backup of the C: and E: drives, but then it will start the system state and not add any more messages into the event log saying it completed that. And the NTbackup log is left empty, since it doesn't write anything to the log until all the backup tasks are complete. This is causing the wrapper to append no text into its own log. That causes problems for us because we read the information out of that log to determine whether backups are failing. The wrapper task also reports that it is completing normally in the event log. Anyone ever seen a case where system state doesn't complete consistently? To be clear, the server is not logging any error messages anywhere. It's just not seeming to complete or log anything.
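    Since system state on Server 2003 rides on the Volume Shadow Copy writers, a stalled-but-silent run is often a stuck writer. Two low-risk checks (generic commands, nothing specific to this wrapper; the job name and path below are placeholders):

        vssadmin list writers
            (look for writers whose state is not "Stable" or that report an error)
        ntbackup backup systemstate /J "SysState only" /F C:\temp\sysstate.bkf
            (running system state in isolation shows whether the hang follows it)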

    Read the article

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then, I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
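    One detail that explains the numbers (documented rsync behavior, though untested against this exact s3fs setup): when both paths look local — and an s3fs mount does — rsync switches to --whole-file by default, skipping the delta algorithm entirely, because reading and checksumming both copies locally usually costs more than just copying. Forcing the delta path looks like:

        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Whether that helps over s3fs is another matter: the receiver still rewrites the destination into a full temporary file, so the bytes pushed to S3 may not drop much.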

    Read the article

  • Advice on cloning disk

    - by hks
    I'm going to buy a second disk for backup, the same size as my laptop's. I want to mount it in a casing via USB and back up the entire HDD every so often. That's because I want the possibility to just switch drives in case something goes wrong. I'm using Linux, and obviously the right tool seems to be dd. The thing is that my laptop drive has a speed of around 50-70 MB/s and USB 2.0 is 57 MB/s, so copying my 250 GB disk should take me more than 1 hour if I'm lucky. I can't wait that long; I want some differential backup. I read one of JWZ's articles, in which he gives more details for using rsync on a Mac. He writes that there is a possibility of making the rsync'ed disk bootable. So my question is: how do I make an rsync'ed HDD bootable under Linux, or are there other 'quick backup' tools for Linux that would allow me to just swap drives? Or should I just stick to dd :( ?
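    A minimal sketch of the usual Linux recipe — the device name and mount point are placeholders, and it assumes a GRUB-legacy-era distro; check how your fstab identifies disks before trusting the copy to boot:

        # Mirror the running system onto the backup disk mounted at /mnt/backup
        sudo rsync -aHx --delete \
          --exclude={/dev/*,/proc/*,/sys/*,/tmp/*,/mnt/*,/media/*,/lost+found} \
          / /mnt/backup

        # Make the copy bootable by putting GRUB on the backup drive
        sudo grub-install --root-directory=/mnt/backup /dev/sdb

        # If /etc/fstab refers to the disk by UUID, edit the copy's fstab:
        # the two disks will have different UUIDs.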

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that provides users in the Backup Operators group with start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that this is very easy: create a GPO, give the user start/stop permissions for the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila. Done. Not so much, but I have to be doing something wrong. My install is pretty much the default. The domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop the two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.
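    A fallback while the GPO route is being fought with: service start/stop rights are just ACEs in each service's security descriptor, so they can also be granted directly with sc. The service name and SID below are placeholders — dump the current SDDL with sdshow and splice the new ACE into it rather than inventing the whole string:

        sc sdshow SomeService

        rem In service SDDL, RP = start, WP = stop; CC/LC/SW/LO/RC are query-type rights.
        rem S-1-5-21-111-222-333-1104 stands in for the real group SID.
        sc sdset SomeService "D:(A;;CCLCSWRPWPLORC;;;S-1-5-21-111-222-333-1104)(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCLCSWRPWPDTLOCRRC;;;BA)(A;;CCLCSWLOCRRC;;;IU)"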

    Read the article

  • MVC Razor Engine For Beginners Part 1

    - by Humprey Cogay, C|EH, E|CSA
    I. What is MVC?
    a. http://www.asp.net/mvc/tutorials/older-versions/overview/asp-net-mvc-overview

    II. Software requirements for this tutorial
    a. Visual Studio 2010/2012. You can get your free copy here: Microsoft Visual Studio 2012
    b. MVC Framework
       Option 1 - Install using a standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=30683
       Option 2 - Install using the Web Platform Installer: http://www.microsoft.com/web/handlers/webpi.ashx?command=getinstallerredirect&appid=MVC4VS2010_Loc

    III. Creating your first MVC4 application
    a. In Visual Studio, click File > New > Solution.
    b. Click Other Project Types > Visual Studio Solutions, select Blank Solution in the templates window, and let us name our solution MVCPrimer.
    c. Now click File > New and select Project.
    d. Select Visual C# > Web, select ASP.NET MVC 4 Web Application, and enter MyWebSite as the name.
    e. Select Empty, with Razor as the view engine, and uncheck "Create a unit test project".
    f. You can now view a basic MVC 4 application structure in your Solution Explorer.
    g. Now we will add our first controller by right-clicking on the Controllers folder in Solution Explorer and selecting Add > Controller.
    h. Change the name of the controller to HomeController and, under the scaffolding options, select Empty MVC Controller.
    i. You will now see a basic controller with an Index method that returns an ActionResult.
    j. We will now add a new view folder for our Home controller. Right-click on the Views folder in Solution Explorer, select Add > New Folder, and name this folder Home.
    k. Add a new view by right-clicking on the Views > Home folder and selecting Add > View.
    l. Name the view Index, select Razor (CSHTML) as the view engine, leave all checkboxes unchecked for now, and click Add.
    m. Note the relationship between our HomeController and the Home views subfolder.
    n. Add new HTML content to our newly created Index view.
    o. Press F5 to run our MVC application.
    p. We will now create our model. Right-click on the Models folder in Solution Explorer and select Add > Class.
    q. Let us name our class Customer.
    r. Edit the Customer class with the following code.
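    (The class body itself is missing from this copy of the post; reconstructed below from the properties the controller uses later — nothing else assumed.)

        namespace MyWebSite.Models
        {
            public class Customer
            {
                public int Id { get; set; }
                public string CompanyName { get; set; }
                public string Description { get; set; }
                public string ContactPerson { get; set; }
                public string ContactNo { get; set; }
            }
        }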
    s. Open the HomeController by double-clicking HomeController in the Controllers folder and edit it as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Mvc;

        namespace MyWebSite.Controllers
        {
            public class HomeController : Controller
            {
                //
                // GET: /Home/
                public ActionResult Index()
                {
                    return View();
                }

                public ActionResult ListCustomers()
                {
                    List<Models.Customer> customers = new List<Models.Customer>();

                    // Add first customer to our collection
                    customers.Add(new Models.Customer()
                    {
                        Id = 1,
                        CompanyName = "Volvo",
                        ContactNo = "123-0123-0001",
                        ContactPerson = "Gustav Larson",
                        Description = "Volvo Car Corporation, or Volvo Personvagnar AB, is a Scandinavian automobile manufacturer founded in 1927"
                    });

                    // Add second customer to our collection
                    customers.Add(new Models.Customer()
                    {
                        Id = 2,
                        CompanyName = "BMW",
                        ContactNo = "999-9876-9898",
                        ContactPerson = "Franz Josef Popp",
                        Description = "Bayerische Motoren Werke AG (BMW; English: Bavarian Motor Works) is a German automobile, motorcycle and engine manufacturing company founded in 1917."
                    });

                    // Add third customer to our collection
                    customers.Add(new Models.Customer()
                    {
                        Id = 3,
                        CompanyName = "Audi",
                        ContactNo = "983-2222-1212",
                        ContactPerson = "Karl Benz",
                        Description = "is a multinational division of the German manufacturer Daimler AG,"
                    });

                    return View(customers);
                }
            }
        }

    t. Let us now create a view for this class. But before continuing, press Ctrl+Shift+B to rebuild the solution; this makes the newly created model appear in the model class dropdown of the Add View menu. Right-click on Views > Home and select Add > View.
    u. Let us name our view ListCustomers, select Razor (CSHTML) as the view engine, put a check mark on "Create a strongly-typed view", and select Customer (MyWebSite.Models) as the model class. Select List under the scaffold template and click OK.
    v. Run the MVC application by pressing F5, and in the address bar append Home/ListCustomers. We should now see a web page similar to the one below.
    x. You can edit ListCustomers.cshtml to remove and add HTML:

        @model IEnumerable<MyWebSite.Models.Customer>

        @{
            Layout = null;
        }

        <!DOCTYPE html>
        <html>
        <head>
            <meta name="viewport" content="width=device-width" />
            <title>ListCustomers</title>
        </head>
        <body>
            <h2>List of Customers</h2>
            <table border="1">
                <tr>
                    <th>@Html.DisplayNameFor(model => model.CompanyName)</th>
                    <th>@Html.DisplayNameFor(model => model.Description)</th>
                    <th>@Html.DisplayNameFor(model => model.ContactPerson)</th>
                    <th>@Html.DisplayNameFor(model => model.ContactNo)</th>
                </tr>
                @foreach (var item in Model) {
                    <tr>
                        <td>@Html.DisplayFor(modelItem => item.CompanyName)</td>
                        <td>@Html.DisplayFor(modelItem => item.Description)</td>
                        <td>@Html.DisplayFor(modelItem => item.ContactPerson)</td>
                        <td>@Html.DisplayFor(modelItem => item.ContactNo)</td>
                    </tr>
                }
            </table>
        </body>
        </html>

    y. Press F5 to run the MVC application.
    z. You will notice some @Html.DisplayFor calls. These are called HTML helpers; you can read more about them at http://www.w3schools.com/aspnet/mvc_htmlhelpers.asp

    That's all. You now have your first MVC4 Razor Engine web application.

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation, and as a part of this needed to describe some of the basic architecture options available for consideration. While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options, while trying to explain some of the strengths and weaknesses of those options, was a good place to start. These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.

    The diagram below shows a crude representation of the common implementation options to consider when designing a BizTalk environment.

    Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.

    Some of the options presented above make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.

    (A) Single Servers

    This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is, however, the cheapest and easiest to implement of the options available.

    Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.

    The common edition of BizTalk used in single-server implementations is the Standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the Enterprise edition of BizTalk.

    (B) Single BizTalk Server with Clustered SQL Servers

    This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience to the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the Standard edition, but to go beyond this would require the Enterprise edition. While this option offers improved resilience over option (A), it does still present a potential single point of failure at the BizTalk server.

    Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster.

    The common edition of BizTalk used in single-server implementations is the Standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the Enterprise edition of BizTalk.

    (C) Clustered BizTalk Servers with Clustered SQL Servers

    This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the Enterprise edition of the product. Clustering of two SQL servers is possible with the Standard edition, but to go beyond this would require the Enterprise edition.

    The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires.

    This might be seen as the middle-cost option, providing a good level of protection in the case of failure and a decent level of future-proofing, but at a higher cost than the single BizTalk server implementations.

    (D) Clustered BizTalk Servers with Clustered SQL Servers - with disaster recovery/service continuity

    This option is similar to that offered by (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the Enterprise edition of the product. Clustering of two SQL servers is possible with the Standard edition, but to go beyond this would require the Enterprise edition.

    As with (C), the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.

    In this scenario, however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation: making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved.

    This is obviously an expensive and complex option, but it does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • Automating the "Choose a digital certificate" dialog

    - by MoMo
    I am using WatiN (2.0.10.928) with C# and Visual Studio 2008 to test an SSL-secured website that requires a certificate. When you navigate to the homepage, a "Choose a digital certificate" dialog is displayed that requires you to select a valid certificate and click the OK button. I'm looking for a way to automate the certificate selection so that every time a new test or fixture is executed (and my browser restarts), I don't have to manually interfere with the automated test and select the certificate. I've tried using various WatiN dialog handler classes and even looked into using the Win32 API to automate this, but haven't had much luck. I finally found a solution, but it adds another dependency: a third-party library called AutoIt. Since this solution isn't ideal but does work and is the best I could find, I will post it and mark it as the answer, but I am still looking for an 'out of the box' WatiN solution that is more consistent with the rest of my code and test fixtures. Thanks for your responses!

    Read the article

  • How to count each digit in a range of integers?

    - by Carlos Gutiérrez
    Imagine you sell those metallic digits used to number houses, locker doors, hotel rooms, etc. You need to find how many of each digit to ship when your customer needs to number doors/houses:

    - 1 to 100
    - 51 to 300
    - 1 to 2,000 with zeros to the left

    The obvious solution is to do a loop from the first to the last number, convert the counter to a string with or without zeros to the left, extract each digit, and use it as an index to increment an array of 10 integers. I wonder if there is a better way to solve this, without having to loop through the entire integer range. Solutions in any language or pseudocode are welcome.

    Edit: Answers review

    John at CashCommons and Wayne Conrad comment that my current approach is good and fast enough. Let me use a silly analogy: if you were given the task of counting the squares on a chess board in less than 1 minute, you could finish the task by counting the squares one by one, but a better solution is to count the sides and do a multiplication, because you later may be asked to count the tiles in a building.

    Alex Reisner points to a very interesting mathematical law that, unfortunately, doesn't seem to be relevant to this problem.

    Andres suggests the same algorithm I'm using, but extracting digits with %10 operations instead of substrings.

    John at CashCommons and phord propose pre-calculating the digits required and storing them in a lookup table or, for raw speed, an array. This could be a good solution if we had an absolute, unmovable, set-in-stone maximum integer value. I've never seen one of those.

    High-Performance Mark and strainer computed the needed digits for various ranges. The result for one million seems to indicate there is a proportion, but the results for other numbers show different proportions.

    strainer found some formulas that may be used to count digits for numbers which are a power of ten.

    Robert Harvey had a very interesting experience posting the question at MathOverflow. One of the math guys wrote a solution using mathematical notation.

    Aaronaught developed and tested a solution using mathematics. After posting it he reviewed the formulas originated from Math Overflow and found a flaw in it (point to Stackoverflow :).

    noahlavine developed an algorithm and presented it in pseudocode.

    A new solution

    After reading all the answers, and doing some experiments, I found that for a range of integers from 1 to 10^n - 1:

    - For digits 1 to 9, n*10^(n-1) pieces are needed
    - For digit 0, if not using leading zeros, n*10^(n-1) - (10^n - 1)/9 are needed
    - For digit 0, if using leading zeros, n*10^(n-1) - n are needed

    The first formula was found by strainer (and probably by others), and I found the other two by trial and error (but they may be included in other answers).

    For example, if n = 6, the range is 1 to 999,999:

    - For digits 1 to 9 we need 6*10^5 = 600,000 of each one
    - For digit 0, without leading zeros, we need 6*10^5 - (10^6 - 1)/9 = 600,000 - 111,111 = 488,889
    - For digit 0, with leading zeros, we need 6*10^5 - 6 = 599,994

    These numbers can be checked using High-Performance Mark's results.

    Using these formulas, I improved the original algorithm. It still loops from the first to the last number in the range of integers but, if it finds a number which is a power of ten, it uses the formulas to add to the digit counts the quantity for a full range of 1 to 9 or 1 to 99 or 1 to 999, etc.
    Here's the algorithm in pseudocode:

        integer First, Last  // First and last number in the range
        integer Number       // Current number in the loop
        integer Power        // Power is the n in 10^n in the formulas
        integer Nines        // Nines is the result of 10^n - 1; 10^5 - 1 = 99999
        integer Prefix       // First digits in a number. For 14,200, prefix is 142
        array 0..9 Digits    // Will hold the count for all the digits

        FOR Number = First TO Last
          CALL TallyDigitsForOneNumber WITH Number, 1  // Tally the count of each digit
                                                       // in the number, increment by 1
          // Start of optimization. Comments are for Number = 1,000 and Last = 8,000.
          Power = Zeros at the end of Number           // For 1,000, Power = 3
          IF Power > 0                                 // The number ends in 0, 00, 000, etc.
            Nines = 10^Power - 1                       // Nines = 10^3 - 1 = 1000 - 1 = 999
            IF Number + Nines <= Last                  // If 1,000 + 999 < 8,000, add a full set
              Digits[0-9] += Power * 10^(Power-1)      // Add 3*10^(3-1) = 300 to digits 0 to 9
              Digits[0] -= Power                       // Adjust digit 0 (leading zeros formula)
              Prefix = First digits of Number          // For 1,000, prefix is 1
              CALL TallyDigitsForOneNumber WITH Prefix, Nines  // Tally the count of each digit
                                                               // in prefix, increment by 999
              Number += Nines                          // Advance the loop counter 999 cycles
            ENDIF
          ENDIF
          // End of optimization
        ENDFOR

        SUBROUTINE TallyDigitsForOneNumber PARAMS Number, Count
          REPEAT
            Digits[Number % 10] += Count
            Number = Number / 10
          UNTIL Number = 0

    For example, for the range 786 to 3,021, the counter will be incremented:

        By 1 from 786 to 790 (5 cycles)
        By 9 from 790 to 799 (1 cycle)
        By 1 from 799 to 800
        By 99 from 800 to 899
        By 1 from 899 to 900
        By 99 from 900 to 999
        By 1 from 999 to 1000
        By 999 from 1000 to 1999
        By 1 from 1999 to 2000
        By 999 from 2000 to 2999
        By 1 from 2999 to 3000
        By 1 from 3000 to 3010 (10 cycles)
        By 9 from 3010 to 3019 (1 cycle)
        By 1 from 3019 to 3021 (2 cycles)

        Total: 28 cycles. Without optimization: 2,235 cycles.

    Note that this algorithm solves the problem without leading zeros. To use it with leading zeros, I used a hack: if the range 700 to 1,000 with leading zeros is needed, use the algorithm for 10,700 to 11,000 and then subtract 1,000 - 700 = 300 from the count of digit 1.

    Benchmark and source code

    I tested the original approach, the same approach using %10, and the new solution for some large ranges, with these results:

        Original             104.78 seconds
        With %10              83.66
        With powers of ten     0.07

    A screenshot of the benchmark application:

    If you would like to see the full source code or run the benchmark, use these links:

    - Complete source code (in Clarion): http://sca.mx/ftp/countdigits.txt
    - Compilable project and win32 exe: http://sca.mx/ftp/countdigits.zip

    Accepted answer

    noahlavine's solution may be correct, but I just couldn't follow the pseudocode; I think there are some details missing or not completely explained. Aaronaught's solution seems to be correct, but the code is just too complex for my taste. I accepted strainer's answer, because his line of thought guided me to develop this new solution.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones (Blog | Twitter) - "What the Business Says Is Not What the Business Wants." Steve posed the question, "What issues have you had in interacting with the business to get your job done?"

    My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants, because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds.

    So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture:

    When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right.

    When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh.

    I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we're back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it's a painful project for everyone - not because of me, but because the approach just doesn't get to what the business wants in the most effective way.)

    By using a right-to-left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action, because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above.

    My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report, by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube.
    Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on, with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine-tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build.

    So right-to-left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that's how I get past what the business says to what the business wants.

    Read the article

  • In CAB, is a service its own module?

    - by David Anderson
    I'm learning the Composite Application Block and I've hit a roadblock regarding services. I have my shell application in its own solution, and of course a test module in its own solution (developed and tested completely independently of the shell solution). If I created a service named "Sql Service", would I need to put it in its own library so that both the shell and the module know the types? If that's the case, then for good practice, should I put the service project in the shell solution, or keep it external just like a module (in its own solution), even though it's not loaded as a module? Then, what about references? Should the shell reference this directly and then add the service? Or load it as a module and add the service? I have a lot of confusion about where I should create my services, and whether I should reference them or load them as modules.

    Read the article
