Search Results

Search found 8843 results on 354 pages for 'partition master'.


  • Windows 7 Extend C Volume to Unallocated Space

    - by user327777
    A while back I installed Ubuntu and later uninstalled it by (I think) deleting its partitions and restoring the Windows 7 boot loader. I am not that experienced with partitioning yet. As you can see here, there are two partitions that are now unallocated. The 9 GB one is a recovery partition or something that came with the computer. How can I extend my C: partition to use both of those? I do not want that much storage just sitting there wasted. Currently, when I right-click on C: and hit Extend, the wizard pops up but there is no available space to extend. http://i.imgur.com/VxEkdyR.png http://i.imgur.com/DdFZWX9.png Thanks everyone!

    Read the article

  • Boot custom linux up by pressing Lenovo OKR button?

    - by Semmu
    I have a Lenovo Y510p laptop. I'm a Linux user and use Windows only for gaming. The device had no OS when I bought it, and I also installed an SSD alongside the 1 TB hard drive. I would like to "hack" the One-Key-Recovery button, because I have no interest in its default behaviour (I don't need Windows recovery), but if I could boot a hidden, fail-safe Linux with it, that would be great. How could I achieve this? I tried to search for what the button does, but I only found some Windows installers that can magically create a partition for the recovery. I would like to override this behaviour completely and boot something else instead.

    Read the article

  • How to fix a Corrupted USB

    - by Help
    My USB stick has suddenly stopped working. It's a Busbi 4 GB. The drive used to be G:, and as soon as I plugged it in I would get a pop-up box showing that it was connected. Now, when I plug it in, it shows as I: and no pop-up box appears. It shows in My Computer as I:, and when I click to open it, it says "I: is not accessible. The disk structure is corrupted and unreadable." I have tried to change the drive letter back to G: but nothing happened (this was under Disk Management). Disk Management shows: Volume I:, Layout Simple, Type Basic, File system RAW, Status Healthy (Active, Primary Partition), Capacity 3.42 GB. I've tried right-clicking Properties, then the Tools tab, and clicking error checking (the option that will check the volume for errors). When I click "Check now", it comes up with "The disk check could not be performed because Windows cannot access the disk."

    Read the article

  • How can I push a git repository to a folder over SSH?

    - by Rich
    I have a folder called my-project inside which I've done git init, git commit -a, etc. Now I want to push it to an empty folder at /mnt/foo/bar on a remote server. How can I do this? I did try, based on what I'd read: cd my-project git remote add origin ssh://user@host/mnt/foo/bar/my-project.git git push origin master which didn't seem right (I'd assume source would come before destination) and it failed: fatal: '/mnt/boxee/git/midwinter-physiotherapy.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly I'd like this to work such that I don't have to access the remote host and manually init a git repository every time ... do I have to do that? Am I going down the right route at all? Thanks.
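    For reference, a minimal sketch of the usual fix (the host and paths are the placeholders from the question): git push cannot create the remote repository for you, so create an empty bare repository on the server first, then point your remote at it and push. The push order in the question is fine; it is git push <remote> <branch>, not source-then-destination.

    ```bash
    # One-time setup on the remote host: create a bare repository to receive pushes.
    ssh user@host 'git init --bare /mnt/foo/bar/my-project.git'

    # Back on the local machine, inside my-project:
    git remote add origin ssh://user@host/mnt/foo/bar/my-project.git   # or: git remote set-url origin ... if origin already exists
    git push -u origin master
    ```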

    Read the article

  • TrueCrypt Corrupted Files

    - by B. Knight
    Several months ago, I needed to reorganize my data across multiple external hard drives, with my laptop's primary hard drive as the go-between. My external hard drives are all encrypted with TrueCrypt. It appears to me that somehow, during the transfer of my files between the encrypted external drive and the unencrypted internal drive, the files were transferred "as-is" (in their encrypted state). The files range from very small to very large. It appears that this may have happened during one consecutive transfer session. Has anyone ever experienced this problem, and if so, were you able to fix it? Is there a way to recreate the encrypted partition, transfer the files, and then decrypt them to their usable state? Or can the files somehow be decrypted through other means? UPDATE: I am running Windows 7 (x64) HP now, but may have been running Ent. then. Toshiba laptop, 650 GB HDD / 4 GB RAM. Latest version of TC.

    Read the article

  • Windows 8: 100% disk active time, no actual data transferred

    - by fingerbangpalateclick
    Occasionally, like several times an hour, my hard drive will appear to lock up: Task Manager will show 100% active time with read and write speeds of 0. I can still switch between open windows, but anything that requires disk access will stall for around a minute until the hard disk starts working properly again. It happens at apparently random intervals, and only happens in Windows 8, not 7, nor Linux. It is probably not a problem with the disk itself: this is a relatively new hard drive, and S.M.A.R.T. is showing no errors. It only happens in Windows 8, not in any other OS that has used the same partition or different partitions. So, what is going on? How can I fix this? Note: this is a different problem than this one: "Extremely high disk activity without any real usage". My Task Manager would look similar, but Average Response Time, Read Speed, and Write Speed would all be 0.

    Read the article

  • Backing up to USB drives in 2008 R2

    - by jbbarnes
    I set up backups in 2008 R2 to back up to a USB drive, and the backup program formatted the drive so it doesn't appear with a drive letter. Rotating to the next of the four backup drives breaks the backups, because that drive still has an ordinary NTFS partition on it. I don't see a way to prepare each of my USB drives to be used for backups. What kind of formatting is the backup program expecting to find and write to, and how do I ensure my other USB drives can be rotated in? Thanks.

    Read the article

  • Rebuild mdadm RAID5 array with fewer disks

    - by drjeep
    I have a 4-disk RAID5 array, one disk of which is starting to fail according to smartd. However, since I'm using less than half the space on /dev/md0, I'd like to rebuild the array without the failing disk. The closest scenario I've been able to find online has been this post; however, it contains bits that don't apply to me (LVM volumes) and also doesn't explain how to go about resizing the partition after I'm done. Please note I have backups of the important data, but I'd like to avoid rebuilding the array from scratch if possible.
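    For reference, a heavily hedged sketch of the usual shrink-then-reshape order (device names, sizes and the ext4 filesystem below are assumptions for illustration only; take a fresh backup before any of it):

    ```bash
    # 1. Shrink the filesystem so it fits on a 3-disk array (ext4 assumed).
    umount /dev/md0
    e2fsck -f /dev/md0
    resize2fs /dev/md0 890G                        # placeholder size, below the 3-disk capacity

    # 2. Shrink the array's usable size to match before reshaping (value in KiB).
    mdadm --grow /dev/md0 --array-size=943718400   # placeholder: 900 GiB

    # 3. Reshape down to 3 active devices; the backup file protects the critical section.
    mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-reshape.bak

    # 4. When the reshape finishes, the surplus (failing) disk is left as a spare.
    mdadm /dev/md0 --remove /dev/sdd1              # --fail it first if mdadm refuses
    ```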

    Read the article

  • Can I move files from a laptop hard drive with a corrupt sector to a USB hard drive?

    - by Corey
    I have a hard drive that is on its way out and won't boot to Windows 7. The Windows partition takes up the whole disk. I thought I would try to recover some recent files that hadn't been backed up. Assuming the files are recoverable, how can I explore the drive that has the corrupt sector and transfer files to a USB hard drive? If it helps, the laptop is able to see the USB drive when choosing a boot order. Some searching led me to WinPE 3.0, part of the Windows Automated Installation Kit. Is that a viable method?
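    WinPE would work; another common route, sketched below with example device names (the failing Windows partition and the USB drive will differ on the actual machine), is to boot any Linux live USB, mount the NTFS partition read-only, and copy what is still readable:

    ```bash
    # From a Linux live session: mount the failing NTFS partition read-only and
    # the external USB drive read-write, then copy; unreadable files are reported and skipped.
    sudo mkdir -p /mnt/windows /mnt/usb
    sudo mount -o ro /dev/sda2 /mnt/windows     # example: the Windows partition
    sudo mount /dev/sdb1 /mnt/usb               # example: the external USB drive
    rsync -av /mnt/windows/Users/ /mnt/usb/recovered-users/
    # If the mount is refused because the volume is marked dirty, imaging the
    # disk with ddrescue and working from the image is the safer next step.
    ```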

    Read the article

  • Using windows 7 and fedora

    - by vedant1811
    I need to partition my hard disk for Windows and Fedora (root, swap, users). I thought of creating three primary partitions: two small ones (~5 GB each) for Win7 and Fedora, and one large (~700 GB) partition for common files (pictures, videos, documents, etc.). Please tell me which file systems to use in each case and how to set up the primary and extended partitions. I also want to know where, and on which file system, the Linux GRUB (my choice of OS chooser) should be installed. I have just bought a new Asus K53S and am using the Fedora installer's partitioner (Anaconda). Your help is greatly appreciated.

    Read the article

  • Recover snap server data

    - by Ugg
    Hi, I have a Snap Server 110. The machine powers on OK and the health check passes, but I am unable to connect: there is no response on the assigned IP and no way to reach the device via the Snap Server Manager. I believe the device is powering on but not loading the OS. I tried pulling the drive and hooking it up to a Windows PC via USB; using DiskInternals Linux Reader I am unable to access two of the partitions (one of which is the large data partition). There are three partitions on the drive and only one is accessible via Linux Reader. I am looking to recover the data off the drive. Can anyone suggest a DIY option, please?

    Read the article

  • GitHub updating repository?

    - by user1804933
    I am trying to set up GitHub on my server and have gotten to the point where I am running the command "git push -u origin master". However, a large file was detected and the following error was returned: remote: error: GH001: Large files detected. remote: error: Trace: 5520a70fd2eeaa2eafd7de049a590fb5 remote: error: See http://git.io/iEPt8g for more information. remote: error: File app/logs/dev.log is 2041.59 MB; this exceeds GitHub's file size limit of 100 MB. I ended up deleting that file and tried committing and pushing again, but I keep running into the same error. Any ideas on how to work around this?
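    A hedged note: deleting the file in a new commit is not enough, because the 2 GB log is still inside an earlier commit that the push tries to upload. One sketch with plain git follows (it rewrites history, so coordinate with anyone else using the repo; BFG Repo-Cleaner is a friendlier alternative for the same job):

    ```bash
    # Strip app/logs/dev.log from every commit on every branch and tag.
    git filter-branch --force --index-filter \
      'git rm --cached --ignore-unmatch app/logs/dev.log' \
      --prune-empty --tag-name-filter cat -- --all

    # Keep log files out of future commits, then push the rewritten history.
    echo "app/logs/" >> .gitignore
    git add .gitignore
    git commit -m "Ignore log directory"
    git push -u origin master --force
    ```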

    Read the article

  • How to force a drop of MSSQL Server database

    - by ng01
    I am trying to delete an MSSQL Server database; however, I am having no luck. I have tried multiple things, such as using: ALTER DATABASE my_database SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE my_database; GO I have also tried to right-click on it and delete it. This does not work; it tells me "Cannot drop database "ima_debts" because it is currently in use". The thing is, there is definitely no other user connected to it. In fact, I disabled TCP/IP for the instance and restarted it. Not even "Microsoft SQL Server Management Studio (Administrator)" is connected to it. I have made sure to log in to "master". Why is it telling me it is currently in use? Is it possible for me to delete perhaps a directory or something from the file system to get rid of this database? Any help would be appreciated. Thanks.
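    A hedged note: RESTRICTED_USER still admits db_owner and sysadmin sessions, and SSMS itself frequently holds one (an open query window or Object Explorer pane whose context is that database counts as "in use"). A sketch using SINGLE_USER instead; the instance name and Windows authentication here are assumptions, and the database name is taken from the error message:

    ```bash
    # Kick out every remaining session (including SSMS's own) and drop the database.
    sqlcmd -S localhost -E -d master -Q "ALTER DATABASE [ima_debts] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE [ima_debts];"
    ```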

    Read the article

  • The volume "filesystem root" has only 0 bytes disk space remaining?

    - by radek
    I installed 11.10 about two weeks ago and ran into some strange troubles recently. The installation was on a brand new laptop with a clean 128 GB SSD. I opted for encrypting the home directory; apart from that I accepted the defaults during the installation. There is no other OS on my laptop. I had circa 40 GB in use when (for the third time) I got to see this very unpleasant window: Twice the situation was pretty bad and the whole system slowed down considerably. After a reboot I could not log in to the graphical interface (with an error message about insufficient space) and had to remove some files from the command line first. The third time I still managed to quickly delete some files and that helped. My laptop is mainly a work environment: no torrents, no games, just two movies. The only media filling space are ~20 GB of pictures and a bunch of PDFs. I have been working mostly with PostgreSQL & PostGIS, GeoServer and QGIS recently. Although I have had lots of opportunities to test and practice my backups, I would be extremely grateful if somebody could point me to any potential solutions to this problem. My laptop was bought just before I installed Ubuntu, and it came without an OS. Could this be a hardware issue? Or is the encrypted home causing me headaches? Thanks for your help! Update: As suggested by @maniat1k, here is the current output of fdisk -l:
    WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Device Boot      Start         End      Blocks   Id  System
    /dev/sda1            1   312581807   156290903+  ee  GPT
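    A hedged sketch of the usual way to find where the space went (the commands are standard; the paths are just starting points). With an encrypted home, deleted-but-still-open files and a full inode table can both produce "0 bytes remaining" while du output looks reasonable:

    ```bash
    df -h / && df -i /                                   # bytes free vs. inodes free on the root filesystem
    sudo du -xh --max-depth=2 / | sort -rh | head -20    # largest directories, staying on /
    sudo lsof +L1 | head -20                             # deleted files still held open by running processes
    ```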

    Read the article

  • Ubuntu 12.04LTS mountall: Disconnected from Plymouth

    - by user169954
    I have Ubuntu 12.04 LTS 64-bit running on a dual-core i5 with 8 GB RAM. On startup I get the message "mountall: Disconnected from Plymouth [OK]" and the system looks stuck. However, if I go to tty1, I can log in and startx and everything seems to be fine except for being a bit sluggish. I can verify that my NFS mounts are OK, and that my swap is OK. Every time I reboot the system there is a _gdm_gdm_crash file in my /var/crash, which makes me think my problem is rooted in gdm, the X configs and/or the nvidia drivers. A bit of background in case it's relevant: 3 hours ago my desktop crashed. Following various 'tips' on the web I made a complete mess of my X server and X configuration files, and at one point I even had to recreate my swap partition. Anyway, after much struggle I managed to get to the state I mentioned above: I have a working system provided I always log in through tty1. What is this Plymouth anyway? Would it make a difference if I used gnome-wm instead of gdm or lightdm? (I mean to the startup, not to me :-) What bit of config do I change to tell startx to use gnome-wm instead of gdm or lightdm? Thank you in advance.
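    A hedged note: gdm and lightdm are display managers (the graphical login started at boot), while gnome-wm is a window manager launched inside a session, so they are not interchangeable in that last question. Since the crash file points at gdm, one common approach (package names assume a stock 12.04 install) is to reinstall and reselect the display manager rather than living in tty1 plus startx:

    ```bash
    # Reinstall the display manager packages and pick which one runs at boot.
    sudo apt-get install --reinstall lightdm gdm
    sudo dpkg-reconfigure lightdm            # prompts for the default display manager
    cat /etc/X11/default-display-manager     # confirm which one will start on boot
    sudo service lightdm restart             # or: sudo service gdm restart
    ```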

    Read the article

  • SQL SERVER – Generate Report for Index Physical Statistics – SSMS

    - by pinaldave
    A few days ago, I wrote about SQL SERVER – Out of the Box – Activity and Performance Reports from SSMS (Link). A user asked me whether we can use similar reports to get details about indexes. Yes, it is possible. Similar reports are available at the database level, just like those available at the server instance level. You can right-click on a database name and click Reports. Under Standard Reports, you will find the following reports:
    Disk Usage
    Disk Usage by Top Tables
    Disk Usage by Table
    Disk Usage by Partition
    Backup and Restore Events
    All Transactions
    All Blocking Transactions
    Top Transactions by Age
    Top Transactions by Blocked Transactions Count
    Top Transactions by Locks Count
    Resource Locking Statistics by Objects
    Object Execute Statistics
    Database Consistency History
    Index Usage Statistics
    Index Physical Statistics
    Schema Change History
    User Statistics
    Select the report named Index Physical Statistics. Once clicked, a report containing all the index names, along with other index-related information such as the index type and the number of partitions, will be visible. One column that caught my interest was Operation Recommended. In some places it suggested that an index needs to be rebuilt. It is also possible to click and expand the partitions column and see additional details about the index as well. DBAs and developers who just want to get an idea of their indexes and their physical statistics can use this tool. Note: Please note that I will not rebuild my indexes just because this report recommends it. There are many other parameters you need to consider before rebuilding indexes. However, this tool gives you accurate stats for your indexes, and the report can be exported right away to Excel or PDF by right-clicking on it. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL Utility, T SQL, Technology
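    For anyone who prefers to query this directly, the report is essentially a front end over the sys.dm_db_index_physical_stats DMV, so the same figures can be pulled with something like the following hedged sketch (instance and database names are placeholders, and Windows authentication is assumed):

    ```bash
    sqlcmd -S localhost -E -d MyDatabase -Q "
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.index_type_desc,
           ips.partition_number,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;"
    ```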

    Read the article

  • Employee Info Starter Kit - Visual Studio 2010 and .NET 4.0 Version (4.0.0) Available

    - by joycsharp
    Employee Info Starter Kit is an ASP.NET-based web application with very simple user requirements, where we can create, read, update and delete (CRUD) the employee info of a company. Based on just a database table, it explores and solves all the major problems in the web development architectural space. This open source starter kit extensively uses major features available in the latest Visual Studio, ASP.NET and SQL Server to make robust, scalable, secure and maintainable web applications quickly and easily. Since its first release, this starter kit has achieved huge popularity in the web developer community, with 140,000+ downloads from the project web site. Visual Studio 2010 and .NET 4.0 came up with lots of exciting features to make software developers' lives easier. A new version (v4.0.0) of Employee Info Starter Kit is now available in both MSDN Code Gallery and CodePlex. Check out the latest version of this starter kit to enjoy the cool features available in Visual Studio 2010 and .NET 4.0. [ Release Notes ] Architectural Overview: Simple 2-layer architecture (user interface and data access layer) with 1 optional cache layer; ASP.NET Web Form based user interface; Custom Entity Data Container implemented (with primitive C# types for data fields); Active Record Design Pattern based Data Access Layer, implemented in C# and Entity Framework 4.0; SQL Server Stored Procedure to perform the actual CRUD operations; Standard infrastructure (architecture, helper utility) for automated integration (bottom-up manner) and unit testing. Technology Utilized: Programming Languages/Scripts: Browser side: JavaScript; Web server side: C# 4.0; Database server side: T-SQL. .NET Framework Components: .NET 4.0 Entity Framework, .NET 4.0 Optional/Named Parameters, .NET 4.0 Tuple, .NET 3.0+ Extension Method, .NET 3.0+ Lambda Expressions, .NET 3.0+ Anonymous Type, .NET 3.0+ Query Expressions, .NET 3.0+ Automatically Implemented Properties, .NET 3.0+ LINQ, .NET 2.0+ Partial Classes, .NET 2.0+ Generic Type, .NET 2.0+ Nullable Type. ASP.NET: ASP.NET 3.5+ List View (TBD), ASP.NET 3.5+ Data Pager (TBD), ASP.NET 2.0+ Grid View, ASP.NET 2.0+ Form View, ASP.NET 2.0+ Skin, ASP.NET 2.0+ Theme, ASP.NET 2.0+ Master Page, ASP.NET 2.0+ Object Data Source, ASP.NET 1.0+ Role Based Security. Visual Studio Features: Visual Studio 2010 CodedUI Test, Visual Studio 2010 Layer Diagram, Visual Studio 2010 Sequence Diagram, Visual Studio 2010 Directed Graph, Visual Studio 2005+ Database Unit Test, Visual Studio 2005+ Unit Test, Visual Studio 2005+ Web Test, Visual Studio 2005+ Load Test. SQL Server Features: SQL Server 2005 Stored Procedure, SQL Server 2005 Xml type, SQL Server 2005 Paging support.

    Read the article

  • Installation stuck on "Installation Type" screen

    - by Andrew Latham
    I am trying to install Ubuntu 11.10 with Windows 7 from a CD. I am using an HP Pavilion dm4. I've never used Ubuntu (or any Linux) before. Everything goes alright until I get to the "Installation Type" screen. Instead of giving me options, it just has a blank menu, and all the buttons are disabled. When I click "Continue", it gives me an error saying that it can't find the root or something like that. The trial version works fine, but I can't actually install it. Everything on the trial version is really slow, presumably because everything is on the CD or the Windows partition. I did some research, but the only post I could find was http://ubuntuforums.org/showthread.php?t=1870478 Where the only advice is to format the entire drive, which I'm not willing to do. Any suggestions? I'm downloading 10.04 right now and I'm going to try with that instead. EDIT: 10.04 didn't work either. I got to the partitioning screen and got the same problem. I read some more forums, loaded up 11.10 trial from the disk, opened the Terminal and typed sudo apt-get remove dmraid and then y. Then I was actually able to see something on the "Installation type" page: "Erase disk and install Ubuntu" or "Something else". Which is weird, since Windows 7 should be installed. When I click Something Else, I get: /dev/sda /dev/sdb /dev/sdb1 (ntfs) (208 MB) (69 MB used) /dev/sdb2 (ntfs) (477542 MB) (unknown used) /dev/sdb3 (ntfs) (18085 MB) (16094 MB used) /dev/sdb4 (fat32) (4265 MB) (3084 MB used) I have no idea what any of this means. Also, my device for boot loader installation changed from /dev/sda to /dev/sda ATA SAMSUNG MZMPA032 (32.0 GB)

    Read the article

  • Real Excel Templates I

    - by Tim Dexter
    As promised, I'm starting to document the new Excel templates that I teased you all with a few weeks back. Leslie is buried in 11g documentation and will not get to officially documenting the templates for a while. I'll do my best to be professional and not ramble on about this and that, although the weather here has finally turned and it's 'scorchio' here in Colorado today. Maybe our stand of Aspen will finally come into leaf ... but I digress.
    Preamble
    These templates are not actually that new; I helped in a small way to develop them a few years back with Excel 'meistress' Shirley for a company that was trying to use the Report Manager (RR) Excel FSG outputs under EBS 12. The functionality they needed was just not there in the RR FSG templates; those templates are actually XSL that is created from the RR Excel template builder and fed to BIP for processing. Think of Excel from our RTF templates and you'll be there, i.e. not really Excel but HTML masquerading as Excel. Although still under controlled release in EBS, they have now made their way to the standalone release and are willing to share their Excel goodness. You get everything you have with the Excel Analyzer templates plus so much more. Therein lies a question: what will happen to the Analyzer templates? My understanding is that both will come together into a single Excel template format some time in the post-11g release world. The new XLSX format for Excel 2007/10 is also in the mix, so watch this space. What more do these templates offer? Well, you can structure data in the Excel output. Similar to RTF templates, you can create sheets of data that have master-detail relationships. Although the Analyzer templates can do this, you have to get into macros, whereas BIP will do this all for you. You can also use native XSL functions in your data to manipulate it prior to rendering. BIP functions are not currently supported. The most impressive, for me at least, is the sheet 'bursting': you can split your hierarchical data across multiple sheets and dynamically name those sheets. Finally, you of course still get all the native Excel functionality.
    Pre-reqs
    You must be on 10.1.3.4.1 plus the latest rollup patch, 9546699. You can patch up a BIP instance running with OBIEE, no problem. You need Excel 2000 or above to build the templates. Some patience: there is no Excel template builder for these new templates, so it's all going to have to be done by hand. It's not that tough but it can get a little 'fiddly'. You cannot test the template from Excel; it has to be deployed and then run.
    Limitations
    The new templates are definitely superior to the Analyzer templates, but there are a few limitations. Re-grouping is not supported: you can only follow a data hierarchy, not bend it to your will, unless you want to get into macros. No support for BIP functions: the templates support native XSL functions only. No template builder.
    Getting Started
    The templates make use of named cells and groups of cells to allow BIP to find the insertion point for data points. They also use a hidden sheet to store calculation mappings from named cells to XML data elements. To start with, in the great BIP tradition, we need some sample XML data. Because I wanted to show the master-detail output, we need some hierarchical data. If you have not yet gotten into the data templates, now is a good time; I wrote a post a while back starting from the simple to the more complex. They generate ideal data sets for these templates.
I'm working with the following data set:
    <EMPLOYEES>
      <LIST_G_DEPT>
        <G_DEPT>
          <DEPARTMENT_ID>10</DEPARTMENT_ID>
          <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
          <LIST_G_EMP>
            <G_EMP>
              <EMPLOYEE_ID>200</EMPLOYEE_ID>
              <EMP_NAME>Jennifer Whalen</EMP_NAME>
              <EMAIL>JWHALEN</EMAIL>
              <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
              <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
              <SALARY>4400</SALARY>
            </G_EMP>
          </LIST_G_EMP>
          <TOTAL_EMPS>1</TOTAL_EMPS>
          <TOTAL_SALARY>4400</TOTAL_SALARY>
          <AVG_SALARY>4400</AVG_SALARY>
          <MAX_SALARY>4400</MAX_SALARY>
          <MIN_SALARY>4400</MIN_SALARY>
        </G_DEPT>
        ...
      </LIST_G_DEPT>
    </EMPLOYEES>
Simple enough to follow and bread-and-butter stuff for an RTF template.
Building the Template
For an Excel template we need to start by thinking about how we want to render the data. Come up with a sample output in Excel. It's all dummy data, nothing marked up yet, with one row of data for each level. I have the department name and then a repeating row for the employees. You can apply Excel formatting to the layout. The total is going to be derived from a data element. We'll get to Excel functions later.
Marking Up Cells
Next we need to start marking up the cells with custom names to map them to data elements. The cell names need to follow a specific format: for data grouping, XDO_GROUP_?group_name?; for data elements, XDO_?element_name?. Notice the question mark delimiter; the group_name and element_name are case sensitive. The next step is to find how to name cells; the easiest method is to highlight the cell and then type in the name. You can also use the Name Manager dialog. I use 2007 and it's available on the ribbon under the Formulas section. Go through the process of naming all the cells for the element values you have. Using my data set from above, you should end up with something like this in your 'Name Manager' dialog. You can update any mistakes you might have made through this dialog.
Creating Groups
In the image above you can see there are a couple of named group cells. To create these, it's a simple case of highlighting the cells that make up the group and then naming them. For the EMP group, highlight the employee row and then type in the name XDO_GROUP_?G_EMP?. Notice the 10,000 total is outside of the G_EMP group. It's actually named XDO_?TOTAL_SALARY?, a query-calculated value. For the department group, we need to include the department name cell and the sub EMP grouping and name it XDO_GROUP_?G_DEPT?. Notice the 10,000 total is included in the G_DEPT group; this will ensure it repeats at the department level. Lastly, we do need to include a special sheet in the workbook. We will not have anything meaningful in there for now, but it needs to be present. Create a new sheet and name it XDO_METADATA. The name is important as the BIP rendering engine will be looking for it. For our current example we do not need anything other than the required stuff in our XDO_METADATA sheet, but it must be present. Easy enough to hide it. Here's what I have: the only cell that is important is the 'Data Constraints:' cell. The rest is optional. To save curious users getting distracted, hide the metadata sheet.
Deploying & Running Templates
We should now have a usable Excel template. Loading it into a report is easy enough using the browser UI, just like an RTF template. Set the template type to Excel. You will now be able to run the report and hopefully get something like this. You will not get the red highlighting; that's just some conditional formatting I added to the template using Excel functionality.
Your dates are probably going to look raw too. I got around this for now using an Excel function on the cell: =--REPLACE(SUBSTITUTE(E8,"T"," "),LEN(E8)-6,6,"") Google to the rescue on that one. Try some other stuff out. To avoid constantly loading the template through the UI: if you have BIP running locally or you can access the reports repository, once you have loaded the template the first time, just save the template directly into the report folder. I have put together a sample report using a sample data set, available here. Just drop the XML data file, EmpbyDeptExcelData.xml, into the 'demo files' folder and you should be good to go. That's the basics; next we'll start using some XSL functions in the template and move on to the 'bursting' across sheets.

    Read the article

  • Can't start mysql - mysql respawning too fast, stopped

    - by Tom
    Today I did a fresh install of ubuntu 12.04 and went about setting up my local development environment. I installed mysql and edited /etc/mysql/my.cnf to optimise InnoDB but when I try to restart mysql, it fails with a error: [20:53][tom@Pochama:/var/www/website] (master) $ sudo service mysql restart start: Job failed to start The syslog reveals there is a problem with the init script: > tail -f /var/log/syslog Apr 28 21:17:46 Pochama kernel: [11840.884524] type=1400 audit(1335644266.033:184): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=760 comm="apparmor_parser" Apr 28 21:17:47 Pochama kernel: [11842.603773] init: mysql main process (764) terminated with status 7 Apr 28 21:17:47 Pochama kernel: [11842.603841] init: mysql main process ended, respawning Apr 28 21:17:48 Pochama kernel: [11842.932462] init: mysql post-start process (765) terminated with status 1 Apr 28 21:17:48 Pochama kernel: [11842.950393] type=1400 audit(1335644268.101:185): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=811 comm="apparmor_parser" Apr 28 21:17:49 Pochama kernel: [11844.656598] init: mysql main process (815) terminated with status 7 Apr 28 21:17:49 Pochama kernel: [11844.656665] init: mysql main process ended, respawning Apr 28 21:17:50 Pochama kernel: [11845.004435] init: mysql post-start process (816) terminated with status 1 Apr 28 21:17:50 Pochama kernel: [11845.021777] type=1400 audit(1335644270.173:186): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=865 comm="apparmor_parser" Apr 28 21:17:51 Pochama kernel: [11846.721982] init: mysql main process (871) terminated with status 7 Apr 28 21:17:51 Pochama kernel: [11846.722001] init: mysql respawning too fast, stopped Any ideas? Things I tried already: I googled and found a Ubuntu bug with apparmor (https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/970366), I changed apparmor from enforce mode to complain mode: sudo apt-get install apparmor-utils sudo aa-complain /usr/sbin/mysqld sudo /etc/init.d/apparmor reload but it didn't help. I still can't start mysql. I also thought the issue may be because the InnoDB logfiles were a different size than mysql was expecting. I removed the innodb log files before restarting using: sudo mv /var/lib/mysql/ib_logfile* /tmp. No luck though. Workaround: I re-installed 12.04, made sure not to touch /etc/mysql/my.cnf in any way. Mysql is working so I can get on with what I need to do. But I will need to edit it at some point - Hopefully I'll have figured out a solution, or this question will have been answered by that point...
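    One hedged place to look next (paths assume the stock Ubuntu MySQL packaging): upstart only reports that mysqld exited with status 7; the actual reason, often an InnoDB setting it cannot honour or a mistyped option in the edited my.cnf, normally lands in MySQL's own error log rather than syslog.

    ```bash
    sudo tail -n 50 /var/log/mysql/error.log       # the real startup failure is usually recorded here

    # A my.cnf typo, or an option this MySQL build doesn't know, also makes mysqld
    # exit immediately; this should parse the options and exit without starting the server.
    sudo -u mysql /usr/sbin/mysqld --verbose --help > /dev/null
    ```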

    Read the article

  • Silverlight 4 Training Kit

    - by ScottGu
    We recently released a new free Silverlight 4 Training Kit that walks you through building business applications with Silverlight 4.  You can browse the training kit online or alternatively download an entire offline version of the training kit.  The training material is structured on teaching how to use the new Silverlight 4 features to build an end to end business application. The training kit includes 8 modules, 25 videos, and several hands on labs. Below is a breakdown and links to all of the content. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Module 1: Introduction Click here to watch this module. In this video John Papa and Ian Griffiths discuss the key areas that the Building Business Applications with Silverlight 4 course focuses on. This module is the overview of the course and covers many key scenarios that are faced when building business applications, and how Silverlight can help address them. Module 2: WCF RIA Services Click here to explore this module. In this lab, you will create a web site for managing conferences that will be the basis for the other labs in this course. Don’t worry if you don’t complete a particular lab in the series – all lab manual instructions are accompanied by completed solutions, so you can either build your own solution from start to finish, or dive straight in at any point using the solutions provided as a starting point. In this lab you will learn how to set up WCF RIA Services, create bindings to the domain context, filter using the domain data source, and create domain service queries. Online Link Download Source Download Lab Document Videos Module 2.1 - WCF RIA Services Ian Griffiths sets up the Entity Framework and WCF RIA Services for the sample Event Manager application for the course. He covers how to set up the services, how the Domain Services work and the role that the DomainContext plays in the sample application. He also reviews the metadata classes and integrating the navigation framework. Module 2.2 – Using WCF RIA Services to Edit Entities Ian Griffiths discusses how he adds the ability to edit and create individual entities with the features built into WCF RIA Services into the sample Event Manager application. He covers data binding fundamentals, IQueryable, LINQ, the DomainDataSource, navigation to a single entity using the navigation framework, and how to use the Visual Studio designer to do much of the work . Module 2.3 – Showing Master/Details Records Using WCF RIA Services Ian Griffiths reviews how to display master/detail records for the sample Event Manager application using WCF RIA Services. He covers how to use the Include attribute to indicate which elements to serialize back to the client. Ian also demonstrates how to use the Data Sources window in the designer to add and bind controls to specific data elements. He wraps up by showing how to create custom services to the Domain Services. Module 3 – Authentication, Validation, MVVM, Commands, Implicit Styles and RichTextBox Click here to visit this module. This lab demonstrates how to build a login screen, integrate ASP.NET authentication, and perform validation on data elements. Model-View-ViewModel (MVVM) is introduced and used in this lab as a pattern to help separate the UI and business logic. You will also learn how to use implicit styling and the new RichTextBox control. 
Online Link Download Source Download Lab Document Videos Module 3.1 – Authentication Ian Griffiths covers how to integrate a login screen and authentication into the sample Event Manager application. Ian shows how to use the ASP.NET authentication and integrate it into WCF RIA Services and the Silverlight presentation layer. Module 3.2 – MVVM Ian Griffiths covers how to Model-View-ViewModel (MVVM) patterns into the sample Event Manager application. He discusses why MVVM exists, what separated presentation means, and why it is important. He shows how to connect the View to the ViewModel, why data binding is important in this symbiosis, and how everything fits together in the overall application. Module 3.3 –Validation Ian Griffiths discusses how validation of user input can be integrated into the sample Event Manager application. He demonstrates how to use the DataAnnotations, the INotifyDataErrorInfo interface, binding markup extensions, and WCF RIA Services in concert to achieve great validation in the sample application. He discusses how this technique allows for property level validation, entity level validation, and asynchronous server side validation. Module 3.4 – Implicit Styles Ian Griffiths discusses how why implicit styles are important and how they can be integrated into the sample Event Manager application. He shows how implicit styles defined in a resource dictionary can be applied to all elements of a particular kind throughout the application. Module 3.5 – RichTextBox Ian Griffiths discusses how the new RichTextBox control and it can be integrated into the sample Event Manager application. He demonstrates how the RichTextBox can provide editing for the event information and how it can display the rich text for selection and copying. Module 4 – User Profiles, Drop Targets, Webcam and Clipboard Click here to visit this module. This lab builds new features into the sample application to take the user's photo. It teaches you how to use the webcam to capture an image, use Silverlight as a drop target, and take advantage of programmatic access to the clipboard. Link Download Source Download Lab Document Videos Module 4.1 – Webcam Ian Griffiths demonstrates how the webcam adds value to the sample Event Manager application by capturing an image of the attendee. He discusses the VideoCaptureDevice, the CaptureDviceConfiguration, and the CaptureSource classes and how they allow audio and video to be captured so you can grab an image from the capture device and save it. Module 4.2 - Drag and Drop in Silverlight Ian Griffiths demonstrates how to capture and handle the Drop in the sample Event Manager application so the user can drag a photo from a file and drop it into the application. Ian reviews the AllowDrop property, the Drop event, how to access the file that can be dropped, and the other drag related events. He also reviews how to make this work across browsers and the challenges for this. Module 5 – Schedule Planner and Right Mouse Click Click here to visit this module. This lab builds on the application to allow grouping in the DataGrid and implement right mouse click features to add context menu support. Link Download Source Download Lab Document Videos Module 5.1 – Grouping and Binding Ian Griffiths demonstrates how to use the grouping features for data binding in the DataGrid and how it applies to the sample Event Manager application. 
He reviews the role of the CollectionViewSource in grouping, customizing the templates for headers, and how to work with grouping with ItemsControls. Module 5.2 – Layout Visual States Ian Griffiths demonstrates how to use the Fluid UI animation support for visual states in the ListBox control DataGrid and how it applies to the sample Event Manager application. He reviews the 3 visual states of BeforeLoaded, AfterLoaded, and BeforeUnloaded. Module 5.3 – Right Mouse Click Ian Griffiths demonstrates how to add support for handling the right mouse button click event to display a context menu for the Event Manager application. He demonstrates how to handle the event, show a custom context menu control, and integrate it into the scheduling portion of the application. Module 6 – Printing the Schedule Click here to visit this module. This lab teaches how to use the new printing features in Silverlight 4. The lab walks through the PrintDocument class and the ViewBox control, while showing how to print multiple pages of content using them. Link Download Source Download Lab Document Videos Module 6.1 – Printing and the Viewbox Ian Griffiths demonstrates how to add the ability to print the schedule to the sample Event Manager application. He walks through the importance of the PrintDocument class and its members. He also shows how to handle printing the visual tree and how the ViewBox control can help. Module 6.2 – Multi Page Printing Ian Griffiths expands on his printing discussion by showing how to handle printing multiple pages of content for the sample Event Manager application. He shows how to paginate the content and points out various tips to keep in mind when determining the printable area. Module 7 – Running the Event Dashboard Out of Browser Click here to visit this module. This lab builds a dashboard for the sample application while explaining the fundamentals of the out of browser features, how to handle authentication, displaying notifications (toasts), and how to use native integration to use COM Interop with Silverlight. Link Download Source Download Lab Document Videos Module 7.1 – Out of Browser Ian Griffiths discusses the role of an Out of Browser application for administrators to manage the events and users in the sample Event Manager application. He discusses several reasons why out of browser applications may better suit your needs including custom chrome, toasts, window placement, cross domain access, and file access. He demonstrates the basic technique to take your application and make it work out of browser using the tools. Module 7.2 – NotificationWindow (Toasts) for Elevated Trust Out of Browser Applications Ian Griffiths discusses the how toasts can be used in the sample Event Manager application to show information that may require the user's attention. Ian covers how to create a toast using the NotificationWindow, security implications, and how to make the toast appear as needed. Module 7.3 – Out of Browser Window Placement Ian Griffiths discusses the how to manage the window positioning when building an out of browser application, handling the windows state, and controlling and handling activation of the window. Module 7.4 – Out of Browser Elevated Trust Application Overview Ian Griffiths discusses the implications of creating trusted out of browser application for the Event Manager sample application. He reviews why you might want to use elevated trust, what features is opens to you, and how to take advantage of them. 
Topics Ian covers include the dynamic keyword in C# 4, the AutomationFactory class, the API to check if you are in a trusted application, and communicating with Excel. Module 8 – Advanced Out of Browser and MEF Click here to visit this module. This hands-on lab walks through the creation of a trusted out of browser application and the new functionality that comes with that. You will learn to use COM Automation, handle the window closing event, set custom window chrome, digitally sign your Silverlight out of browser trusted application, create a silent install option, and take advantage of MEF. Link Download Source Download Lab Document Videos Module 8.1 – Custom Window Chrome for Elevated Trust Out of Browser Applications Ian Griffiths discusses how to replace the standard operating system window chrome with customized chrome for an elevated trusted out of browser application. He covers how it is important to handle close, resize, minimize, and maximize events. Ian mentions that the tooling was not ready when he shot this video, but the good news is that the tooling now supports setting the custom chrome directly from the property page for the Silverlight application. Module 8.2 – Window Closing Event for Out of Browser Applications Ian Griffiths discusses the WindowClosing event and how to handle and optionally cancel the event. Module 8.3 – Silent Install of Out of Browser Applications Ian Griffiths discusses how to use the SLLauncher executable to install an out of browser application. He discusses the optional command line switches that can be set including how the emulate switch can help you emulate the install process. Ian also shows how to setup a shortcut for the application and tell the application where it should look for future updates online. Module 8.4 – Digitally Signing Out of Browser Application Ian Griffiths discusses how and why to digitally sign an out of browser application using the signtool program. He covers what trusted certificates are, the implications of signing (or not signing), and the effect on the user experience. Module 8.5 – The Value of MEF with Silverlight Ian Griffiths discusses what MEF is, how your application can benefit from it, and the fundamental features it puts at your disposal. He covers the 3 step import, export and compose process as well as how to dynamically import XAP files using MEF. Summary As you can probably tell from the long list above – this series contains a ton of great content, and hopefully provides a nice end-to-end walkthrough that helps explain how to take advantage of Silverlight 4 (and all its new features).  Hope this helps, Scott

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively “walking” the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, “dividing” the total work in each recursive step, and “conquering” the work when the remaining work is small enough to be solved easily. Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization. This is apparent from a common sense standpoint.  Since we’re dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data. Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed: Parallel.Invoke( () => { Console.WriteLine("Action 1 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 2 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); }, () => { Console.WriteLine("Action 3 executing in thread {0}", Thread.CurrentThread.ManagedThreadId); } ); Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively. For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let’s look at this simple, sequential implementation of quicksort: public static void QuickSort<T>(T[] array) where T : IComparable<T> { QuickSortInternal(array, 0, array.Length - 1); } private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T> { if (left >= right) { return; } SwapElements(array, left, (left + right) / 2); int last = left; for (int current = left + 1; current <= right; ++current) { if (array[current].CompareTo(array[left]) < 0) { ++last; SwapElements(array, last, current); } } SwapElements(array, left, last); QuickSortInternal(array, left, last - 1); QuickSortInternal(array, last + 1, right); } static void SwapElements<T>(T[] array, int i, int j) { T temp = array[i]; array[i] = array[j]; array[j] = temp; } Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework’s sort routine is slightly faster).  On my system, for example, I can use framework’s sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average. Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to: // Code above is unchanged... SwapElements(array, left, last); Parallel.Invoke( () => QuickSortInternal(array, left, last - 1), () => QuickSortInternal(array, last + 1, right) ); } This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We’re using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time. This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we’re “over-parallelizing” this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system. This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our “deeper” recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we’re only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it’s smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right-left, and if it’s under a threshold, call the methods directly instead of using Parallel.Invoke. 
The second approach is to track how “deep” in the “tree” we are currently at, and if we are below some number of levels, stop parallelizing.  This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly. This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to: // Code above is unchanged... SwapElements(array, left, last); if (maxDepth < 1) { QuickSortInternal(array, left, last - 1, maxDepth); QuickSortInternal(array, last + 1, right, maxDepth); } else { --maxDepth; Parallel.Invoke( () => QuickSortInternal(array, left, last - 1, maxDepth), () => QuickSortInternal(array, last + 1, right, maxDepth)); } We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core. With this final change, my timings are much better.  On average, I get the following timings: Framework via Array.Sort: 7.3 seconds Serial Quicksort Implementation: 9.3 seconds Naive Parallel Implementation: 14 seconds Parallel Implementation Restricting Depth: 4.7 seconds Finally, we are now faster than the framework’s Array.Sort implementation.

    Read the article

  • PDC and Tech-Ed Europe Slides and Code

    - by Stephen Walther
    I spent close to three weeks on the road giving talks at Tech-Ed Europe (Berlin), PDC (Los Angeles), and the Los Angeles Code Camp (Los Angeles). I got to talk about two topics that I am very passionate about: ASP.NET MVC and Ajax. Thanks everyone for coming to all my talks! At PDC, I announced all of the new features of our ASP.NET Ajax Library. In particular, I made five big announcements: ASP.NET Ajax Library Beta Released – You can download the beta from Ajax.CodePlex.com ASP.NET Ajax Library includes the AJAX Control Toolkit – You can use the Ajax Control Toolkit with ASP.NET MVC. ASP.NET Ajax Library being contributed to the CodePlex Foundation – ASP.NET Ajax is the founding project for the CodePlex Foundation (see CodePlex.org) ASP.NET Ajax Library is receiving full product support – Complain to Microsoft Customer Service at midnight on Christmas ASP.NET Ajax Library supports jQuery integration – Use (almost) all of the Ajax Control Toolkit controls in jQuery For more details on the Ajax announcements, see James Senior’s blog entry on the Ajax announcements at: http://jamessenior.com/post/News-on-the-ASPNET-Ajax-Library.aspx In my MVC talks, I discussed the new features being introduced with ASP.NET MVC 2. Here are three of my favorite new features: Client Validation – Client validation done the right way. Do your validation in your model and let the validation bubble up to JavaScript code automatically. Areas – Divide your ASP.NET MVC application into sub-applications. Great for managing both medium and large projects. RenderAction() – Finally, a way to add content to master pages and multiple pages without doing anything strange or twisted. There are demos of all of these features in the MVC downloads below. Here are the power point and code from all of the talks: PDC – Introducing the New ASP.NET Ajax Library PDC – ASP.NET MVC: The New Stuff Tech-Ed Europe - What's New in Microsoft ASP.NET Model-View-Controller Tech-Ed Europe - Microsoft ASP.NET AJAX: Taking AJAX to the Next Level

    Read the article

  • How about a new platform for your next API&hellip; a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx Say what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (and potentially driving A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use. Enter Umbraco Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete). 1. Create Configuration Document Type In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type: This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource. Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.

    2. Define the response template

    Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering:

        @inherits Umbraco.Web.Mvc.UmbracoTemplatePage
        @using Newtonsoft.Json
        @{
            Layout = null;
        }
        @if (UmbracoContext.HttpContext.Request.Headers["accept"] != null
             && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json")
        {
            Response.ContentType = "application/json";
            @Html.Raw(JsonConvert.SerializeObject(new
            {
                cacheLifespan = CurrentPage.cacheLifespan,
                bannerImageUrl = CurrentPage.bannerImage,
                nextReleaseDate = CurrentPage.nextReleaseDate
            }))
        }
        else
        {
            <h1>App configuration</h1>
            <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p>
            <p>Banner Image: </p>
            <img src="@CurrentPage.bannerImage">
            <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p>
        }

    That’s a rough-and-ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.

    3. Create the content

    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.

    4. Access the resource

    Now if you open that URL in the browser, you’ll see the HTML version rendered, complete with the image and formatted date.
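    That’s the view for people; an application wanting the JSON rendering just asks for the same URL with an Accept header of application/json. Here’s a minimal sketch (not from the original post), assuming the example host name used above and the AppConfig class sketched earlier, with no error handling or local caching:

        using System;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Threading.Tasks;
        using Newtonsoft.Json;
        using MyApp.Configuration; // the AppConfig model sketched earlier

        public static class ConfigClient
        {
            // Hypothetical consumer: fetches the published config resource as JSON
            // and deserializes it into the AppConfig model.
            public static async Task<AppConfig> GetConfigAsync(string appName)
            {
                using (var client = new HttpClient { BaseAddress = new Uri("http://static.x.y.z.com/") })
                {
                    // Ask for the JSON rendering rather than the HTML view
                    client.DefaultRequestHeaders.Accept.Add(
                        new MediaTypeWithQualityHeaderValue("application/json"));

                    var json = await client.GetStringAsync("config/" + appName);
                    return JsonConvert.DeserializeObject<AppConfig>(json);
                }
            }
        }

    In a real app you’d honour the cacheLifespan value that comes back and cache the result locally, rather than calling the API on every read.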
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept: application/json request header, you get the same values rendered as JSON. That’s the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.

    5. The wider landscape

    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests – but doing it out-of-band so clients aren’t impacted – you can put a very thin API layer on top of Umbraco and cache the CMS responses in that layer (there’s a sketch of such a passthrough at the end of this post). The API does a passthrough to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support 1000s of requests per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). The same architecture also gives you an approach to logging: asynchronously publish a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.

    Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework, which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.

    Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco, which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple.

    Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort.

    Worth considering if you’re publishing long-lived resources through your API.
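    To make that passthrough layer a bit more concrete, here’s a hedged sketch of what it could look like as an ASP.NET Web API controller: it proxies to the Umbraco JSON endpoint and caches the response in memory for a minute. The CMS URL, cache duration and class names are assumptions for illustration, and the out-of-band access logging is left as a placeholder rather than being tied to a specific Redis client:

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Net.Http.Headers;
        using System.Runtime.Caching;
        using System.Text;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class ConfigController : ApiController
        {
            // One shared HttpClient and in-memory cache per API instance.
            private static readonly HttpClient CmsClient = new HttpClient
            {
                BaseAddress = new Uri("http://my-umbraco-site.azurewebsites.net/")
            };
            private static readonly MemoryCache Cache = MemoryCache.Default;

            // Assumes a route that maps an appName parameter, e.g. api/config/{appName}
            [HttpGet]
            public async Task<IHttpActionResult> Get(string appName)
            {
                var cacheKey = "config:" + appName;
                var json = Cache.Get(cacheKey) as string;

                if (json == null)
                {
                    // Cache miss: fetch the JSON rendering from the CMS
                    var request = new HttpRequestMessage(HttpMethod.Get, "config/" + appName);
                    request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                    var response = await CmsClient.SendAsync(request);
                    response.EnsureSuccessStatusCode();
                    json = await response.Content.ReadAsStringAsync();

                    // Cache for a minute, so the CMS sees roughly one request per
                    // minute per API instance regardless of the client load
                    Cache.Set(cacheKey, json, DateTimeOffset.UtcNow.AddMinutes(1));
                }

                // Out-of-band access logging would go here – e.g. a fire-and-forget
                // push of a log message onto a Redis list or a Service Bus queue

                return ResponseMessage(new HttpResponseMessage(HttpStatusCode.OK)
                {
                    Content = new StringContent(json, Encoding.UTF8, "application/json")
                });
            }
        }

    The point of the design is that this layer is the thing you scale out – it’s stateless apart from its cache, so adding instances is cheap, while the CMS and its database stay small.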

    Read the article
