Search Results

Search found 3906 results on 157 pages for 'guid generation'.


  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
    I'm working on a quiz engine for learning a foreign language. The engine shows the user four images simultaneously and then plays an audio file; the user has to match the audio to the correct image. Below is my MongoDB document structure. Each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images, and then play one audio file chosen at random from those four documents. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider:

    - I need to be able to report statistics at a user level (for example, total correct answers, total guesses, mean accuracy, etc.).
    - I need to be able to query images based on the user's learning progress (for example, select 4 documents where guess count is 10 and accuracy is <= 0.50).
    - The schema needs to be optimized for fast quiz generation.
    - The schema must not cause future scaling issues with respect to document size. Assume 1 million users who make an average of 1000 guesses each.

    Given all of this as background information, what would be the recommended schema? For example, would you store each guess in the Image document, or perhaps in a User document (not shown), or in a new collection created for logging guesses? Would you recommend logging the raw guess data, or would you pre-compute statistics by incrementing counters within the relevant document?

    Schema for the Image collection:

        _id           "505bcc7a45c978be24000005"
        date          2012-09-21 02:10:02 UTC
        imageFileName "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601"
        random        0.26997075392864645
        user          "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
        audioFiles [
            0 {
                audioFileName "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
                user          "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
                audioLanguage "English"
                date          2012-09-22 01:15:04 UTC
            }
            1 {
                audioFileName "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
                user          "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
                audioLanguage "Spanish"
                date          2012-09-22 01:17:04 UTC
            }
        ]
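
    One of the options the question itself raises (a separate collection for raw guess events, combined with pre-computed counters on the Image document) could be sketched with the MongoDB C# 2.x driver roughly as follows. This is only an illustrative sketch: the collection names and the stats.* field names are hypothetical and not part of the schema above.

        using System;
        using MongoDB.Bson;
        using MongoDB.Driver;

        public class GuessLogger
        {
            private readonly IMongoCollection<BsonDocument> _guesses;
            private readonly IMongoCollection<BsonDocument> _images;

            public GuessLogger(IMongoDatabase db)
            {
                // Hypothetical collection names, for illustration only.
                _guesses = db.GetCollection<BsonDocument>("guesses");
                _images = db.GetCollection<BsonDocument>("images");
            }

            public void LogGuess(string userId, ObjectId imageId, bool correct)
            {
                // Raw event log: one small document per guess, kept out of the Image
                // document so its size stays bounded as guesses accumulate.
                _guesses.InsertOne(new BsonDocument
                {
                    { "user", userId },
                    { "image", imageId },
                    { "correct", correct },
                    { "date", DateTime.UtcNow }
                });

                // Pre-computed counters on the Image document, so queries such as
                // "guess count is 10 and accuracy <= 0.50" do not need to aggregate raw events.
                var filter = Builders<BsonDocument>.Filter.Eq("_id", imageId);
                var update = Builders<BsonDocument>.Update
                    .Inc("stats.guessCount", 1)
                    .Inc("stats.correctCount", correct ? 1 : 0);
                _images.UpdateOne(filter, update);
            }
        }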

    Read the article

  • Oracle Forms: Walking the path to FMW Platform – webcast September 24th 2012

    - by JuergenKress
    The Next 5-Year Strategy: Preparing for the Next Generation of Applications

    Oracle Forms, a component of Oracle Fusion Middleware, is Oracle's long-established technology to design and build enterprise applications quickly and efficiently. Oracle remains committed to the development of this technology, and to its ongoing release as a component of the Oracle platform. This continuing commitment to Forms technology enables you to leverage your existing investment by easily upgrading and integrating existing Oracle Forms applications to take advantage of web technologies and service-oriented architectures (SOA). For more information please visit our Forms OTN page.

    Agenda:
    - Why update? – New business imperatives
    - What is the path?
    - Why walk it with Oracle?
    - Support Lifetime – upgrade to updates
    - Summary

    Audience: Enterprise & Solution Architects, R&D leaders, Project Managers and Project Leaders

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Duration: 1 hour.

    Forms: Walking the path to FMW, September 24th, 2012, 9am BST. Register Here!

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum Wiki

    Technorati Tags: Forms,PTS,future of forms,forms roadmap,forms soa,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • How can I disable the purple bootloader splash at boot?

    - by wim
    This question has been answered before, but neither of the methods in the accepted answer worked for me on 11.10. First I tried editing /etc/default/grub and then running sudo update-grub, but after that I still got a blank, plain, purple screen while the kernel is loading. The screen has no boot options, and it obscures the dmesg output that I want to see scrolling in the terminal. Next I tried removing the plymouth-theme-* packages, but that just broke my gnome-shell theme, and the purple screen still remains. I have also tried configuring it with the startupmanager package, but nothing seems to get rid of that darned purple splash. Here are the contents of my /etc/default/grub file:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT="0"
        GRUB_HIDDEN_TIMEOUT="0"
        GRUB_HIDDEN_TIMEOUT_QUIET="true"
        GRUB_TIMEOUT="0"
        GRUB_DISTRIBUTOR="`lsb_release -i -s 2> /dev/null || echo Debian`"
        #GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX=""

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        # GRUB_TERMINAL="console"

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE="640x480"

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID="true"

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

        GRUB_DISABLE_OS_PROBER="true"

    Read the article

  • Ubuntu 12.04 hangs on install using VMware Player on Windows 7

    - by Beauness_Round
    At my job, we're migrating to Ubuntu 12.04 from 10.10 (we skipped a generation), so everyone needs to make one or more new virtual machines for 12.04 on their Windows 7 workstation. The problem is, only some have had success while most encounter a hang during install. I was tasked with researching this problem, and here are my findings (somewhat inconclusive so far).

    I have tried uninstalling and reinstalling VMware Player, using versions 4.0.2, 4.0.4, and 5.0.0, but every install hangs at "retrieving file 55 of 129." Additionally, I have installed before updating the VMware Tools, as well as after (one can pre-download them), with every install hanging at "retrieving file 55 of 129." The file I was using for the above two attempts is called ubuntu-12.04-desktop-amd64.iso.

    I have burned a disc with ubuntu-12.04.1-desktop-amd64.iso and tried to install it, and later tried ubuntu-12.04.1-desktop-i386.iso (which is not what I want, but I wanted to see if it would work), but both times I get "PXE-E53: No boot filename received. PXE-M0F: Exiting Intel PXE ROM. Operating System not found."

    This is all in VMware, and the goal is to find a settings recipe that will allow an unbroken 64-bit Ubuntu 12.04 install from an .iso hosted on the Windows OS. Thank you in advance to anyone with ideas!

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new; just think of the once popular 'Who's Who'. What really changed is the price-value ratio: the curve is pushed down, closer to the axis. You pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategy. Boston: Harvard Business School Publishing.). In addition, the different channels of digital information distribution significantly change the value of information; I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to get a premium for the 'full' content. Offering no freemium content seems to take them out of business, because they are then apparently no longer visible in today's most relevant channels of information consumption. But the more freemium content is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gatekeepers', which, to me, somehow does not match the spirit of the WWW and Generation Y's perception of information consumption and exchange.

    Read the article

  • Wi-Fi/LAN not working after installing Ubuntu 13.10

    - by user183025
    I can't connect to my Wi-Fi after installing Ubuntu 13.10. I tried re-installing it three times (thinking that I did something wrong during the installation) but I still can't connect. It just gives me the message that I'm offline and that I can't connect to my Wi-Fi. I also tried connecting to the internet using a LAN cable, but with the same result. I tried Google, but it seems that there's no solution to this yet... Does anybody know how to solve this? Thanks!

    FYI:

        H/W path     Device   Class       Description
        ======================================================
                              system      RC530/RC730 (To be filled by O.E.M.)
        /0                    bus         RC530/RC730
        /0/0                  memory      64KiB BIOS
        /0/4                  processor   Intel(R) Core(TM) i5-2410M CPU @ 2.30GHz
        /0/4/5                memory      32KiB L1 cache
        /0/41                 memory      4GiB System Memory
        /0/41/0               memory      DIMM [empty]
        /0/41/1               memory      4GiB SODIMM DDR3 Synchronous 1333 MHz (0.8 ns)
        /0/100                bridge      2nd Generation Core Processor Family DRAM Controller

        02:00.0 Network controller: Intel Corporation Centrino Wireless-N 130 (rev 34)
                Subsystem: Intel Corporation Centrino Wireless-N 130 BGN
                Flags: bus master, fast devsel, latency 0, IRQ 44
                Memory at f7200000 (64-bit, non-prefetchable) [size=8K]
                Capabilities: <access denied>
                Kernel driver in use: iwlwifi
                Kernel modules: iwlwifi

        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
                Subsystem: Samsung Electronics Co Ltd Device c0c1
                Flags: bus master, fast devsel, latency 0, IRQ 41
                I/O ports at b000 [size=256]
                Memory at f2104000 (64-bit, prefetchable) [size=4K]
                Memory at f2100000 (64-bit, prefetchable) [size=16K]
                Capabilities: <access denied>
                Kernel driver in use: r8169
                Kernel modules: r8169

    Read the article

  • 'ACT On' OVCA for Cloud Providers Program Launch Webcast: June 12, 2014 - 9am UKT / 10am CET / 11am EET

    - by Cinzia Mascanzoni
    We invite you to join the OVCA for Cloud Providers 'ACT On' program launch at 11am BST / 12noon CET on June 12.

    · More and more customers realize the value of shifting to a Converged IT Infrastructure, which is why IDC expects this market to grow 40% annually for the next 2 years.
    · The Oracle Virtual Compute Appliance (OVCA) with attached ZFS storage is the perfect answer to this market trend. By providing rapid application and cloud deployment, OVCA allows customers to cut capital expenditures by up to 50% and deploy key applications up to 7x faster.
    · For partners, OVCA supports their journey to consolidation, virtualization and cloud, and allows them to sell higher-value services to their customers.

    The objective of this webcast is to share with you the OVCA value proposition, help you identify the best target partners, and provide you with the Enablement and Demand Generation content and resources. To register and for further details click here.

    Read the article

  • Dell Inspiron 7520 and Ubuntu 12.04 issues

    - by user91358
    I have a Dell Inspiron 7520 in the highest configuration:

    - 3rd Generation Intel® Core™ i7-3612QM processor (6M Cache, up to 3.1 GHz)
    - 15.6" Full High Definition (1080p) LED Display
    - 8GB Dual Channel DDR3 SDRAM at 1600MHz
    - 1TB 5400RPM SATA HDD + 32GB mSATA SSD w/Intel Smart Response
    - Blu-ray Disc (BD) Combo (Reads BD and Writes to DVD/CD)
    - AMD Radeon™ HD 7730M 2GB
    - 6.09 lbs

    I installed Ubuntu 12.04 a few days ago and I'm facing some issues:

    1) Sometimes the whole notebook freezes and I have to hold the power button for 5 seconds to shut it down. I think it is something to do with the VGA output and the connected external monitor. I have read somewhere that this is already a reported bug, but what I am not sure about is that it happens sporadically: sometimes it freezes right after I log in, sometimes it runs a few hours and then freezes. I am using the proprietary drivers, but I wasn't able to install those with updates.

    2) The fan is quite noisy even when the notebook is almost idle (max 10% CPU usage). Can you recommend some software which could do power management to lower the noise? I have tried the CPU frequency scaling indicator, but it seems to have no effect.

    3) When I want to log out, restart or shut down using the menu in the upper right corner, the top and left panels disappear, but programs are still running and they won't close to complete the log out or shut down the OS. When I use the CLI command, it works fine.

    Thanks for any help you can provide.

    Read the article

  • How to load chunks of 2D map segments when the player reaches a certain point?

    - by 2kan
    In my 2D platformer (made with Java and Slick2D), random maps are made by combining different segments together and displaying them one after the other. My problem is that I can't load too many segments or the game will run out of memory, so I want to load n segments at a time in chunks, then load the next chunk when the player comes near the end of one. I've attempted to do this for a couple of hours now, but I just can't get it to work at all.

    This is my chunk generation function, where chunkLoad is the number of segments to load and BLOCK_WIDTH is the number of blocks/tiles each segment is across. chunk1 and map are arrays of segments.

        Random r = new Random();
        for (int i = 0; i < chunkLoad; i++) {
            int id = r.nextInt(4) + 2;
            chunk1[i] = new BlockMap("res/window/map" + id + ".tmx", i * BLOCK_WIDTH);
        }
        map = chunk1;
        chunksLoaded++;

    The map is then drawn on the screen like this (tmap is a TiledMap object and each block/tile is 16 pixels wide):

        for (int i = 0; i < chunkLoad; i++) {
            map[i].tmap.render((i * BLOCK_WIDTH * 16) + cameraX, 0);
        }

    I can successfully load new chunks, but I can't display them in the correct position, nor the hitboxes. Any suggestions? Thanks.

    Read the article

  • Partner Spotlight: Compasso

    - by Joe Diemer
    EVERY MONDAY at 2 PM Eastern Time (USA)

    Join Compasso, one of Oracle's Enterprise Manager partners that is Specialized in Application Quality Management, for a series of presentations focused on Oracle end-to-end performance management. The objective is to provide a comprehensive overview of Oracle's Enterprise Manager 12c software. Attendees will learn how Oracle can provide a complete end-to-end management solution, not only for Oracle Databases but also for other aspects of Oracle technology, such as Middleware and Oracle applications. Key areas include:

    1. Next-Generation Management Framework, which covers:
       - Better performance and scalability
       - Modular, extensible architecture
       - Easier to manage and diagnose
       - Enhanced security
       - Web 2.0 UI
    2. Enhanced Application-to-Disk Management
       - End-to-end application performance management
       - Fusion Applications management
       - Exadata and Exalogic management
    3. Complete Lifecycle Management for Enterprise Private Cloud
       - Self-service provisioning
       - Policy-based resource and workload management
       - Chargeback

    There are two ways you can sign up:

    1. Call Nicole Wakefield of Compasso at +1 (971) 245-5042
    2. Email [email protected] with a subject line of "Monday Enterprise Manager Presentations"

    WebEx conference details will be provided with your confirmation.

    For more information about partner opportunities with Enterprise Manager, including becoming Specialized, click here. For more information about Compasso, click here.

    Read the article

  • Is it normal for GRUB to take some time (15+s) after I choose what to boot?

    - by zarnaik
    I had been planning to change the background of my bootloader for a while, and finally got around to it. That made the black screen I get for quite some time easier to interpret: it is still GRUB, because the background image stays while all of the text is gone. After that it simply shows the Lubuntu loading screen for, usually, not more than 3 seconds. I run Lubuntu 12.10. My question is: is this normal behaviour, or is something going wrong that causes GRUB to take longer? Here are the contents of my grub file, located in /etc/default/:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=5
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_BACKGROUND="/usr/share/lubuntu/wallpapers/1210-Windmill_by_Ferran_Reyes.png"

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    If you need any other information please tell me and I'll do my best to provide it. :)

    Read the article

  • Can't remove GPT data from MBR

    - by user2373121
    I am having difficulty getting the Ubuntu installer (and gparted) to recognize the partitions on my MBR-type disk. Other operating systems and disk tools read the disk structure and the files on it fine. I have used fixparts to write a new MBR, but the issue persists. I assume the issue stems from the protective MBR data still present on the disk, but I am at a loss as to how to remove it while preserving my NTFS data partition.

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        c:\Users\mike\Desktop\fixparts>fixparts 3:
        FixParts 0.8.8

        Loading MBR data from 3:

        Warning: 0xEE partition doesn't start on sector 1. This can cause problems
        in some OSes.

        MBR command (? for help):

    Running gdisk shows:

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        c:\Users\mike\Desktop\fixparts>gdisk 3:
        GPT fdisk (gdisk) version 0.8.7

        Partition table scan:
          MBR: MBR only
          BSD: not present
          APM: not present
          GPT: not present

        ***************************************************************
        Found invalid GPT and valid MBR; converting MBR to GPT format
        in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
        typing 'q' if you don't want to convert your MBR partitions
        to GPT format!
        ***************************************************************

        ************************************************************************
        Most versions of Windows cannot boot from a GPT disk, and most varieties
        prior to Vista cannot read GPT disks. Therefore, you should exit now
        unless you understand the implications of converting MBR to GPT or
        creating a new GPT disk layout!
        ************************************************************************

        Are you SURE you want to continue? (Y/N): y

        Command (? for help): p
        Disk 3:: 2930277168 sectors, 1.4 TiB
        Logical sector size: 512 bytes
        Disk identifier (GUID): BFE92CE8-F93D-4141-82B8-816AD06FB36E
        Partition table holds up to 128 entries
        First usable sector is 34, last usable sector is 2930277134
        Partitions will be aligned on 2048-sector boundaries
        Total free space is 163846893 sectors (78.1 GiB)

        Number  Start (sector)    End (sector)  Size       Code  Name
           1        163842048      2930272255   1.3 TiB    0700  Microsoft basic data

        Command (? for help): r

        Recovery/transformation command (? for help): o
        Disk size is 2930277168 sectors (1.4 TiB)
        MBR disk identifier: 0x00000000
        MBR partitions:

        Number  Boot  Start Sector   End Sector   Status      Code
           1                     1   2930277167   primary     0xEE

        Recovery/transformation command (? for help): q

    Read the article

  • Reverse-engineer SharePoint fields, content types and list instance — Part 3

    - by ybbest
    Reverse-engineer SharePoint fields, content types and list instance — Part 1
    Reverse-engineer SharePoint fields, content types and list instance — Part 2
    Reverse-engineer SharePoint fields, content types and list instance — Part 3

    In Part 1 and Part 2 of this series, I demonstrated how to reverse-engineer SharePoint fields and content types. In this post I will cover how to include lookup fields in the content type and create a list instance using these content types. Firstly, I will cover how to create the list instance and bind the custom content type to the custom list.

    1. Create a custom list using the List Instance item in Visual Studio and select Custom List.

    2. In the feature receiver, add the Department content type to the Department list and remove the Item content type.

    C#

        AddContentTypeToList(web, "Department", "Department");

        private void AddContentTypeToList(SPWeb web, string listName, string contentTypeName)
        {
            SPList list = web.Lists.TryGetList(listName);
            list.OnQuickLaunch = true;
            list.ContentTypesEnabled = true;
            list.Update();

            SPContentType contentType = web.ContentTypes[contentTypeName];
            list.ContentTypes.Add(contentType);
            list.ContentTypes["Item"].Delete();
            list.Update();
        }

    Next, I will cover how to create the lookup fields. The difference between creating a normal field and a lookup field is that you need to create lookup fields after the lists are created, because a lookup field references fields from the foreign list.

    1. In your solution, create a feature that deploys the list before deploying the lookup fields.

    2. Write the following code in the feature receiver to add the lookup columns to the content type.

    C#

        // add the lookup field
        SPFieldLookup departmentField = EnsureLookupField(currentWeb, "YBBESTDepartment", currentWeb.Lists["DepartmentList"].ID, "Title");

        // add it to the content type as an SPFieldLink
        SPContentType employeeContentType = currentWeb.ContentTypes["Employee"];
        employeeContentType.FieldLinks.Add(new SPFieldLink(departmentField));
        employeeContentType.Update(true);

        private static SPFieldLookup EnsureLookupField(SPWeb currentWeb, String sFieldName, Guid LookupListID, String sLookupField)
        {
            SPFieldLookup lookupField = null;
            try
            {
                lookupField = currentWeb.Fields[sFieldName] as SPFieldLookup;
            }
            catch (Exception e)
            {
                // the field does not exist yet
            }

            if (lookupField == null)
            {
                currentWeb.Fields.AddLookup(sFieldName, LookupListID, true);
                currentWeb.Update();

                lookupField = currentWeb.Fields[sFieldName] as SPFieldLookup;
                lookupField.LookupField = sLookupField;
                lookupField.Group = "YBBEST";
                lookupField.Required = true;
                lookupField.Update();
            }

            return lookupField;
        }

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • How to Add Control Panel to “My Computer” in Windows 7 or Vista

    - by The Geek
    Back in the Windows XP days, you could easily add Control Panel to My Computer with a simple checkbox in the folder view settings. Windows 7 and Vista don't make this quite as easy, but there's still a way to get it back. To make this tweak, we'll be doing a quick registry hack, but there's a downloadable version provided as well.

    Manual Registry Tweak to Add Control Panel

    Open up regedit.exe through the start menu search or run box, and then browse down to the following key:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

    Now that you're there, you'll need to right-click and create a new key. If you want to add the regular Control Panel view, with the categories, you'll need to use one GUID as the name of the key. If you want the icon view instead, you can use the other key. Here they are:

        Category View: {26EE0668-A00A-44D7-9371-BEB064C98683}
        Icon View:     {21EC2020-3AEA-1069-A2DD-08002B30309D}

    Now over in the Computer view, just hit the F5 key to refresh the panel, and you should see the new icon pop up in the list. When you click on the icon you'll be taken to Control Panel. If you didn't know how to change the view before, you can use the drop-down box on the right-hand side to switch between Category and icon view.

    Downloadable Registry Hack

    Rather than deal with manual registry editing, you can simply download the file, extract it, and then either double-click on AddCategoryControlPanel.reg to add the Category view icon, or AddIconControlPanel.reg to add the other icon. There's an uninstall script provided for each.

    Download the ControlPanelMyComputer Registry Hack from howtogeek.com
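
    For illustration, the same tweak could also be applied programmatically rather than by hand in regedit. Below is a minimal C# sketch, assuming it is run from an elevated (administrator) process; the class and variable names are placeholders, while the key path and GUIDs are the ones given above.

        using Microsoft.Win32;

        class AddControlPanelToMyComputer
        {
            static void Main()
            {
                // NameSpace key and Category-view GUID taken from the article above.
                // Swap in {21EC2020-3AEA-1069-A2DD-08002B30309D} for the icon view instead.
                const string nameSpaceKey =
                    @"SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace";
                const string categoryViewGuid = "{26EE0668-A00A-44D7-9371-BEB064C98683}";

                // Requires administrative rights, since the key lives under HKEY_LOCAL_MACHINE.
                using (RegistryKey nameSpace = Registry.LocalMachine.OpenSubKey(nameSpaceKey, writable: true))
                {
                    // Creating the empty subkey is all that is needed; refresh Computer (F5) afterwards.
                    nameSpace.CreateSubKey(categoryViewGuid).Dispose();
                }
            }
        }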

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all.

    I had already tested the code in a simple test harness application, so I was confident in it, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly - the functoid definition was wrong.

    After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element, it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element.

    The functoid I was having problems with was one where I had used the XPath functoid type. This had seemed a good fit, as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to work as expected.

        Category          | Description                                                                                                   | Toolbox Group
        Assert            | Internal Use Only                                                                                             | Advanced
        Conversion        | Converts characters to and from numerics and converts numbers from one base to another.                      | Conversion
        Count             | Internal Use Only                                                                                             | Advanced
        Cumulative        | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
        DatabaseExtract   | Internal Use Only                                                                                             | Database
        DatabaseLookup    | Internal Use Only                                                                                             | Database
        DateTime          | Adds date, time, date and time, or add days to a specified date, in output data.                             | Date/Time
        ExistenceLooping  | Internal Use Only                                                                                             | Advanced
        Index             | Internal Use Only                                                                                             | Advanced
        Iteration         | Internal Use Only                                                                                             | Advanced
        Keymatch          | Internal Use Only                                                                                             | Advanced
        Logical           | Controls conditional behavior of other functoids to determine whether particular output data is created.     | Logical
        Looping           | Internal Use Only                                                                                             | Advanced
        MassCopy          | Internal Use Only                                                                                             | Advanced
        Math              | Performs specific numeric calculations such as addition, multiplication, and division.                       | Mathematical
        NilValue          | Internal Use Only                                                                                             | Advanced
        Scientific        | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions.     | Scientific
        Scripter          | Internal Use Only                                                                                             | Advanced
        String            | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim. | String
        TableExtractor    | Internal Use Only                                                                                             | Advanced
        TableLooping      | Internal Use Only                                                                                             | Advanced
        Unknown           | Internal Use Only                                                                                             | Advanced
        ValueMapping      | Internal Use Only                                                                                             | Advanced
        XPath             | Internal Use Only                                                                                             | Advanced

    Links

    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
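
    To make the point concrete, the C# skeleton below shows roughly where the category is set when coding a custom functoid against Microsoft.BizTalk.BaseFunctoids. It is only an illustrative sketch, not one of the functoids from this post: the ID, resource names, class name and lookup logic are all placeholders.

        using System.Reflection;
        using Microsoft.BizTalk.BaseFunctoids;

        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid() : base()
            {
                // Custom functoid IDs must not clash with the built-in ones; values above 6000 are conventional.
                this.ID = 6001;

                // Placeholder resource assembly and resource identifiers for name, tooltip, description and bitmap.
                SetupResourceAssembly("MyCompany.Functoids.Resources", Assembly.GetExecutingAssembly());
                SetName("IDS_CONFIGLOOKUP_NAME");
                SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
                SetDescription("IDS_CONFIGLOOKUP_DESCRIPTION");
                SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

                // The category controls both the toolbox group and what the functoid can link to.
                // String is a safe choice for a functoid that returns element content;
                // XPath is marked "Internal Use Only" and cannot be used here.
                this.Category = FunctoidCategory.String;

                this.SetMinParams(1);
                this.SetMaxParams(1);
                this.AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;

                // Points the mapper at the method below that does the actual work at map execution time.
                SetExternalFunctionName(GetType().Assembly.FullName, GetType().FullName, "LookupValue");
            }

            // Invoked with the value linked to the functoid's input.
            public string LookupValue(string key)
            {
                // Placeholder logic; the real functoid read values from a config file.
                return "value-for-" + key;
            }
        }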

    Read the article

  • Accessing Repositories from Domain

    - by Paul T Davies
    Say we have a task logging system. When a task is logged, the user specifies a category and the task defaults to a status of 'Outstanding'. Assume in this instance that Category and Status have to be implemented as entities. Normally I would do this:

    Application layer:

        public class TaskService
        {
            //...
            public void Add(Guid categoryId, string description)
            {
                var category = _categoryRepository.GetById(categoryId);
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                var task = Task.Create(category, status, description);
                _taskRepository.Save(task);
            }
        }

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, Status status, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = status,
                    Description = description
                };
            }
        }

    I do it like this because I am consistently told that entities should not access the repositories, but it would make much more sense to me if I did this:

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                return new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };
            }
        }

    The status repository is dependency-injected anyway, so there is no real dependency, and this feels more to me like it is the domain making the decision that a task defaults to Outstanding. The previous version feels like it is the application layer making that decision. And why are repository contracts often in the domain if this should not be a possibility?

    Here is a more extreme example, where the domain decides urgency:

    Entity:

        public class Task
        {
            //...
            public static Task Create(Category category, string description)
            {
                var task = new Task
                {
                    Category = category,
                    Status = _statusRepository.GetById(Constants.Status.OutstandingId),
                    Description = description
                };

                if (someCondition)
                {
                    if (someValue > anotherValue)
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.UrgentId);
                    }
                    else
                    {
                        task.Urgency = _urgencyRepository.GetById(Constants.Urgency.SemiUrgentId);
                    }
                }
                else
                {
                    task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
                }

                return task;
            }
        }

    There is no way you would want to pass in all possible versions of Urgency, and no way you would want to calculate this business logic in the application layer, so surely this would be the most appropriate way? So is this a valid reason to access repositories from the domain?
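
    For contrast, one middle-ground pattern that is often suggested for exactly this trade-off is sketched below: a domain-level factory holds the injected repositories, so the default-status rule stays in the domain while the Task entity itself never touches a repository. This sketch is not part of the question; the TaskFactory name and the IStatusRepository and IUrgencyRepository interfaces are hypothetical, reusing the question's other types.

        // Lives in the domain layer alongside Task, so the "tasks default to Outstanding"
        // rule is a domain decision, but the entity stays free of repository calls.
        public class TaskFactory
        {
            private readonly IStatusRepository _statusRepository;
            private readonly IUrgencyRepository _urgencyRepository;

            public TaskFactory(IStatusRepository statusRepository, IUrgencyRepository urgencyRepository)
            {
                _statusRepository = statusRepository;
                _urgencyRepository = urgencyRepository;
            }

            public Task Create(Category category, string description)
            {
                var status = _statusRepository.GetById(Constants.Status.OutstandingId);
                var task = Task.Create(category, status, description);

                // Default urgency; the branching rules from the extreme example could live here too.
                task.Urgency = _urgencyRepository.GetById(Constants.Urgency.NotId);
                return task;
            }
        }

    The application-layer TaskService would then call _taskFactory.Create(category, description) instead of resolving Status and Urgency itself.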

    Read the article

  • How to create Custom ListForm WebPart

    - by DipeshBhanani
    Mostly all who works extensively on SharePoint (including meJ) don’t like to use out-of-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as interface. Actually these OOB list forms bind hands of developers for the customization. It gives headache to developers to add just one post back event, for a dropdown field and to populate other fields in NewForm.aspx or EditForm.aspx. On top of that clients always ask such stuff. So here I am going to give you guys a flight for SharePoint Customization world. In this blog, I will explain, how to create CustomListForm WebPart. In my next blogs, I am going to explain easy deployment of List Forms through features and last, guidance on using SharePoint web controls. 1.       First thing, create a class library project through Visual Studio and inherit the class with WebPart class.     public class CustomListForm : WebPart   2.       Declare the public variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.         #region "Variable Declaration"           Table spTableCntl;         FormToolBar formToolBar;         Literal ltAlertMessage;         Guid SiteId;         Guid ListId;         int ItemId;         string ListName;           #endregion           #region "Properties"           SPControlMode _ControlMode = SPControlMode.New;         [Personalizable(PersonalizationScope.Shared),          WebBrowsable(true),          WebDisplayName("Control Mode"),          WebDescription("Set Control Mode"),          DefaultValue(""),          Category("Miscellaneous")]         public SPControlMode ControlMode         {             get { return _ControlMode; }             set { _ControlMode = value; }         }           #endregion     The property “ControlMode” is used to identify the mode of the List Form. The property is of type SPControlMode which is an enum type with values (Display, Edit, New and Invalid). When we will add this WebPart to DispForm.aspx, EditForm.aspx and NewForm.aspx, we will set the WebPart property “ControlMode” to Display, Edit and New respectively.     3.       Now, we need to override the CreateChildControl method and write code to manually add SharePoint Web Controls related to each list fields as well as ToolBar controls.         protected override void CreateChildControls()         {             base.CreateChildControls();               try             {                 SiteId = SPContext.Current.Site.ID;                 ListId = SPContext.Current.ListId;                 ListName = SPContext.Current.List.Title;                   if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)                     ItemId = SPContext.Current.ItemId;                   SPSecurity.RunWithElevatedPrivileges(delegate()                 {                     using (SPSite site = new SPSite(SiteId))                     {                         //creating a new SPSite with credentials of System Account                         using (SPWeb web = site.OpenWeb())                         {                               //<Custom Code for creating form controls>                         }                     }                 });             }             catch (Exception ex)             {                 ShowError(ex, "CreateChildControls");             }         }   Here we are assuming that we are developing this WebPart to plug into List Forms. Hence we will get the List Id and List Name from the current context. 
We can have Item Id only in case of Display and Edit Mode. We are putting our code into “RunWithElevatedPrivileges” to elevate privileges to System Account. Now, let’s get deep down into the main code and expand “//<Custom Code for creating form controls>”. Before initiating any SharePoint control, we need to set context of SharePoint web controls explicitly so that it will be instantiated with elevated System Account user. Following line does the job.     //To create SharePoint controls with new web object and System Account credentials     SPControl.SetContextWeb(Context, web);   First thing, let’s add main table as container for all controls.     //Table to render webpart     Table spTableMain = new Table();     spTableMain.CellPadding = 0;     spTableMain.CellSpacing = 0;     spTableMain.Width = new Unit(100, UnitType.Percentage);     this.Controls.Add(spTableMain);   Now we need to add Top toolbar with Save and Cancel button at top as you see in the below screen shot.       // Add Row and Cell for Top ToolBar     TableRow spRowTopToolBar = new TableRow();     spTableMain.Rows.Add(spRowTopToolBar);     TableCell spCellTopToolBar = new TableCell();     spRowTopToolBar.Cells.Add(spCellTopToolBar);     spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);         ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");     toolBarTop.CssClass = "ms-formtoolbar";     toolBarTop.ID = "toolBarTbltop";     toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";       if (_ControlMode != SPControlMode.Display)     {         SaveButton btnSave = new SaveButton();         btnSave.ControlMode = _ControlMode;         btnSave.ListId = ListId;           if (_ControlMode == SPControlMode.New)             btnSave.RenderContext = SPContext.GetContext(web);         else         {             btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);             btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);             btnSave.ItemId = ItemId;         }         toolBarTop.RightButtons.Controls.Add(btnSave);     }       GoBackButton goBackButtonTop = new GoBackButton();     toolBarTop.RightButtons.Controls.Add(goBackButtonTop);     goBackButtonTop.ControlMode = SPControlMode.Display;       spCellTopToolBar.Controls.Add(toolBarTop);   Here we have use “SaveButton” and “GoBackButton” which are internal SharePoint web controls for save and cancel functionality. I have set some of the properties of Save Button with if-else condition because we will not have Item Id in case of New Mode. Item Id property is used to identify which SharePoint List Item need to be saved. Now, add Form Toolbar to the page which contains “Attach File”, “Delete Item” etc buttons.       
// Add Row and Cell for FormToolBar     TableRow spRowFormToolBar = new TableRow();     spTableMain.Rows.Add(spRowFormToolBar);     TableCell spCellFormToolBar = new TableCell();     spRowFormToolBar.Cells.Add(spCellFormToolBar);     spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);       FormToolBar formToolBar = new FormToolBar();     formToolBar.ID = "formToolBar";     formToolBar.ListId = ListId;     if (_ControlMode == SPControlMode.New)         formToolBar.RenderContext = SPContext.GetContext(web);     else     {         formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);         formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);         formToolBar.ItemId = ItemId;     }     formToolBar.ControlMode = _ControlMode;     formToolBar.EnableViewState = true;       spCellFormToolBar.Controls.Add(formToolBar);     The ControlMode property will take care of which button to be displayed on the toolbar. E.g. “Attach files”, “Delete Item” in new/edit forms and “New Item”, “Edit Item”, “Delete Item”, “Manage Permissions” etc in display forms. Now add main section which contains form field controls.     //Create Form Field controls and add them in Table "spCellCntl"     CreateFieldControls(web);     //Add public variable "spCellCntl" containing all form controls to the page     spRowCntl.Cells.Add(spCellCntl);     spCellCntl.Width = new Unit(100, UnitType.Percentage);     spCellCntl.Controls.Add(spTableCntl);       //Add a Blank Row with height of 5px to render space between ToolBar table and Control table     TableRow spRowLine1 = new TableRow();     spTableMain.Rows.Add(spRowLine1);     TableCell spCellLine1 = new TableCell();     spRowLine1.Cells.Add(spCellLine1);     spCellLine1.Height = new Unit(5, UnitType.Pixel);     spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));       //Add Row and Cell for Form Controls Section     TableRow spRowCntl = new TableRow();     spTableMain.Rows.Add(spRowCntl);     TableCell spCellCntl = new TableCell();       //Create Form Field controls and add them in Table "spCellCntl"     CreateFieldControls(web);     //Add public variable "spCellCntl" containing all form controls to the page     spRowCntl.Cells.Add(spCellCntl);     spCellCntl.Width = new Unit(100, UnitType.Percentage);     spCellCntl.Controls.Add(spTableCntl);       TableRow spRowLine2 = new TableRow();     spTableMain.Rows.Add(spRowLine2);     TableCell spCellLine2 = new TableCell();     spRowLine2.Cells.Add(spCellLine2);     spCellLine2.CssClass = "ms-formline";     spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));       // Add Blank row with height of 5 pixel     TableRow spRowLine3 = new TableRow();     spTableMain.Rows.Add(spRowLine3);     TableCell spCellLine3 = new TableCell();     spRowLine3.Cells.Add(spCellLine3);     spCellLine3.Height = new Unit(5, UnitType.Pixel);     spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));   You can add bottom toolbar also to get same look and feel as OOB forms. I am not adding here as the blog will be much lengthy. At last, you need to write following lines to allow unsafe updates for Save and Delete button.     
// Allow unsafe update on web for save button and delete button     if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null         && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")         || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))     {         SPContext.Current.Web.AllowUnsafeUpdates = true;     }   So that’s all. We have finished writing Custom Code for adding field control. But something most important is skipped. In above code, I have called function “CreateFieldControls(web);” to add SharePoint field controls to the page. Let’s see the implementation of the function:     private void CreateFieldControls(SPWeb pWeb)     {         SPList listMain = pWeb.Lists[ListId];         SPFieldCollection fields = listMain.Fields;           //Main Table to render all fields         spTableCntl = new Table();         spTableCntl.BorderWidth = new Unit(0);         spTableCntl.CellPadding = 0;         spTableCntl.CellSpacing = 0;         spTableCntl.Width = new Unit(100, UnitType.Percentage);         spTableCntl.CssClass = "ms-formtable";           SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);           foreach (SPField listField in fields)         {             string fieldDisplayName = listField.Title;             string fieldInternalName = listField.InternalName;               //Skip if the field is system field or hidden             if (listField.Hidden || listField.ShowInVersionHistory == false)                 continue;               //Skip if the control mode is display and field is read-only             if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)                 continue;               FieldLabel fieldLabel = new FieldLabel();             fieldLabel.FieldName = listField.InternalName;             fieldLabel.ListId = ListId;               BaseFieldControl fieldControl = listField.FieldRenderingControl;             fieldControl.ListId = ListId;             //Assign unique id using Field Internal Name             fieldControl.ID = string.Format("Field_{0}", fieldInternalName);             fieldControl.EnableViewState = true;               //Assign control mode             fieldLabel.ControlMode = _ControlMode;             fieldControl.ControlMode = _ControlMode;             switch (_ControlMode)             {                 case SPControlMode.New:                     fieldLabel.RenderContext = SPContext.GetContext(pWeb);                     fieldControl.RenderContext = SPContext.GetContext(pWeb);                     break;                 case SPControlMode.Edit:                 case SPControlMode.Display:                     fieldLabel.RenderContext = controlContext;                     fieldLabel.ItemContext = controlContext;                     fieldLabel.ItemId = ItemId;                       fieldControl.RenderContext = controlContext;                     fieldControl.ItemContext = controlContext;                     fieldControl.ItemId = ItemId;                     break;             }               //Add row to display a field row             TableRow spCntlRow = new TableRow();             spTableCntl.Rows.Add(spCntlRow);               //Add the cells for containing field lable and control             TableCell spCellLabel = new TableCell();             spCellLabel.Width = new Unit(30, UnitType.Percentage);             spCellLabel.CssClass = "ms-formlabel";             spCntlRow.Cells.Add(spCellLabel);             TableCell spCellControl = new TableCell();           
  spCellControl.Width = new Unit(70, UnitType.Percentage);             spCellControl.CssClass = "ms-formbody";             spCntlRow.Cells.Add(spCellControl);               //Add the control to the table cells             spCellLabel.Controls.Add(fieldLabel);             spCellControl.Controls.Add(fieldControl);               //Add description if there is any in case of New and Edit Mode             if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)             {                 FieldDescription fieldDesc = new FieldDescription();                 fieldDesc.FieldName = fieldInternalName;                 fieldDesc.ListId = ListId;                 spCellControl.Controls.Add(fieldDesc);             }               //Disable Name(Title) in Edit Mode             if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")             {                 TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");                 txtTitlefield.Enabled = false;             }         }         fields = null;     }   First of all, I have declared List object and got list fields in field collection object called “fields”. Then I have added a table for the container of all controls and assign CSS class as "ms-formtable" so that it gives consistent look and feel of SharePoint. Now it’s time to navigate through all fields and add them if required. Here we don’t need to add hidden or system fields. We also don’t want to display read-only fields in new and edit forms. Following lines does this job.             //Skip if the field is system field or hidden             if (listField.Hidden || listField.ShowInVersionHistory == false)                 continue;               //Skip if the control mode is display and field is read-only             if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)                 continue;   Let’s move to the next line of code.             FieldLabel fieldLabel = new FieldLabel();             fieldLabel.FieldName = listField.InternalName;             fieldLabel.ListId = ListId;               BaseFieldControl fieldControl = listField.FieldRenderingControl;             fieldControl.ListId = ListId;             //Assign unique id using Field Internal Name             fieldControl.ID = string.Format("Field_{0}", fieldInternalName);             fieldControl.EnableViewState = true;               //Assign control mode             fieldLabel.ControlMode = _ControlMode;             fieldControl.ControlMode = _ControlMode;   We have used “FieldLabel” control for displaying field title. The advantage of using Field Label is, SharePoint automatically adds red star besides field label to identify it as mandatory field if there is any. Here is most important part to understand. The “BaseFieldControl”. It will render the respective web controls according to type of the field. For example, if it’s single line of text, then Textbox, if it’s look up then it renders dropdown. Additionally, the “ControlMode” property tells compiler that which mode (display/edit/new) controls need to be rendered with. In display mode, it will render label with field value. In edit mode, it will render respective control with item value and in new mode it will render respective control with empty value. Please note that, it’s not always the case when dropdown field will be rendered for Lookup field or Choice field. You need to understand which controls are rendered for which list fields. 
I am planning to write a separate blog which I hope to publish it very soon. Moreover, we also need to assign list field specific properties like List Id, Field Name etc to identify which SharePoint List field is attached with the control.             switch (_ControlMode)             {                 case SPControlMode.New:                     fieldLabel.RenderContext = SPContext.GetContext(pWeb);                     fieldControl.RenderContext = SPContext.GetContext(pWeb);                     break;                 case SPControlMode.Edit:                 case SPControlMode.Display:                     fieldLabel.RenderContext = controlContext;                     fieldLabel.ItemContext = controlContext;                     fieldLabel.ItemId = ItemId;                       fieldControl.RenderContext = controlContext;                     fieldControl.ItemContext = controlContext;                     fieldControl.ItemId = ItemId;                     break;             }   Here, I have separate code for new mode and Edit/Display mode because we will not have Item Id to assign in New Mode. We also need to set CSS class for cell containing Label and Controls so that those controls get rendered with SharePoint theme.             spCellLabel.CssClass = "ms-formlabel";             spCellControl.CssClass = "ms-formbody";   “FieldDescription” control is used to add field description if there is any.    Now it’s time to add some more customization,               //Disable Name(Title) in Edit Mode             if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")             {                 TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");                 txtTitlefield.Enabled = false;             }   The above code will disable the title field in edit mode. You can add more code here to achieve more customization according to your requirement. Some of the examples are as follow:             //Adding post back event on UserField to auto populate some other dependent field             //in new mode and disable it in edit mode             if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")             {                 if (fieldControl.Controls[0].FindControl("UserField") != null)                 {                     PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");                     if (_ControlMode == SPControlMode.New)                         pplEditor.AutoPostBack = true;                     else                         pplEditor.Enabled = false;                 }             }               //Add JavaScript Event on Dropdown field. Don't forget to add the JavaScript function on the page.             if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")             {                 DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];                 ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));             }    Following are the screenshots of my Custom ListForm WebPart. Let’s play a game, check out your OOB List forms of SharePoint, compare with these screens and find out differences.   DispForm.aspx:   EditForm.aspx:   NewForm.aspx:   Enjoy the SharePoint Soup!!! ­­­­­­­­­­­­­­­­­­­­

    Read the article

  • Ubuntu Install 11.10 doesn't recognize Windows 7 installation with new HDD

    - by arlendo
    Replaced my crashed HDD with a Seagate 2TB SATA drive (bought from a company who pulled it from a working computer, OS unknown) and did a fresh install of Windows 7. Windows shows a 100MB boot partition (bootable NTFS) and a 200GB Windows partition (NTFS); the rest is unallocated. Win7 Disk Management says the partitioning type is Master Boot Record. Win7 boots and runs fine. The Ubuntu 11.10 install proceeds to the Allocate Drive Space screen and should say "This computer currently has Windows 7 on it. What would you like to do?" Instead, it says something like "Install doesn't detect any existing OS on this computer." When I click on Something else, the partition table shows only the unallocated space of 1.8TB. Ubuntu Disk Utility says Partitioning: Master Boot Record, but GParted Live says Partition Table: gpt. It was my original intention to have the Windows boot partition and application partition, then install Ubuntu 11.10 using boot, root, swap, and home partitions, and maybe another partition just for data (mostly photos). Currently, I would be happy if I could just get Ubuntu installed along with Win7. I am aware of the MBR limits of 3 Primary partitions and 1 Extended partition. I suspect that my new HDD is partitioned for GPT and that is why Ubuntu can't see the Win7 installation. Am I on the right track? I was going to use Windows Disk Management to convert GPT to MBR but I only have the one drive on my AMD-64 mini-computer and it says I have to empty the drive of all partitions before I can access the Convert command. And I can't find any bootable software that would allow me to do that conversion. Here is the result of sudo fdisk -l:

    ubuntu@ubuntu:~$ sudo fdisk -l
    WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    224 heads, 19 sectors/track, 918004 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xd4a68c18

       Device Boot    Start        End     Blocks  Id  System
    /dev/sda1   *      2048     206847     102400   7  HPFS/NTFS/exFAT
    /dev/sda2        206848  419637247  209715200   7  HPFS/NTFS/exFAT
    ubuntu@ubuntu:~$

    Keep in mind that I'm a definite newbie to screwing around with the inner workings of Ubuntu. I previously had Ubuntu 10.04 running with Vista and I don't remember even having to partition anything that wasn't automatic in the install. Thanks for taking a look here. My Win7 is running fine but I miss my Ubuntu.

    Read the article

  • Unauthorized response from Server with API upload

    - by Ethan Shafer
    I'm writing a library in C# to help me develop a Windows application. The library uses the Ubuntu One API. I am able to authenticate and can even make requests to get the Quota (access to Account Admin API) and Volumes (so I know I have access to the Files API at least) Here's what I have as my Upload code: public static void UploadFile(string filename, string filepath) { FileStream file = File.OpenRead(filepath); byte[] bytes = new byte[file.Length]; file.Read(bytes, 0, (int)file.Length); RestClient client = UbuntuOneClients.FilesClient(); RestRequest request = UbuntuOneRequests.BaseRequest(Method.PUT); request.Resource = "/content/~/Ubuntu One/" + filename; request.AddHeader("Content-Length", bytes.Length.ToString()); request.AddParameter("body", bytes, ParameterType.RequestBody); client.ExecuteAsync(request, webResponse => UploadComplete(webResponse)); } Every time I send the request I get an "Unauthorized" response from the server. For now the "/content/~/Ubuntu One/" is hardcoded, but I checked and it is the location of my root volume. Is there anything that I'm missing? UbuntuOneClients.FilesClient() starts the url with "https://files.one.ubuntu.com" UbuntuOneRequests.BaseRequest(Method.{}) is the same requests that I use to send my Quota and Volumes requests, basically just provides all of the parameters needed to authenticate. EDIT:: Here's the BaseRequest() method: public static RestRequest BaseRequest(Method method) { RestRequest request = new RestRequest(method); request.OnBeforeDeserialization = resp => { resp.ContentType = "application/json"; }; request.AddParameter("realm", ""); request.AddParameter("oauth_version", "1.0"); request.AddParameter("oauth_nonce", Guid.NewGuid().ToString()); request.AddParameter("oauth_timestamp", DateTime.Now.ToString()); request.AddParameter("oauth_consumer_key", UbuntuOneRefreshInfo.UbuntuOneInfo.ConsumerKey); request.AddParameter("oauth_token", UbuntuOneRefreshInfo.UbuntuOneInfo.Token); request.AddParameter("oauth_signature_method", "PLAINTEXT"); request.AddParameter("oauth_signature", UbuntuOneRefreshInfo.UbuntuOneInfo.Signature); //request.AddParameter("method", method.ToString()); return request; } and the FilesClient() method: public static RestClient FilesClient() { return (new RestClient("https://files.one.ubuntu.com")); }
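    One detail worth checking here (an observation added for this write-up, not something stated in the original question): OAuth 1.0 defines oauth_timestamp as the number of whole seconds since the Unix epoch, so a locale-formatted DateTime.Now.ToString() will normally fail signature validation, and for the PLAINTEXT method the signature is expected to be the consumer secret and token secret joined by an ampersand. A hedged sketch of the adjusted parameters, leaving the rest of BaseRequest() unchanged (consumerSecret and tokenSecret are assumed local names):

        // OAuth 1.0 timestamps are whole seconds since 1970-01-01 UTC.
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        string oauthTimestamp = ((long)(DateTime.UtcNow - epoch).TotalSeconds).ToString();
        request.AddParameter("oauth_timestamp", oauthTimestamp);

        // For PLAINTEXT the signature is "<consumer secret>&<token secret>".
        string oauthSignature = consumerSecret + "&" + tokenSecret;
        request.AddParameter("oauth_signature", oauthSignature);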

    Read the article

  • WAIT-VHUB? What's Going On?

    - by Neeraj Gupta
    I know many of you have been working on Oracle's Exalogic and other Engineered Systems. With partitions enabled now, things have gone multi-dimensional. But it's fun, isn't it? So you have an EoIB configuration together with InfiniBand partitions, and the VNICs are not coming up, staying in the WAIT-VHUB state? Chances are that you have forgotten to add the InfiniBand Gateway switches' Bridge-X port GUIDs to your partition. These must be added as FULL members for EoIB to work properly. VHUB means a virtual hub in EoIB. Bridge-X is the access point for hosts to work over EoIB, which is why it must be a full member of the partition.

    Step 1: Find out the port GUIDs of your Bridge-X devices in the IB Gateway switch.

    # showgwports

    INTERNAL PORTS:
    ---------------
    Device    Port  Portname    PeerPort  PortGUID            LID     IBState  GWState
    -----------------------------------------------------------------------------------
    Bridge-0   1    Bridge-0-1     4      0x0010e00c1b60c001  0x0002  Active   Up
    Bridge-0   2    Bridge-0-2     3      0x0010e00c1b60c002  0x0006  Active   Up
    Bridge-1   1    Bridge-1-1     2      0x0010e00c1b60c041  0x0026  Active   Up
    Bridge-1   2    Bridge-1-2     1      0x0010e00c1b60c042  0x002a  Active   Up

    Step 2: Add these port GUIDs to the IB partition associated with EoIB. Log in to the master SM switch for this task.

    # smpartition start
    # smpartition add -pkey <PKey> -port <port GUID> -m full
    # smpartition commit

    Enjoy!

    Read the article

  • How do you automate a SharePoint 2010 deployment?

    - by Enrique Lima
    In the last couple of months SharePoint traffic (consulting, training and speaking) has picked up.  And with that also the requests for deployments.  There are good, great, bad and really bad things around this. But that is for another topic.  However part of the good and great has been the fact of organizations wanting to do a proof of concept deployment (even when WSS or MOSS has been deployed). We can go through a session (Microsoft has the SDPS concept, SharePoint Deployment Planning Services) of discovering what the customer wants to achieve from their investment in the platform and then also proceed to model the solution that would fit their needs.  But it should not stop there.  The next step should be a POC (as many have requested) to test out. Now, on to the meat of this post.  How do I deploy?  While it is a good process to watch and see all of it take place, not many have the time to sit through that.  Even more so, when that has been part of the description of deploying the platform in the sessions mentioned above. I will, though, break it into a deployment for development purposes and a deployment of a farm. Two tools (or scripts) for those two different types of deployment. First, let me address the development environment.  Around the last week in October, Chris Johnson (SharePoint Product Team) announced a SharePoint Easy Setup for Developers.  The kit itself will assist you in installing SharePoint Server (in standalone mode), the tools that go around Visual Studio, Expression Studio and the Office 2010 tools. Here is the link to Chris’ post: http://blogs.msdn.com/b/cjohnson/archive/2010/10/28/announcing-sharepoint-easy-setup-for-developers.aspx The other scenario is the use of a script in assisting you through the deployment of a farm. Now, this is not to override planning.  It should highlight the need for planning even more.  How?  Having your service accounts planned, the structure of the sites and the scale of your deployment.  Enter AutoSPInstaller.  This is a CodePlex project, and the intent behind this is not only to automate the installation but to give some meaning and get some sense out of what goes on during a SharePoint deployment. How?  Take for example the creation of the databases, when we do the initial OOB deployment by using the wizard, more times than not, we leave the names as they are.  How is that a “bad thing”?  Let’s make it a better practice to rename those Databases, and have them take on a name that is not “GUID-ized”. Having a better naming convention will not hurt, on the other hand will allow for consistency. Here is the link to AutoSPInstaller’s site on CodePlex: http://autospinstaller.codeplex.com/

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off the shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to only migrate some of the widgets from the legacy system, based on date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human readable id that is searchable, and a brief description (may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here so please forgive me. The current solution - which is not in production yet - is something like the following: Both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire. 15 search results typically run about 60k of unintelligible SOAP-wrapped xml. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the following alternative: However I am being told that 'adding another database' will create more maintenance down the road. However, I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code. I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system. I want something more than my gut. Specifically I'd like to know if having more, specialized physical databases is more or less maintainable than having less larger physical databases.
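    To make the comparison more concrete, the second option amounts to something like the small projection sketched below: a handler that subscribes to the widget events both systems already publish and keeps a flat search table up to date. The event shape, the store interface and the field names are assumptions for illustration, not details from the actual systems.

        public class WidgetSearchEntry
        {
            public Guid WidgetId { get; set; }          // the GUID shown in the results
            public string HumanReadableId { get; set; } // the searchable id
            public string Description { get; set; }
        }

        public interface ISearchStore
        {
            void Upsert(WidgetSearchEntry entry);
        }

        // Invoked whenever either system publishes its widget event.
        public class WidgetPublishedHandler
        {
            private readonly ISearchStore _store;

            public WidgetPublishedHandler(ISearchStore store)
            {
                _store = store;
            }

            public void Handle(Guid widgetId, string humanReadableId, string description)
            {
                _store.Upsert(new WidgetSearchEntry
                {
                    WidgetId = widgetId,
                    HumanReadableId = humanReadableId,
                    Description = description
                });
            }
        }

    The maintainability question then becomes whether keeping this projection in sync is cheaper over time than maintaining the result-merging code in the controller.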

    Read the article

  • Best Practices for serializing/persisting String Object Dictionary entities

    - by Mark Heath
    I'm noticing a trend towards using a dictionary of string to object (or sometimes string to string), instead of strongly typed objects. For example, the new Katana project makes heavy use of IDictionary<string,object>. This approach avoids the need to continually update your entity classes/DTOs and the database tables that persist them with new properties. It also avoids the need to create new derived entity types to support new types of entity, since the Dictionary is flexible enough to store any arbitrary properties. Here's a contrived example: class StorageDevice { public int Id { get; set; } public string Name { get; set; } } class NetworkShare : StorageDevice { public string Path { get; set; } public string LoginName { get; set; } public string Password { get; set; } } class CloudStorage : StorageDevice { public string ServerUri { get; set; } public string ContainerName { get; set; } public int PortNumber { get; set; } public Guid ApiKey { get; set; } } versus: class StorageDevice { public IDictionary<string, object> Properties { get; set; } } Basically I'm on the lookout for any talks, books or articles on this approach, so I can pick up on any best practices / difficulties to avoid. Here are my main questions:
    1. Does this approach have a name? (The only thing I've heard used so far is "self-describing objects".)
    2. What are the best practices for persisting these dictionaries into a relational database? Especially the challenges of deserializing them successfully with strongly typed languages like C#.
    3. Does it change anything if some of the objects in the dictionary are themselves lists of strongly typed entities?
    4. Should a second dictionary be used if you want to temporarily store objects that are not to be persisted/serialized across a network, or should you use some kind of namespacing on the keys to indicate this?
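    On the persistence question, one widely used option (a hedged sketch, not a recommendation from the post) is to serialize the property bag to JSON into a single text column and deserialize it back into a dictionary on read; Json.NET is assumed here purely for illustration.

        using System.Collections.Generic;
        using Newtonsoft.Json;

        class StorageDeviceRecord
        {
            public int Id { get; set; }
            public string PropertiesJson { get; set; } // single column holding the serialized bag
        }

        static class PropertyBagSerializer
        {
            public static string Serialize(IDictionary<string, object> properties)
            {
                return JsonConvert.SerializeObject(properties);
            }

            public static IDictionary<string, object> Deserialize(string json)
            {
                return JsonConvert.DeserializeObject<Dictionary<string, object>>(json);
            }
        }

    The trade-off is that values come back as basic JSON types (strings, numbers, nested JObjects) rather than the original CLR types, so callers typically need typed accessors or a type-name convention layered on top.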

    Read the article

  • Dual booting Windows 8/Ubuntu 12.04.2: GRUB doesn't appear and the machine never boots into Ubuntu

    - by black sensei
    I got a new Acer Predator AG3620-UR308 which came with Windows 8, and I wanted to run Ubuntu 12.04.2 on it as a dual boot. To be honest, I've been doing dual booting for a while now, so I did the right thing. The box came with a 2TB HDD, so I made 4 partitions, with a raw partition just after the Windows installation partition. I always do a manual installation, so even if Ubuntu didn't detect Windows 8 it was OK for me. So I created the swap area and finished the installation, etc. GRUB was installed on the only drive there, which is sda. After reboot, GRUB doesn't even come up, so it always boots into Windows 8. I repeated the installation process twice with the same result, which is weird because this method has always worked for me so far. Even the laptop I am using to write this post is dual booting Windows 7 / Mint Nadia installed the same way. Is there anything new in Windows 8 that I didn't make provision for? Before starting the installation, all I read about was that Windows 8 should be installed first and Ubuntu after. I went ahead and disabled Secure Boot from the BIOS and enabled CSM (don't even know what it means) according to the Acer custhelp site. I booted from USB and did fdisk -l; below is the result:

    ubuntu@ubuntu:~$ sudo fdisk -l
    WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x8c361cb5

       Device Boot    Start         End      Blocks  Id  System
    /dev/sda1              1  3907029167  1953514583+  ee  GPT
    Partition 1 does not start on physical sector boundary.

    Disk /dev/sdb: 8178 MB, 8178892800 bytes
    255 heads, 63 sectors/track, 994 cylinders, total 15974400 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0006a87e

       Device Boot    Start        End    Blocks  Id  System
    /dev/sdb1   *      2048   15972351   7985152   b  W95 FAT32
    ubuntu@ubuntu:~$

    Can anybody shed some light? Thank you in advance.

    EDIT: I just did another trial, with 13.04 this time, and still no luck.
    BIOS: Secure Boot: disabled; Enable CSM: always
    1. Deleted the previous Ubuntu partition and swap-area partition, so I now have free space.
    2. Used USB installer to prepare a USB stick with ubuntu-13.04-desktop-amd64.iso.
    3. Rebooted: the live USB didn't detect Windows 8, so I used Something else.
    4. Created an ext4 partition for /.
    5. Created a partition for the swap area.
    6. The default GRUB path is /dev/sda; clicked install.
    The Acer always boots into Windows.

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >