Search Results

Search found 975 results on 39 pages for 'grant fritchey'.


  • Utility to grant admin rights to a user in Windows XP for a few hours/days?

    - by user15660
    I have two accounts on my Windows XP Home desktop. The default limited user is used for everything, and the second user, which has admin rights, is used only for installations. I do this to avoid malware infestations during web browsing, and the limited user account guards against online threats to a good extent, but many programs, like Revo Uninstaller, refuse to run under limited rights. Many installs I run from the limited user by selecting "Run as" from the right-click context menu of the .exe file, but some apps definitely need admin rights. I use "switch user" to go to the admin account and do the install/uninstall, but the admin user has none of my preferences or bookmarks set up, nor does it have my Locate32 indexing done and ready for fast searches. Is there a utility which I can run ("run as" the administrator login) to grant my limited user admin rights for a limited period, like a few hours or days? Please help. I guess MS might have closed many doors here for fear of exploitation of the API. Are there any?
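    For illustration only, here is a minimal sketch of the "MakeMeAdmin"-style approach this question is circling around: a script started under the admin account (via "run as") that temporarily adds the limited user to the local Administrators group and removes it again after a set period. The account name and duration are placeholders, and it assumes PowerShell is installed (the same net.exe commands also work from a batch file); note that group membership changes only take effect at the user's next logon.

        # Run under the admin account ("run as" / switch user); 'LimitedUser' is a placeholder.
        $user  = 'LimitedUser'
        $hours = 4

        net localgroup Administrators $user /add      # grant admin rights (effective at next logon)
        Start-Sleep -Seconds ($hours * 3600)          # keep this window open for the agreed period
        net localgroup Administrators $user /delete   # revoke the rights again

    A scheduled task for the removal step would survive a reboot, which this simple sleep does not.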

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics as well as network with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant, between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "In" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcolm (twitter), Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an iPad 2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below, note my special SQL Server Central Friday Shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months) both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy). 
Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically the session was on Tim's Period Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day to day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun".  It was all about the different transaction isolation levels and how they work. There is so often confusion in this area and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see the Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers including sitting next to the Marjory Glacier for  about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm thick wet suits (an adventure all by itself) so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling so there was time for a bunch of classes. 
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • How to Grant IIS 7.5 access to a certificate in certificate store?

    - by thames
    In Windows 2003 it was simple to do: one could use winhttpcertcfg.exe (download) to give the "NETWORK SERVICE" account access to a certificate. I'm now using Windows Server 2008 R2 with IIS 7.5 and I am unable to find where and how to set access permissions to a certificate in the certificate store. This post showed how to do it in Vista, and that the winhttpcertcfg features were added into the certificates MMC, however it doesn't seem to work with imported certificates, or doesn't work anymore on Server 2008 R2. So does anyone have any idea how to give IIS 7.5 the correct permissions to read a certificate from the certificate store? And also, which IIS 7.5 account needs the permission?
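    A hedged sketch of the usual approach on Server 2008 R2: find the file that backs the certificate's private key and grant read access on it to the IIS identity, typically the application pool identity (e.g. "IIS AppPool\DefaultAppPool") or the IIS_IUSRS group. The subject filter and account below are placeholders, it assumes a single matching certificate with an RSA/CSP key (whose file lives under MachineKeys), and it must run elevated; the same thing can be done interactively in the Certificates MMC via All Tasks > Manage Private Keys.

        # Placeholders: adjust the subject filter and the account being granted access.
        $cert = Get-ChildItem Cert:\LocalMachine\My |
                Where-Object { $_.Subject -like '*mysite.example.com*' }

        # For RSA/CSP certificates the private key is a file under MachineKeys.
        $keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
        $keyFile = Join-Path "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys" $keyName

        # Grant read access to the IIS identity (app pool identity or IIS_IUSRS).
        $acl  = Get-Acl $keyFile
        $rule = New-Object System.Security.AccessControl.FileSystemAccessRule('IIS AppPool\DefaultAppPool', 'Read', 'Allow')
        $acl.AddAccessRule($rule)
        Set-Acl $keyFile $acl

    If the certificate uses a CNG key, $cert.PrivateKey comes back empty and the key file lives elsewhere, so the interactive Manage Private Keys route is simpler in that case.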

    Read the article

  • In Windows 7 Home Premium, is it possible to grant a user account the "log on as a service" right and if so, how?

    - by Ryan Johnson
    The title says it all. I need to have the ability for a local user account to log on as a service on a computer running Windows 7 Home Premium. In Windows 7 Ultimate, this is accomplished by going to Control Panel - Administrative Tools - Local Security Policy and adding the user to the "Log on as a service" policy. In Home Premium, there is no Local Security Policy in the Control Panel. Is there another way to add the user to that policy (e.g. a registry setting), or is my only recourse to upgrade the computer to Windows 7 Professional? Thanks in advance, Ryan
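    One hedged possibility, sketched below: the "Log on as a service" right (SeServiceLogonRight) is not stored in the registry but in the local security database, which secedit.exe can export and re-import even on editions that lack the Local Security Policy console. This assumes secedit.exe is present on Home Premium (it normally is; only the MMC snap-in is missing), uses a placeholder account name, and should be run from an elevated PowerShell prompt.

        # Placeholder account; use DOMAIN\name or COMPUTER\name as appropriate.
        $account   = "$env:COMPUTERNAME\MyServiceUser"
        $ntAccount = New-Object System.Security.Principal.NTAccount($account)
        $sid       = $ntAccount.Translate([System.Security.Principal.SecurityIdentifier]).Value

        # Export the current user-rights assignments, append the SID, and re-apply them.
        $inf = "$env:TEMP\user-rights.inf"
        secedit /export /cfg $inf /areas USER_RIGHTS | Out-Null

        (Get-Content $inf) -replace '^(SeServiceLogonRight\s*=.*)$', "`$1,*$sid" |
            Set-Content $inf -Encoding Unicode

        # If the export has no SeServiceLogonRight line, add one under [Privilege Rights] instead.
        secedit /configure /db "$env:TEMP\user-rights.sdb" /cfg $inf /areas USER_RIGHTS | Out-Null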

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my url is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything that's under /path/to/git. I'm attempting to give john read/write access to everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way because this seems too complicated for a very basic purpose. Q: How should I set up my permissions to give john rw access to anything under /path/to/git/myrepo and make it resilient to changes in the tree structure? Q2: If I should take a step back and change the general approach, please tell me.

    Read the article

  • How to change mount to grant user write permissions?

    - by nals
    I am on TomatoUSB, and using the feature to have a NAS. The only way I can write to the Samba share is if I force root: [global] interfaces = 127.0.0.1, 192.168.1.1/24 bind interfaces only = no workgroup = WORKGROUP netbios name = TOMATO security = share wins support = yes name resolve order = wins lmhosts hosts bcast guest account = nobody [Public] path = /mnt/sda2 read only = no public = yes only guest = yes guest ok = yes browseable = yes comment = Network share force user = root writeable = yes I don't really like the idea of having to use root to allow write access to my share. I already have a Samba account named nobody created to allow access to the share. However, every time I try to write I get an access denied error. fstab: /dev/sda2 /mnt/sda2 vfat defaults 0 0 Furthermore, every time I try to chmod 777 /tmp/mnt/sda2 the permissions are not changed, and no error is produced. They stay 755. drwxr-xr-x 2 root root 4096 Jun 4 01:49 sda2 Basically: how can I give the user nobody write permissions to my mount? dev name: /dev/sda2 dev mount: /tmp/mnt/sda2

    Read the article

  • Resolving "PLS-00201: identifier 'DBMS_SYSTEM.XXXX' must be declared" Error

    - by Giri Mandalika
    Here is a failure sample. SQL> set serveroutput on SQL> alter package APPS.FND_TRACE compile body; Warning: Package Body altered with compilation errors. SQL> show errors Errors for PACKAGE BODY APPS.FND_TRACE: LINE/COL ERROR -------- ----------------------------------------------------------------- 235/6 PL/SQL: Statement ignored 235/6 PLS-00201: identifier 'DBMS_SYSTEM.SET_EV' must be declared .. By default, the DBMS_SYSTEM package is accessible only from the SYS schema, and there is no public synonym created for this package. So, the solution is to create the public synonym and grant the "execute" privilege on the DBMS_SYSTEM package to all database users or to a specific user. e.g., SQL> CREATE PUBLIC SYNONYM dbms_system FOR dbms_system; Synonym created. SQL> GRANT EXECUTE ON dbms_system TO APPS; Grant succeeded. - OR - SQL> GRANT EXECUTE ON dbms_system TO PUBLIC; Grant succeeded. SQL> alter package APPS.FND_TRACE compile body; Package body altered. Note that merely granting the execute privilege is not enough -- creating the public synonym is just as important to resolve this issue.

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
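As an aside, sys.dm_exec_query_memory_grants shows the requested, granted and used memory for each executing session, which is a convenient way to watch a grant like this while one of the test queries runs. Here is a minimal PowerShell sketch to run from a second session, assuming the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell module is installed; the instance name is a placeholder:

    # Placeholder instance name; run from a second session while the test query executes.
    $query = 'SELECT session_id, requested_memory_kb, granted_memory_kb, used_memory_kb FROM sys.dm_exec_query_memory_grants;'
    Invoke-Sqlcmd -ServerInstance '.\SQL2008' -Query $query | Format-Table -AutoSize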
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • WCF Error when using “Match Data” function in MDS Excel AddIn

    - by Davide Mauri
    If you’re using MDS and DQS with the Excel Integration you may get an error when trying to use the “Match Data” feature, which uses DQS to help identify duplicate data in your data set. The error is quite obscure and you have to enable WCF error reporting to get the error details; you’ll discover that they are related to some missing permissions in the MDS and DQS_STAGING_DATA databases. To fix the problem you just have to grant the needed permissions, as the following script does: use MDS go GRANT SELECT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] GRANT INSERT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] GRANT DELETE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] GRANT UPDATE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb] USE [DQS_STAGING_DATA] GO ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [VMSRV02\mdsweb] ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [VMSRV02\mdsweb] ALTER AUTHORIZATION ON SCHEMA::[db_ddladmin] TO [VMSRV02\mdsweb] GO Where “VMSRV02\mdsweb” is the user you configured for MDS Service execution. If you don’t remember it, you can just check which account has been assigned to the IIS application pool that your MDS website is using:
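    For example, here is a hedged PowerShell sketch of that last check, assuming the IIS WebAdministration module is available and using a placeholder application pool name:

        Import-Module WebAdministration

        # Placeholder pool name; list the pools on the box with: Get-ChildItem IIS:\AppPools
        $pool = Get-Item 'IIS:\AppPools\MDS Application Pool'
        $pool.processModel.identityType   # e.g. SpecificUser or ApplicationPoolIdentity
        $pool.processModel.userName       # the account to grant the permissions above to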

    Read the article

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This enabled organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes. Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the “pooling” letter of credit draw method to requesting on a “grant-by-grant” basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported Agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (The NIH had planned to fully transition to this new method by the end of fiscal 2014; however, the impact to institutions was deemed to be significant enough that a reprieve was recently granted.) In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:
    1. Federal Award Identification Number on the Proposal and Award Profile
    2. Letter of credit fields on contract lines to support award basis draws and comply with Federal close out mandates
    3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
    4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
    5. Added Subaccount field and contract info to be displayed on the LOC summary page
    6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
    7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching
    8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close
    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government. For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

    Read the article

  • Oracle logical standby fails with ORA-01919

    - by DCookie
    I have an Oracle logical standby database being managed via data guard. Just this morning the redo apply process began failing with an ORA-01919 error, indicating one of our application roles did not exist. However, I can see the role on both primary and standby databases. We also have a physical standby that has long since applied the redo where this is happening on the logical, without issue. I have opened an SR with Oracle. I was wondering if anyone out there has seen this before. I guess I should mention: Oracle 10.2.0.4, Win2003 Server SP2. UPDATE: So far, Oracle Support has not provided an answer. I thought I'd post here what I have learned so far. It appears that a grant of DBA on the primary host to a role works fine for users granted the role. It does not work on the logical standby. IOW: create role TEST; grant dba to TEST; grant TEST to auser; connect auser set role TEST; grant <existing role> to <existing user>; This works on the primary instance but fails on the logical. A workaround appears to be to grant each role on the primary to the role TEST with admin option in the logical: grant <existing role> to TEST with admin option; <== do this on the logical standby Then the command works on the logical standby.

    Read the article

  • PowerShell & SQL Compare

    - by Grant Fritchey
    Just a quick blog post to share a couple of scripts for using PowerShell to call SQL Compare. This is an example from my session at SQL in the City on setting up a sandbox development process. This just runs a compare between a set of scripts and a database and deploys it. set-Location “c:\Program Files (x86)\Red Gate\SQL Compare 10\”; ./sqlcompare /s2:DOJO /db2:MovieManagement_Sandbox /sourcecontrol1 /vu1:grant /vp1:12345 /r1:HEAD /sfx:scripts.xml /sync /mfx:migrations.xml /verbose; I would not recommend using the /verbose output for real automation, but I’m showing off how the tool works. This particular script does a compare straight from source control to a database on my server. You can use variables where I’ve hard coded. That’s it. Works great. Just wanted to share it out there. I have others that I’ll track down and put up here.  
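    As a small follow-on, here is one hedged way to parameterize that same call (the server, database, credentials and paths below are the placeholders from the example above; the switches are the ones used in the post):

        $sqlCompare = 'C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe'
        $server     = 'DOJO'
        $database   = 'MovieManagement_Sandbox'
        $scUser     = 'grant'
        $scPassword = '12345'

        $arguments = @(
            "/s2:$server", "/db2:$database", '/sourcecontrol1',
            "/vu1:$scUser", "/vp1:$scPassword", '/r1:HEAD',
            '/sfx:scripts.xml', '/sync', '/mfx:migrations.xml'
        )

        & $sqlCompare @arguments
        # Some SQL Compare versions use non-zero exit codes for informational states; check the docs.
        if ($LASTEXITCODE -ne 0) { Write-Warning "sqlcompare exited with code $LASTEXITCODE" }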

    Read the article

  • Why do users have to enter a 7-digit twitter PIN to grant my application access?

    - by Tony
    I am implementing some Ruby on Rails code to let my users tweet. I am creating the proper OAuth link...something like http://twitter.com/oauth/authorize?oauth_token=y2RkuftYAEkbEuIF7zKMuzWN30O2XxM8U9j0egtzKv But after my test account grants access to Twitter, it pulls up a page saying "You've successfully granted access to . Simply return to and enter the following PIN to complete the process. 1234567" I have no idea where the user should enter this PIN or why they have to do that. I don't think this should be a necessary step. Twitter should be redirecting the user to the callback URL I provided in the application settings. Does anyone know why this is happening? UPDATE: I found this article that states I need to send my users to this URL (note "authenticate" instead of "authorize"): http://twitter.com/oauth/authenticate?oauth_token=y2RkuftYAEkbEuIF7zKMuzWN30O2XxM8U9j0egtzKv I made the change, but Twitter redirects the user to the authorize path after he clicks "Allow", which then gives him the 7-digit PIN again!

    Read the article

  • How do I grant anonymous access to a url using FormsAuthentication?

    - by Brian Bolton
    For the most part, my webapp requires authentication to do anything. There are a few pages, namely the homepage, that I'd like people to be able to access without authenticating. Specifically, I'd like to allow anonymous access to these urls: /home /default.aspx I'm using asp.net MVC and FormsAuthentication. Both urls point to the same view: /home/index.aspx Here is my current configuration in web.config. <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" /> </authentication> <authorization> <deny users="?" /> </authorization> Reading the documentation for the authorization tag, it says "Configures the authorization for a Web application, controlling client access to URL resources." It seems like I should be able to use the authorization tag to specify a url and allow access. Something like: <authentication mode="Forms"> <forms loginUrl="~/Account/LogOn" timeout="2880" /> </authentication> <authorization> <deny users="?" /> </authorization> <authorization url="/default.aspx"> <allow users="?" /> </authorization> <authorization url="/home"> <allow users="?" /> </authorization>

    Read the article

  • What permissions do I need to grant to run RavenDB in Server mode?

    - by dalesmithtx
    I'm reading through Rob Ashton's excellent blog post on RavenDB: http://codeofrob.com/archive/2010/05/09/ravendb-an-introduction.aspx and I'm working through the code as I read. But when I try to add an index, I get a 401 error. Here's the code: class Program { static void Main(string[] args) { using (var documentStore = new DocumentStore() { Url = "http://localhost:8080" }) { documentStore.Initialise(); documentStore.DatabaseCommands.PutIndex( "BasicEntityBySomeData", new IndexDefinition<BasicEntity, BasicEntity>() { Map = docs => from doc in docs where doc.SomeData != null select new { SomeData = doc.SomeData }, }); string entityId; using (var documentSession = documentStore.OpenSession()) { var entity = new BasicEntity() { SomeData = "Hello, World!", SomeOtherData = "This is just another property", }; documentSession.Store(entity); documentSession.SaveChanges(); entityId = entity.Id; var loadedEntity = documentSession.Load<BasicEntity>(entityId); Console.WriteLine(loadedEntity.SomeData); var docs = documentSession.Query<BasicEntity>("BasicEntityBySomeData") .Where("SomeData:Hello~") .WaitForNonStaleResults() .ToArray(); docs.ToList().ForEach(doc => Console.WriteLine(doc.SomeData)); Console.Read(); } } } It throws the 401 error when on the line that makes the PutIndex() call. Any ideas what permissions I need to apply? And where I need to apply them?

    Read the article

  • ASP.NET - I am generating an .XLS file with a DLL, how do I grant permissions for writing to file?

    - by hamlin11
    I'm generating an .XLS file with a DLL (Excel Library http://code.google.com/p/excellibrary/) I've added this DLL as a reference to my project. The code to save the .XLS to disk is running, but it's encountering a permissions issue. I've attempted to set full access for IUSRS, Network Service, and Everyone just to see if I could get it working, and none of these seems to make a difference. Here's where I'm trying to write the file: c:/temp/test1.xls Here's the error: [SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.] System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0 System.Security.CodeAccessPermission.Demand() +54 System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy) +2103 System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) +138 System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) +89 System.IO.File.Open(String path, FileMode mode, FileAccess access, FileShare share) +58 ExcelLibrary.Office.CompoundDocumentFormat.CompoundDocument.Create(String file) +88 ExcelLibrary.Office.Excel.Workbook.Save(String file) +73 CHC_Reports.LitAnalysis.CreateSpreadSheet_Click(Object sender, EventArgs e) in C:\Users\brian\Desktop\Enterprise Manager\CHC_Reports\LitAnalysis.aspx.vb:19 System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11041511 System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11041050 System.Web.UI.Page.ProcessRequest() +91 System.Web.UI.Page.ProcessRequest(HttpContext context) +240 ASP.litanalysis_aspx.ProcessRequest(HttpContext context) +52 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +599 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +171 Any idea what I need to do to diagnose the permissions issue and allow the file creation? Thanks.

    Read the article

  • How much user data should be required to grant a password reset?

    - by Andrew Heath
    I'm looking to add password-reset functionality to my site and have been browsing the numerous threads discussing various aspects of that issue here on SO. One thing I haven't really seen clarified is how much information to require from the user for confirmation before sending out the reset email. is email alone enough? email + account username? email + account username + some other identifying value all accounts must input? I don't want my site to seem like an old wrinkly nun with a ruler, but I don't want people to be able to abuse the password reset system willy-nilly. Suggestions?

    Read the article

  • SecurityException from Activator.CreateInstance(): How to grant permissions to an Assembly?

    - by user365164
    I have been loading an assembly via Assembly.LoadFrom(@"path"); and then doing Type t = asm.GetType("Test.Test"); test = Activator.CreateInstance(t, new Object[] { ... }); and it was working fine, but now that I moved the DLL I am getting the following: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --- System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, etc .. For the sake of brevity, it seems the demand was for a PermissionSet that allowed ControlAppDomain and it's not getting it. My question is: how can I create this PermissionSet and pass it to the instance or assembly? I've been googling for hours to no avail.

    Read the article

  • Can I grant explicit Javascript methods to a different-host iframe?

    - by Matchu
    I'm thinking about a system in which I allow users to create Javascript-empowered widgets for other users to embed in their dashboard on my website. I'd like to limit these widgets fairly strictly, so each would exist as an iframe kept on its own unique hostname: the widget with ID #47 would be accessible at w47.widgets.example.com, for example. It would be helpful, for permission-granting dialogs and the like, to allow the widget to call very specific methods explicitly granted by the parent window, without authorizing the iframe to do whatever it likes with the parent frame on the user's behalf. Is it possible for a parent document to explicitly allow certain method calls to a child document on a different host?

    Read the article

  • Partitioned Repository for WebCenter Content using Oracle Database 11g

    - by Adao Junior
    One of the biggest challenges for content management solutions is storage management, due to the high volume of constantly growing information. Even if you have storage appliances and a lot of terabytes, things like backup, compression, deduplication, storage relocation, encryption and availability can be a nightmare. One standard option that you have with Oracle WebCenter Content is to store data in the database. And the Oracle Database lets you leverage features like compression, deduplication, encryption and seamless backup. But with a huge volume, the challenge is passed to the DBA to keep the WebCenter Content database up and running. One solution is the use of DB partitions for your content repository, but what are the implications of this? Can I fit this with my business requirements? Well, yes. It's up to you how you will manage that; you just need a good plan. During your "storage brainstorm plan", keep in mind what you need: do you need to store petabytes of documents? Do you need everything online? Is there a way to logically separate the "good content" from the "legacy content"? The first thing that comes to my mind is to use the creation date of the document, but you need to remember that a document can receive a lot of revisions, so maybe you should consider the revision creation date. Your plan can also have complex rules, such as per Document Type, per a custom metadata field like department, or a hybrid of per date, per DocType and a specific virtual folder. Extrapolating the use, you can have your repository distributed across different servers, different disks, different disk types (such as SSDs, SAS, SATA, tape, ...), separated according to your business requirements, separating the "hot" content from the legacy and easily matching your compliance requirements. If you think of partitioning by revision, the simple way is to consider the dId, which is the sequential unique id for every content item created using WebCenter Content, or the dLastModified, which is the date field of the FileStorage table that records when the content was added to the DB table using SecureFiles. Using the scenario of a partitioned repository with a hierarchical separation by date, we will transform the FileStorage table into a partitioned table using "Partition by Range" on the dLastModified column (you can use the dId or a join with other tables for other metadata such as dDocType, Security, etc.). The test scenario below covers: previously existing data on the JDBC Storage to be migrated to the new partitioned JDBC Storage; partition by date; automatic generation of new partitions based on a pre-defined interval (available only with Oracle Database 11g+); deduplication and compression for legacy data; Oracle WebCenter Content 11g PS5 (which could present some customizations that do not affect the test scenario). For the test case you need some data stored using JDBC Storage to act as the "legacy" data. If you have not done this before, just create a storage rule pointed to the JDBC Storage: enable the StorageRule metadata field in the UI and upload some documents using this rule. For this test case you can run as the schema owner or as a DBA user. We will use the schema owner TESTS_OCS. I can't forget to mention that this is just a test and you should do a proper backup of your environment. When you use the schema owner, you need some privileges; using the DBA user, grant the privileges needed: REM Grant privileges required for online redefinition. 
      GRANT EXECUTE ON DBMS_REDEFINITION TO TESTS_OCS;
      GRANT ALTER ANY TABLE TO TESTS_OCS;
      GRANT DROP ANY TABLE TO TESTS_OCS;
      GRANT LOCK ANY TABLE TO TESTS_OCS;
      GRANT CREATE ANY TABLE TO TESTS_OCS;
      GRANT SELECT ANY TABLE TO TESTS_OCS;
      REM Privileges required to perform cloning of dependent objects.
      GRANT CREATE ANY TRIGGER TO TESTS_OCS;
      GRANT CREATE ANY INDEX TO TESTS_OCS;
    In our test scenario we separate the content into Legacy, Day1, Day2, Day3 and Future. The last range is partitioned automatically, using three tablespaces in round-robin mode. In a real scenario the partition rule could be per month, per year or any other rule you choose. Tablespaces for the test scenario:
      CREATE TABLESPACE TESTS_OCS_PART_LEGACY DATAFILE 'tests_ocs_part_legacy.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_DAY1 DATAFILE 'tests_ocs_part_day1.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_DAY2 DATAFILE 'tests_ocs_part_day2.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_DAY3 DATAFILE 'tests_ocs_part_day3.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_A DATAFILE 'tests_ocs_part_round_robin_a.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_B DATAFILE 'tests_ocs_part_round_robin_b.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
      CREATE TABLESPACE TESTS_OCS_PART_ROUND_ROBIN_C DATAFILE 'tests_ocs_part_round_robin_c.dat' SIZE 500K AUTOEXTEND ON NEXT 500K MAXSIZE UNLIMITED;
    Before starting, gather optimizer statistics on the current FileStorage table:
      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage', cascade => TRUE);
    Now check whether the table can go through online redefinition:
      EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('TESTS_OCS', 'FileStorage', DBMS_REDEFINITION.CONS_USE_PK);
    If there are no error messages, you are good to go.
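    Before fixing the partition boundaries of the interim table, it can be useful to check how the existing content is distributed over time. This is a minimal sketch, using only the FileStorage columns that appear later in this post (dLastModified and dFileSize):
      -- How many revisions, and how many megabytes, were stored per day.
      SELECT TRUNC(dLastModified)                  AS load_day,
             COUNT(*)                              AS revisions,
             ROUND(SUM(dFileSize) / 1024 / 1024, 1) AS mb_stored
        FROM FileStorage
       GROUP BY TRUNC(dLastModified)
       ORDER BY load_day;
    The row counts and sizes per day (or per month, using TRUNC(dLastModified, 'MM')) give a rough idea of where the Legacy/Day1/Day2/Day3 boundaries, or a different interval, make sense.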
    Next, create a partitioned interim FileStorage table. You need a new table carrying the partition definition to act as the interim table:
      CREATE TABLE FILESTORAGE_Part (
        DID NUMBER(*,0) NOT NULL ENABLE,
        DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
        DLASTMODIFIED TIMESTAMP (6),
        DFILESIZE NUMBER(*,0),
        DISDELETED VARCHAR2(1 CHAR),
        BFILEDATA BLOB
      )
      LOB (BFILEDATA) STORE AS SECUREFILE (
        ENABLE STORAGE IN ROW
        NOCACHE LOGGING
        KEEP_DUPLICATES
        NOCOMPRESS
      )
      PARTITION BY RANGE (DLASTMODIFIED)
      INTERVAL (NUMTODSINTERVAL(1,'DAY'))
      STORE IN (TESTS_OCS_PART_ROUND_ROBIN_A, TESTS_OCS_PART_ROUND_ROBIN_B, TESTS_OCS_PART_ROUND_ROBIN_C)
      (
        PARTITION FILESTORAGE_PART_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_LEGACY
          LOB (BFILEDATA) STORE AS SECUREFILE (
            TABLESPACE TESTS_OCS_PART_LEGACY
            RETENTION NONE
            DEDUPLICATE
            COMPRESS HIGH
          ),
        PARTITION FILESTORAGE_PART_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_DAY1
          LOB (BFILEDATA) STORE AS SECUREFILE (
            TABLESPACE TESTS_OCS_PART_DAY1
            RETENTION AUTO
            KEEP_DUPLICATES
            COMPRESS
          ),
        PARTITION FILESTORAGE_PART_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_DAY2
          LOB (BFILEDATA) STORE AS SECUREFILE (
            TABLESPACE TESTS_OCS_PART_DAY2
            RETENTION AUTO
            KEEP_DUPLICATES
            NOCOMPRESS
          ),
        PARTITION FILESTORAGE_PART_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM'))
          TABLESPACE TESTS_OCS_PART_DAY3
          LOB (BFILEDATA) STORE AS SECUREFILE (
            TABLESPACE TESTS_OCS_PART_DAY3
            RETENTION AUTO
            KEEP_DUPLICATES
            NOCOMPRESS
          )
      );
    After the creation you should see your partitions defined. Note that only the fixed range partitions are created at this point; none of the interval partitions exist yet.
    Start the redefinition process:
      BEGIN
        DBMS_REDEFINITION.START_REDEF_TABLE(
          uname        => 'TESTS_OCS'
         ,orig_table   => 'FileStorage'
         ,int_table    => 'FileStorage_PART'
         ,col_mapping  => NULL
         ,options_flag => DBMS_REDEFINITION.CONS_USE_PK
        );
      END;
    This operation can take some time to complete, depending on how much content you have and on the size of the table. Using the DBA user you can check the progress with this command:
      SELECT * FROM v$sesstat WHERE sid = 1;
    Copy the dependent objects:
      DECLARE
        redefinition_errors PLS_INTEGER := 0;
      BEGIN
        DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
          uname            => 'TESTS_OCS'
         ,orig_table       => 'FileStorage'
         ,int_table        => 'FileStorage_PART'
         ,copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS
         ,copy_triggers    => TRUE
         ,copy_constraints => TRUE
         ,copy_privileges  => TRUE
         ,ignore_errors    => TRUE
         ,num_errors       => redefinition_errors
         ,copy_statistics  => FALSE
         ,copy_mvlog       => FALSE
        );
        IF (redefinition_errors > 0) THEN
          DBMS_OUTPUT.PUT_LINE('>>> FileStorage to FileStorage_PART temp copy Errors: ' || TO_CHAR(redefinition_errors));
        END IF;
      END;
    With the DBA user, verify that there are no errors:
      SELECT object_name, base_table_name, ddl_txt FROM DBA_REDEFINITION_ERRORS;
    Note that it will show two rows related to the constraints; this is expected.
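    As an aside, if the START_REDEF_TABLE call above runs for a long time, the standard v$session_longops view is another place to watch it from. This is a generic sketch (not from the original post) that lists long-running operations still in progress:
      -- Requires access to the dynamic performance views (run as a DBA user).
      SELECT opname, target, sofar, totalwork,
             ROUND(100 * sofar / totalwork, 1) AS pct_done
        FROM v$session_longops
       WHERE totalwork > 0
         AND sofar < totalwork;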
    Synchronize the interim table FileStorage_PART:
      BEGIN
        DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
          uname      => 'TESTS_OCS',
          orig_table => 'FileStorage',
          int_table  => 'FileStorage_PART');
      END;
    Gather statistics on the new table:
      EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'FileStorage_PART', cascade => TRUE);
    Complete the redefinition:
      BEGIN
        DBMS_REDEFINITION.FINISH_REDEF_TABLE(
          uname      => 'TESTS_OCS',
          orig_table => 'FileStorage',
          int_table  => 'FileStorage_PART');
      END;
    During this step the FileStorage table is locked in exclusive mode until the operation finishes. After the last command the FileStorage table is partitioned. If you have content outside the fixed ranges, the new interval partitions are created automatically, so no error is raised if you "forgot" to create all the future ranges. You can now drop the interim FileStorage_PART table:
      DROP TABLE FileStorage_PART PURGE;
    To check that the FileStorage table is valid and partitioned, use:
      SELECT num_rows, partitioned FROM user_tables WHERE table_name = 'FILESTORAGE';
    You can also list the contents of a specific partition, for example:
      SELECT * FROM FileStorage PARTITION (FILESTORAGE_PART_LEGACY);
    Some useful commands to inspect the partitions (run them as a DBA user):
      SELECT * FROM DBA_TAB_PARTITIONS WHERE table_name = 'FILESTORAGE';
      SELECT * FROM DBA_TABLESPACES WHERE tablespace_name LIKE 'TESTS_OCS%';
    Once the redefinition completes, the new FileStorage table holds all content whose storage rule points to the JDBC Storage, partitioned according to the rules defined when the temporary interim FileStorage_PART table was created. At this point you can test WebCenter Content by downloading documents (originals and renditions). Note that the content may already be in the cache area: check the weblayout directory for a file with the same id, then click the web rendition of your test file and verify that the file is created and opens correctly; if so, everything is working. The redefinition process can be repeated as many times as you like, which lets you try different layouts over and over again.
    Now some interesting maintenance actions related to the partitions (a sketch of the corresponding commands appears at the end of this post):
    - Make a tablespace read only. Viewing is unaffected, since WebCenter Content does not alter stored revisions. However, when you try to delete a content item that lives in a read-only tablespace, an error occurs and the document is not deleted. Today the only way to prevent these errors is a custom component that checks the partitions and, for a document in a "read only" repository, deletes the metadata and marks the document for deletion during the next database maintenance window, such as a new redefinition.
    - Take a tablespace offline, for archiving purposes or any other reason.
      When you try to open a document stored in that tablespace you will get an error saying the content could not be retrieved, but the other, online tablespaces are not affected. The same happens when deleting documents. Again, a custom component is the solution: for a document that is "out of range" it can show a message that the repository for that document is offline, and this can be extended with an option for the user to request that it be brought back online.
    - Move some legacy content to an offline repository (table) using the Exchange option, which moves the content of one partition into an empty non-partitioned table such as FileStorage_LEGACY. Note that this removes the rows from FileStorage, so the stored content can no longer be opened, and you always need to keep the indexes and constraints in mind.
    - Run a redefinition that separates the original content (vault) from the renditions and, at the same time, separates by date. This can be an option for DAM environments that want a dedicated location for the renditions and prefer to keep the original files on lower-performance storage. The process is the same; you just need to change the interim table script to use composite partitioning, something like:
      CREATE TABLE FILESTORAGE_RenditionPart (
        DID NUMBER(*,0) NOT NULL ENABLE,
        DRENDITIONID VARCHAR2(30 CHAR) NOT NULL ENABLE,
        DLASTMODIFIED TIMESTAMP (6),
        DFILESIZE NUMBER(*,0),
        DISDELETED VARCHAR2(1 CHAR),
        BFILEDATA BLOB
      )
      LOB (BFILEDATA) STORE AS SECUREFILE (
        ENABLE STORAGE IN ROW
        NOCACHE LOGGING
        KEEP_DUPLICATES
        NOCOMPRESS
      )
      PARTITION BY LIST (DRENDITIONID)
      SUBPARTITION BY RANGE (DLASTMODIFIED)
      (
        PARTITION Vault VALUES ('primaryFile')
        (
          SUBPARTITION FILESTORAGE_VAULT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_VAULT_FUTURE VALUES LESS THAN (MAXVALUE)
        ),
        PARTITION WebLayout VALUES ('webViewableFile')
        (
          SUBPARTITION FILESTORAGE_WEBLAYOUT_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_WEBLAYOUT_FUTURE VALUES LESS THAN (MAXVALUE)
        ),
        PARTITION Special VALUES ('Special')
        (
          SUBPARTITION FILESTORAGE_SPECIAL_LEGACY VALUES LESS THAN (TO_DATE('05-APR-2012 12.00.00 AM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_DAY1 VALUES LESS THAN (TO_DATE('06-APR-2012 07.25.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_DAY2 VALUES LESS THAN (TO_DATE('06-APR-2012 07.55.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_DAY3 VALUES LESS THAN (TO_DATE('06-APR-2012 07.58.00 PM', 'DD-MON-YYYY HH.MI.SS AM')) LOB (BFILEDATA) STORE AS SECUREFILE,
          SUBPARTITION FILESTORAGE_SPECIAL_FUTURE VALUES LESS THAN (MAXVALUE)
        )
      ) ENABLE ROW MOVEMENT;
    A future post on partitioned repositories will include a sample component to handle the possible exceptions when you take a tablespace or partition offline or move it elsewhere, and it can also cover integration with Retention Management and Records Management. Another partitioning-related subject is the ability to create a FileStore Provider that points to a different database, taking distributed storage versus performance a level further. Let us know if this is important to you, or leave a comment if you have a use case that is not listed. Cross-posted on the blog.ContentrA.com
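    For reference, here is the sketch of the maintenance commands mentioned in the list above (read-only tablespace, offline tablespace and partition exchange). It uses the tablespace and partition names from this test scenario plus a hypothetical archive table FileStorage_LEGACY, assumed to have been created with the same columns as FileStorage; treat it as a starting point, not a drop-in script:
      -- Make the legacy tablespace read only (and bring it back if needed).
      ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ ONLY;
      ALTER TABLESPACE TESTS_OCS_PART_LEGACY READ WRITE;

      -- Take the legacy tablespace offline for archiving, and back online.
      ALTER TABLESPACE TESTS_OCS_PART_LEGACY OFFLINE NORMAL;
      ALTER TABLESPACE TESTS_OCS_PART_LEGACY ONLINE;

      -- Move the legacy partition into an empty, non-partitioned archive table.
      ALTER TABLE FileStorage
        EXCHANGE PARTITION FILESTORAGE_PART_LEGACY
        WITH TABLE FileStorage_LEGACY
        INCLUDING INDEXES
        WITHOUT VALIDATION;
    As noted above, exchanging the partition removes those rows from FileStorage, so WebCenter Content can no longer serve that content until the partition is exchanged back.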

    Read the article

  • Git-windows case sensitive file names not handled properly

    - by dhanasekar79
    We have a bare Git repository on Unix that contains files whose names differ only in case. Example: GRANT.sql and grant.sql. When we clone the bare repository from Unix onto a Windows box, git status reports the file as modified. The working tree is loaded with only grant.sql, but git status compares grant.sql against GRANT.sql and shows the file as modified in the working tree. I tried setting core.ignorecase to false, but the result is the same. Is there any way to fix this issue?

    Read the article

  • How do I grant a database role execute permissions on a schema? What am I doing wrong?

    - by Lewray
    I am using SQL Server 2008 Express edition. I have created a login, user, role and schema. I have mapped the user to the login and assigned the role to the user. The schema contains a number of tables and stored procedures, and I would like the role to have execute permissions on the entire schema. I have tried granting execute permission through Management Studio and by entering the command in a query window: GRANT EXEC ON SCHEMA::schema_name TO role_name. But when I connect to the database using SQL Server Management Studio (as the login I have created), firstly I cannot see the stored procedures, and more importantly I get a permission denied error when attempting to run them. The stored procedure in question does nothing except select data from a table within the same schema. I have tried creating the stored procedure with and without the line WITH EXECUTE AS OWNER; this doesn't make any difference. I suspect that I have made an error when creating my schema, or that there is an ownership issue somewhere, but I am really struggling to get something working. The only way I have successfully managed to execute the stored procedures is by granting CONTROL permission to the role as well as EXECUTE, but I don't believe this is the correct, secure way to proceed. Any suggestions/comments would be really appreciated. Thanks.
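    A minimal T-SQL sketch of a setup that normally behaves as expected; the names MyAppLogin, MyAppUser, MyAppRole and MyAppSchema are illustrative, not taken from the question:
      -- Illustrative names; assumes the login already exists.
      CREATE USER MyAppUser FOR LOGIN MyAppLogin;
      CREATE ROLE MyAppRole AUTHORIZATION dbo;
      EXEC sp_addrolemember 'MyAppRole', 'MyAppUser';            -- SQL Server 2008 syntax

      -- EXECUTE on the schema covers every procedure in it; VIEW DEFINITION
      -- additionally lets role members browse the procedure definitions in SSMS.
      GRANT EXECUTE ON SCHEMA::MyAppSchema TO MyAppRole;
      GRANT VIEW DEFINITION ON SCHEMA::MyAppSchema TO MyAppRole;

      -- If permission errors persist, re-check the role membership.
      EXEC sp_helprolemember 'MyAppRole';
    As long as the schema's owner also owns the tables the procedures read, the SELECT inside a procedure is covered by ownership chaining, so EXECUTE alone should be enough and CONTROL should not be needed.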

    Read the article

  • BASH echo write mysql input

    - by jmituzas
    I have a bash menu where variables are written to a file that is used as MySQL input. Here's what I have:
      echo "CREATE DATABASE '$mysqldbn';
      #GRANT ALL PRIVILEGES ON *.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup' WITH GRANT OPTION;
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup';
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myip' IDENTIFIED BY '$mysqlup';
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'localhost' IDENTIFIED BY '$mysqlup';
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$rip' IDENTIFIED BY '$mysqlup';" > nmysql.db
      mysql -u root -p$mypass < nmysql.db
    The problem is that to get the variables to show up I had to put them in single quotes. The single quotes are what I want in places like '$mysqlu'@'localhost', but how can I remove the quotes and still use the variable in a statement like CREATE DATABASE '$mysqldbn'? Double quotes won't work either; I am at a loss. Thanks in advance, Joe
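    For reference, once the shell variables are expanded the generated file is meant to contain plain MySQL statements along these lines (illustrative values, not from the question; the single quotes belong around the user, host and password, while the database name is written bare or with backticks, never in single quotes):
      -- mydb, appuser and secret are placeholder values.
      CREATE DATABASE mydb;
      GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER,
            CREATE TEMPORARY TABLES, LOCK TABLES
        ON mydb.* TO 'appuser'@'localhost' IDENTIFIED BY 'secret';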

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >