Search Results

Search found 36369 results on 1455 pages for 'document write'.


  • "No XML content. Please add a root view or layout to your document."

    - by ez4nick
    I am new to Android development and am trying to follow this tutorial: http://developer.android.com/training/basics/firstapp/building-ui.html. This is what my "activity_main.xml" file looks like: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="horizontal"> <EditText android:id="@+id/edit_message" android:layout_weight="1" android:layout_width="0dp" android:layout_height="wrap_content" android:hint="@string/edit_message" /> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/button_send" /> When I click Run I get an error that says "No XML content. Please add a root view or layout to your document." and I notice that a new file called "activity_main.out.xml" has been generated. What could I be doing wrong?


  • How to use interop to read a Word document and get the page number?

    - by monkey_boys
    Microsoft.Office.Interop.Word.Application app = new Microsoft.Office.Interop.Word.Application(); object nullobj = System.Reflection.Missing.Value; object file = openFileDialog1.FileName; Microsoft.Office.Interop.Word.Document doc = app.Documents.Open( ref file, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj, ref nullobj); doc.ActiveWindow.Selection.WholeStory(); doc.ActiveWindow.Selection.Copy(); IDataObject data = Clipboard.GetDataObject(); string text = data.GetData(DataFormats.Text).ToString(); textBox2.Text = text; doc.Close(ref nullobj, ref nullobj, ref nullobj); app.Quit(ref nullobj, ref nullobj, ref nullobj); This reads the whole document text, but how do I also get the page number?
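    If the goal is the total page count rather than the copied text, one option is Document.ComputeStatistics. The sketch below is hedged: it reuses the doc and nullobj variables from the snippet above and assumes your Word PIA exposes ComputeStatistics with an optional ref argument, so check it against your interop version.

      // Hedged sketch: total number of pages in the opened document.
      // Reuses 'doc' and 'nullobj' from the code above.
      int pageCount = doc.ComputeStatistics(
          Microsoft.Office.Interop.Word.WdStatistic.wdStatisticPages, ref nullobj);
      textBox2.Text += Environment.NewLine + "Pages: " + pageCount;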


  • "No XML content. Please add a root view or layout to your document"

    - by evc
    I am trying to code a soft keyboard for Android 2.1 and up. When I add the code below to main.xml, the graphical view displays nothing and says "No XML content. Please add a root view or layout to your document". I have tried placing the code in a TextView, but still no luck; I cannot get the soft keyboard to show at all, as if my code is being ignored. I have tried these two snippets separately and neither works: <com.example.android.softkeyboard.LatinKeyboardView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/keyboard" android:layout_alignParentBottom="true" android:layout_width="match_parent" android:layout_height="wrap_content" /> <Keyboard android:keyWidth="%10p" android:keyHeight="50px" android:horizontalGap="2px" android:verticalGap="2px" > <Row android:keyWidth="32px" > <Key android:keyLabel="A" /> ... </Row> ... </Keyboard>


  • How to assign the value of document.cookie to your browser cookies?

    - by Ricket
    I'm a developer (and therefore a tester) of a website. Our site accepts any JavaScript or HTML from a user, but I haven't been successful in explaining the danger of this, as obvious as it is. So I would like to demonstrate it by logging in as my boss, to prove to him that there is definitely a real danger here. I think this will put his arguments to rest and let us move on to filtering content like this. (Note: this question is not about filtering or other JavaScript tricks.) I already know how to steal the value of the document.cookie variable with AJAX and a PHP file, but once you have that string of name=value;name=value;..., how do you apply it to your own browser? This is programming related because I am asking about tools which will help me debug my web program.


  • Why does document.querySelectorAll return a StaticNodeList rather than a real Array?

    - by Kev
    It bugs me that I can't just do document.querySelectorAll(...).map(...), even in Firefox 3.6, and I still can't find an answer, so I thought I'd cross-post the question from this blog on SO: http://blowery.org/2008/08/29/yay-for-queryselectorall-boo-for-staticnodelist/ Does anyone know of a technical reason why you don't get an Array? Or why a StaticNodeList doesn't inherit from Array in such a way that you could use map, concat, etc.? (BTW, if it's just one function you want, you can do something like NodeList.prototype.map = Array.prototype.map;...but again, why is this functionality (intentionally?) blocked in the first place?)


  • Is there a reference document for OOTB SharePoint web part interfaces?

    - by Javaman59
    I want my web parts to act as providers and consumers with OOTB SharePoint web parts, including filters. To design these web parts I need to know the interfaces of the OOTB web parts, e.g. a "Links" web part provides an IWebPartRow with a Type, URL and Notes. I can get the information by: writing a web part that interacts generically and displays the data it receives; putting the web parts on a page and looking at the available connections; or inspecting the DLLs (I haven't tried this, because I can't find the DLLs, but I guess it's possible). It seems strange to me that there is no documentation of these interfaces. Is there a Microsoft document, or a book, which documents the OOTB web parts and filters?
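    A minimal sketch of the first approach (a generic consumer web part that simply dumps whatever schema and row an OOTB provider sends) might look like the following. The class and method names are illustrative, and this is plain ASP.NET connection plumbing rather than anything documented by SharePoint itself:

      using System;
      using System.ComponentModel;
      using System.Web.UI;
      using System.Web.UI.WebControls.WebParts;

      // Hypothetical diagnostic consumer: connect it to an OOTB provider and it
      // renders the column names (the IWebPartRow schema) plus the row values.
      public class RowSchemaDumpPart : WebPart
      {
          private IWebPartRow _provider;

          [ConnectionConsumer("Row consumer")]
          public void SetConnectionInterface(IWebPartRow provider)
          {
              _provider = provider;
          }

          protected override void OnPreRender(EventArgs e)
          {
              base.OnPreRender(e);
              if (_provider == null) return;

              foreach (PropertyDescriptor column in _provider.Schema)
              {
                  Controls.Add(new LiteralControl(column.DisplayName + " (" + column.PropertyType.Name + ")<br/>"));
              }
              _provider.GetRowData(row => Controls.Add(new LiteralControl("Row: " + row)));
          }
      }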


  • Converting tabular structures in a Word document into an actual table, or reading data from the tabular structures

    - by Chris
    I have a macro which can read the last cell/column of all tables in a Word 2003/2007 document and store the data in an MS Access table. However, some Word documents hold the data in structures that look like tables but are not actually tables: the "table borders" are really line connectors. These documents were created by software (VeryPDF PDF to Word Converter) which converted the original PDF documents into Word documents. Is there a way I can convert/replace the tabular structures with actual tables in Word so that I can use the macro? Or is there a way I can read the value of the last column from the tabular structures using some VBA code? Any advice would be appreciated.


  • How do I represent document.getElementById('page1:form2:amount') in jQuery?

    - by Prady
    Hi, how can I represent document.getElementById('page1:form2:amount') in jQuery? I know I can use $('#amount') to access the id amount. The id amount is an <apex:inputText/> element, which is how an input text is represented in a Visualforce page. To access this id I would need to use the page-form-id hierarchy. Thanks, Prady. Update: code used in the Visualforce page: <apex:page controller="acontroller" id="page1"> <apex:form id="form2"> <apex:inputText value="{!amt}" id="amount" onchange="calenddate();"/> </apex:form> </apex:page>


  • How to create managed properties at the site collection level in SharePoint 2013

    - by ybbest
    In SharePoint2013, you can create managed properties at site collection. Today, I’d like to show you how to do so through PowerShell. 1. Define your managed properties and crawled properties and managed property Type in an external csv file. PowerShell script will read this file and create the managed and the mapping. 2. As you can see I also defined variant Type, this is because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to a full crawl to create those crawled properties, you can create them yourself by using the PowerShell; however you need to make sure the crawled properties you created have the same name if created by a full crawl. Managed properties type: Text = 1 Integer = 2 Decimal = 3 DateTime = 4 YesNo = 5 Binary = 6 Variant Type: Text = 31 Integer = 20 Decimal = 5 DateTime = 64 YesNo = 11 3. You can use the following script to create your managed properties at site collection level, the differences for creating managed property at site collection level is to pass in the site collection id. param( [string] $siteUrl="http://SP2013/", [string] $searchAppName = "Search Service Application", $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv") ) Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue $searchapp = $null function AppendLog { param ([string] $msg, [string] $msgColor) $currentDateTime = Get-Date $msg = $msg + " --- " + $currentDateTime if (!($logOnly -eq $True)) { # write to console Write-Host -f $msgColor $msg } # write to log file Add-Content $logFilePath $msg } $scriptPath = Split-Path $myInvocation.MyCommand.Path $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt" function CreateRefiner {param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType,[System.GUID] $siteID) $cat = Get-SPEnterpriseSearchMetadataCategory –Identity SharePoint -SearchApplication $searchapp $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID if($crawledproperty -eq $null) { Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false } $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue if($managedproperty -eq $null) { Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } if($mappedProperty -eq $null) { Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication 
$searchapp -SiteCollection $siteID } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty } $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName $site= Get-SPSite $siteUrl $siteId=$site.id Write-Host "Start creating Managed properties" $i = 1 FOREACH ($property in $ManagedPropertiesList) { $propertyName=$property.managedPropertyName $crawledName=$property.crawledName $managedPropertyType=$property.managedPropertyType $variantType=$property.variantType Write-Host $managedPropertyType Write-Host "Processing managed property $propertyName $($i)..." $i++ CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId Write-Host "Managed property created " $propertyName } Key Concepts Crawled Properties: Crawled properties are discovered by the search index service component when crawling content. Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties. Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience—to make it available for Search queries and display it in Advanced Search and search results—you must map it to a managed property. References Administer search in SharePoint 2013 Preview Managing Metadata New-SPEnterpriseSearchMetadataCrawledProperty New-SPEnterpriseSearchMetadataManagedProperty Remove-SPEnterpriseSearchMetadataManagedProperty Overview of crawled and managed properties in SharePoint 2013 Preview Remove-SPEnterpriseSearchMetadataManagedProperty SharePoint 2013 – Search Service Application



  • Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host

    - by xnoor
    i have an update server that sends client updates through TCP port 12000, the sending of a single file is successful only the first time, but after that i get an error message on the server "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host", if i restart the update service on the server, it works again only one, i have normal multithreaded windows service SERVER CODE namespace WSTSAU { public partial class ApplicationUpdater : ServiceBase { private Logger logger = LogManager.GetCurrentClassLogger(); private int _listeningPort; private int _ApplicationReceivingPort; private string _setupFilename; private string _startupPath; public ApplicationUpdater() { InitializeComponent(); } protected override void OnStart(string[] args) { init(); logger.Info("after init"); Thread ListnerThread = new Thread(new ThreadStart(StartListener)); ListnerThread.IsBackground = true; ListnerThread.Start(); logger.Info("after thread start"); } private void init() { _listeningPort = Convert.ToInt16(ConfigurationSettings.AppSettings["ListeningPort"]); _setupFilename = ConfigurationSettings.AppSettings["SetupFilename"]; _startupPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6); } private void StartListener() { try { logger.Info("Listening Started"); ThreadPool.SetMinThreads(50, 50); TcpListener listener = new TcpListener(_listeningPort); listener.Start(); while (true) { TcpClient c = listener.AcceptTcpClient(); ThreadPool.QueueUserWorkItem(ProcessReceivedMessage, c); } } catch (Exception ex) { logger.Error(ex.Message); } } void ProcessReceivedMessage(object c) { try { TcpClient tcpClient = c as TcpClient; NetworkStream Networkstream = tcpClient.GetStream(); byte[] _data = new byte[1024]; int _bytesRead = 0; _bytesRead = Networkstream.Read(_data, 0, _data.Length); MessageContainer messageContainer = new MessageContainer(); messageContainer = SerializationManager.XmlFormatterByteArrayToObject(_data, messageContainer) as MessageContainer; switch (messageContainer.messageType) { case MessageType.ApplicationUpdateMessage: ApplicationUpdateMessage appUpdateMessage = new ApplicationUpdateMessage(); appUpdateMessage = SerializationManager.XmlFormatterByteArrayToObject(messageContainer.messageContnet, appUpdateMessage) as ApplicationUpdateMessage; Func<ApplicationUpdateMessage, bool> HandleUpdateRequestMethod = HandleUpdateRequest; IAsyncResult cookie = HandleUpdateRequestMethod.BeginInvoke(appUpdateMessage, null, null); bool WorkerThread = HandleUpdateRequestMethod.EndInvoke(cookie); break; } } catch (Exception ex) { logger.Error(ex.Message); } } private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage) { try { TcpClient tcpClient = new TcpClient(); NetworkStream networkStream; FileStream fileStream = null; tcpClient.Connect(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber); networkStream = tcpClient.GetStream(); fileStream = new FileStream(_startupPath + "\\" + _setupFilename, FileMode.Open, FileAccess.Read); FileInfo fi = new FileInfo(_startupPath + "\\" + _setupFilename); BinaryReader binFile = new BinaryReader(fileStream); FileUpdateMessage fileUpdateMessage = new FileUpdateMessage(); fileUpdateMessage.fileName = fi.Name; fileUpdateMessage.fileSize = fi.Length; MessageContainer messageContainer = new MessageContainer(); messageContainer.messageType = MessageType.FileProperties; messageContainer.messageContnet = 
SerializationManager.XmlFormatterObjectToByteArray(fileUpdateMessage); byte[] messageByte = SerializationManager.XmlFormatterObjectToByteArray(messageContainer); networkStream.Write(messageByte, 0, messageByte.Length); int bytesSize = 0; byte[] downBuffer = new byte[2048]; while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0) { networkStream.Write(downBuffer, 0, bytesSize); } fileStream.Close(); tcpClient.Close(); networkStream.Close(); return true; } catch (Exception ex) { logger.Info(ex.Message); return false; } finally { } } protected override void OnStop() { } } i have to note something that my windows service (server) is multithreaded.. i hope anyone can help with this
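    One thing worth trying, offered only as a speculative sketch rather than a confirmed fix: in HandleUpdateRequest above, the TcpClient is closed before the NetworkStream, which can tear the connection down abruptly on the remote side. Wrapping the send path in using blocks disposes everything in the reverse order automatically; the sketch below covers only the raw file-send part (it omits the serialized header message from the original), the names are illustrative, and it assumes System.IO and System.Net.Sockets are imported.

      // Speculative rewrite of just the send path, with deterministic disposal.
      private bool SendFile(string host, int port, string path)
      {
          try
          {
              using (var client = new TcpClient(host, port))
              using (NetworkStream stream = client.GetStream())
              using (FileStream file = File.OpenRead(path))
              {
                  var buffer = new byte[8192];
                  int read;
                  while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                  {
                      stream.Write(buffer, 0, read);
                  }
                  stream.Flush(); // push the last buffer before the stream is disposed
                  return true;
              }
          }
          catch (IOException)
          {
              return false;
          }
      }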


  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi


  • Is it possible to write C# code as below and send email using my home network?

    - by kedar karthik
    Is it possible to write C# code as below and send email using my home network? I have a valid user name and password on that exchange server. Is there any configuration that I can set to achieve this? BTW this code blow works when I run it within office network. I want this code to work when run from any network. String cMSExchangeWebServiceURL = (String)System.Configuration.ConfigurationSettings.AppSettings["MSExchangeWebServiceURL"]; String cEmail = (String)System.Configuration.ConfigurationSettings.AppSettings["Cemail"]; String cPassword = (String)System.Configuration.ConfigurationSettings.AppSettings["Cpassword"]; String cTo = (String)System.Configuration.ConfigurationSettings.AppSettings["CTo"]; ExchangeServiceBinding esb = new ExchangeServiceBinding(); esb.Timeout = 1800000; esb.AllowAutoRedirect = true; esb.UseDefaultCredentials = false; esb.Credentials = new NetworkCredential(cEmail, cPassword); esb.Url = cMSExchangeWebServiceURL; ServicePointManager.ServerCertificateValidationCallback += delegate(object sender1, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) { return true; }; // Create a CreateItem request object CreateItemType request = new CreateItemType(); // Setup the request: // Indicate that we only want to send the message. No copy will be saved. request.MessageDisposition = MessageDispositionType.SendOnly; request.MessageDispositionSpecified = true; // Create a message object and set its properties MessageType message = new MessageType(); message.Subject = subject; message.Body = new TestOutgoingEmailServer.com.cogniti.mail1.BodyType(); message.Body.BodyType1 = BodyTypeType.HTML; message.Body.Value = body; message.ToRecipients = new EmailAddressType[3]; message.ToRecipients[0] = new EmailAddressType(); //message.ToRecipients[1] = new EmailAddressType(); //message.ToRecipients[2] = new EmailAddressType(); message.ToRecipients[0].EmailAddress = "[email protected]"; message.ToRecipients[0].RoutingType = "SMTP"; //message.CcRecipients = new EmailAddressType[1]; //message.CcRecipients[0] = new EmailAddressType(); //message.CcRecipients[0].EmailAddress = toEmailAddress.ElementAt(1).ToString(); //message.CcRecipients[0].RoutingType = "SMTP"; //There are some more properties in MessageType object //you can set all according to your requirement // Construct the array of items to send request.Items = new NonEmptyArrayOfAllItemsType(); request.Items.Items = new ItemType[1]; request.Items.Items[0] = message; // Call the CreateItem EWS method. CreateItemResponseType response = esb.CreateItem(request);


  • How to add a blank page to a PDF using iTextSharp?

    - by Russell
    I am trying to do something I thought would be quite simple, however it is not so straightforward and Google has not helped. I am using iTextSharp to merge PDF documents (letters) together so they can all be printed at once. If a letter has an odd number of pages I need to append a blank page, so we can print the letters double-sided. Here is the basic code I have at the moment for merging all of the letters: // initialise MemoryStream pdfStreamOut = new MemoryStream(); Document document = null; MemoryStream pdfStreamIn = null; PdfReader reader = null; int numPages = 0; PdfWriter writer = null; for (int i = 0; i < letterList.Count; i++) { byte[] myLetterData = ...; pdfStreamIn = new MemoryStream(myLetterData); reader = new PdfReader(pdfStreamIn); numPages = reader.NumberOfPages; // open the streams to use for the iteration if (i == 0) { document = new Document(reader.GetPageSizeWithRotation(1)); writer = PdfWriter.GetInstance(document, pdfStreamOut); document.Open(); } PdfContentByte cb = writer.DirectContent; PdfImportedPage page; int importedPageNumber = 0; while (importedPageNumber < numPages) { importedPageNumber++; document.SetPageSize(reader.GetPageSizeWithRotation(importedPageNumber)); document.NewPage(); page = writer.GetImportedPage(reader, importedPageNumber); cb.AddTemplate(page, 1f, 0, 0, 1f, 0, 0); } } I have tried using: document.SetPageSize(reader.GetPageSizeWithRotation(1)); document.NewPage(); at the end of the for loop for an odd number of pages, without success. Any help would be much appreciated!
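    For the blank-page step specifically, a common explanation is that iText suppresses NewPage() when nothing has been written to the current page, so the extra page silently disappears. A hedged sketch of the usual workaround, placed at the end of the for loop, is below; PageEmpty is the iTextSharp counterpart of the Java setPageEmpty() call, so treat its exact name as an assumption and check it against your iTextSharp version:

      // At the end of each letter: if it had an odd number of pages, force a blank one.
      if (numPages % 2 == 1)
      {
          document.NewPage();
          writer.PageEmpty = false;   // tell iText the new page is "used" so it is not dropped
          // Fallback if PageEmpty is not available in your build:
          // document.Add(new Paragraph(" "));
      }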


  • iTextSharp: How to position and wrap long text?

    - by aximili
    The PDF I can produce at the moment: I want the text to fill up the space in the lower left. How can I do that? Thanks! This is my code: private static void CreatePdf4(string pdfFilename, string heading, string text, string[] photos, string emoticon) { Document document = new Document(PageSize.A4.Rotate(), 26, 36, 0, 0); PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(pdfFilename, FileMode.Create)); document.Open(); // Heading Paragraph pHeading = new Paragraph(new Chunk(heading, FontFactory.GetFont(FontFactory.HELVETICA, 54, Font.NORMAL))); document.Add(pHeading); // Photo 1 Image img1 = Image.GetInstance(HttpContext.Current.Server.MapPath("/uploads/photos/" + photos[0])); img1.ScaleAbsolute(350, 261); img1.SetAbsolutePosition(46, 220); img1.Alignment = Image.TEXTWRAP; document.Add(img1); // Photo 2 Image img2 = Image.GetInstance(HttpContext.Current.Server.MapPath("/uploads/photos/" + photos[1])); img2.ScaleAbsolute(350, 261); img2.SetAbsolutePosition(438, 220); img2.Alignment = Image.TEXTWRAP; document.Add(img2); // Text //Paragraph pText = new Paragraph(new Chunk(text, FontFactory.GetFont(FontFactory.HELVETICA, 18, Font.NORMAL))); //pText.SpacingBefore = 30; //pText.IndentationLeft = 20; //pText.IndentationRight = 366; //document.Add(pText); PdfContentByte cb = writer.DirectContent; cb.BeginText(); cb.SetFontAndSize(BaseFont.CreateFont(BaseFont.HELVETICA, BaseFont.CP1252, false), 18); cb.SetTextMatrix(46, 175); cb.ShowText(text); cb.EndText(); // Photo 3 Image img3 = Image.GetInstance(HttpContext.Current.Server.MapPath("/uploads/photos/" + photos[2])); img3.ScaleAbsolute(113, 153); img3.SetAbsolutePosition(556, 38); document.Add(img3); // Emoticon Image imgEmo = Image.GetInstance(HttpContext.Current.Server.MapPath("/Content/images/" + emoticon)); imgEmo.ScaleToFit(80, 80); imgEmo.SetAbsolutePosition(692, 70); document.Add(imgEmo); document.Close(); }
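    One hedged suggestion: cb.ShowText draws a single line of text and never wraps, so long text simply runs on past the intended area. ColumnText lays a Paragraph out inside a rectangle and wraps it. The sketch below reuses the writer and text variables from the method above; the rectangle coordinates are guesses for the lower-left area of an A4 landscape page and would need adjusting to the actual layout:

      // Replace the BeginText/ShowText block with a wrapping column.
      ColumnText column = new ColumnText(writer.DirectContent);
      column.SetSimpleColumn(46f, 38f, 420f, 200f);   // llx, lly, urx, ury
      column.AddElement(new Paragraph(text,
          FontFactory.GetFont(FontFactory.HELVETICA, 18, Font.NORMAL)));
      column.Go();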


  • How to clean up an Entity Framework object context?

    - by Daniel Brückner
    I am adding several entities to an object context. try { foreach (var document in documents) { this.Validate(document); // May throw a ValidationException. this.objectContext.AddToDocuments(document); } this.objectContext.SaveChanges(); } catch { // How to clean up the object context here? throw; } If some of the documents pass the validation and one fails, all documents that passed the validation remain added to the object context. I have to clean up the object context because it may be reused and the following can happen. var documentA = new Document { Id = 1, Data = "ValidData" }; var documentB = new Document { Id = 2, Data = "InvalidData" }; var documentC = new Document { Id = 3, Data = "ValidData" }; try { // Adding document B will cause a ValidationException but only // after document A is added to the object context. this.DocumentStore.AddDocuments(new[] { documentA, documentB, documentC }); } catch (ValidationException) { } // Try again without the invalid document B. this.DocumentStore.AddDocuments(new[] { documentA, documentC }); This will again add document A to the object context and in consequence SaveChanges() will throw an exception because of a duplicate primary key. So I have to remove all already added documents in the case of a validation error. I could of course perform the validation first and only add all documents after they have been successfully validated. But sadly this does not solve the whole problem - if SaveChanges() fails, all documents still remain added but unsaved. I tried to detach all objects returned by this.objectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added) but I am getting an exception stating that the object is not attached. So how do I get rid of all added but unsaved objects?
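    A hedged sketch of one clean-up approach: detach only the entity entries that are still in the Added state, skipping relationship entries (whose Entity is null), which is one common cause of the "object is not attached" exception mentioned above. This assumes System.Linq and System.Data are imported, and it is a sketch rather than a guaranteed fix:

      // Detach everything added but not yet saved, so the context can be reused.
      private void DetachAddedEntities()
      {
          var addedEntries = this.objectContext.ObjectStateManager
              .GetObjectStateEntries(EntityState.Added)
              .Where(entry => !entry.IsRelationship && entry.Entity != null)
              .ToList();   // materialise first; detaching mutates the state manager

          foreach (var entry in addedEntries)
          {
              this.objectContext.Detach(entry.Entity);
          }
      }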


  • Problem loading an XmlDocument with non-standard tags

    - by David Conde
    Hi, I have code that needs to load an XML document from a reader, something like this: private static XmlDocument GetDocumentStream(string xmlAddress) { var settings = new XmlReaderSettings(); settings.DtdProcessing = DtdProcessing.Ignore; settings.ValidationFlags = XmlSchemaValidationFlags.None; var reader = XmlReader.Create(xmlAddress, settings); var document = new XmlDocument(); document.Load(reader); return document; } But in my XML document I have nodes like this one: <link rel="edit-media" title="Package" href="Packages(Id='51Degrees.mobi',Version='0.1.11.9')/$value" /> It is my understanding that the node should be like <link rel="edit-media" title="Package"></link> But I don't create the XML document and I certainly don't want to change it, and when I try to load the XML document, the document.Load line throws an exception. To be more specific, the XML file is the RSS source for the nuPack project. Any ideas on how to read this document properly would be very appreciated.
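    As a hedged aside, a self-closing element such as <link ... /> is well-formed XML, so that notation by itself should not make Load throw; a tiny standalone probe like the one below parses it without error, which suggests looking instead at the DTD reference, the URL, or the surrounding markup:

      // Minimal probe: self-closing elements parse fine on their own.
      var probe = new System.Xml.XmlDocument();
      probe.LoadXml("<feed><link rel=\"edit-media\" title=\"Package\" href=\"Packages(Id='51Degrees.mobi',Version='0.1.11.9')/$value\" /></feed>");
      Console.WriteLine(probe.DocumentElement.FirstChild.Attributes["title"].Value);   // prints: Package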


  • Java parsing XML document gives "Content is not allowed in prolog" error

    - by thechiman
    I am writing a program in Java that takes a custom XML file and parses it. I'm using the XML file for storage. I am getting the following error in Eclipse. [Fatal Error] :1:1: Content is not allowed in prolog. org.xml.sax.SAXParseException: Content is not allowed in prolog. at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:239) at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:283 ) at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:208) at me.ericso.psusoc.RequirementSatisfier.parseXML(RequirementSatisfier.java:61) at me.ericso.psusoc.RequirementSatisfier.getCourses(RequirementSatisfier.java:35) at me.ericso.psusoc.programs.RequirementSatisfierProgram.main(RequirementSatisfierProgram.java:23 ) The beginning of the XML file is included: <?xml version="1.0" ?> <PSU> <Major id="IST"> <name>Information Science and Technology</name> <degree>B.S.</degree> <option> Information Systems: Design and Development Option</option> <requirements> <firstlevel type="General_Education" credits="45"> <component type="Writing_Speaking">GWS</component> <component type="Quantification">GQ</component> The program is able to read in the XML file but when I call DocumentBuilder.parse(XMLFile) to get a parsed org.w3c.dom.Document, I get the error above. It doesn't seem to me that I have invalid content in the prolog of my XML file. I can't figure out what is wrong. Please help. Thanks.

    Read the article

  • How to determine the (natural) language of a document?

    - by Robert Petermeier
    I have a set of documents in two languages: English and German. There is no usable meta information about these documents; a program can look at the content only. Based on that, the program has to decide which of the two languages the document is written in. Is there any "standard" algorithm for this problem that can be implemented in a few hours' time? Or alternatively, a free .NET library or toolkit that can do this? I know about LingPipe, but:

      • It is Java.
      • It is not free for "semi-commercial" usage.

    This problem seems to be surprisingly hard. I checked out the Google AJAX Language API (which I found by searching this site first), but it was ridiculously bad. For six web pages in German to which I pointed it, only one guess was correct. The other guesses were Swedish, English, Danish and French... A simple approach I came up with is to use a list of stop words. My app already uses such a list for German documents in order to analyze them with Lucene.Net. If my app scans the documents for occurrences of stop words from either language, the one with more occurrences would win. A very naive approach, to be sure, but it might be good enough. Unfortunately I don't have the time to become an expert at natural-language processing, although it is an intriguing topic.
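
    A sketch of that stop-word idea (hedged - the word lists here are tiny illustrative samples and the tokenization is deliberately crude; a real list would be much longer) could look something like this in C#:

    // Sketch: guess English vs. German by counting stop-word hits.
    using System;
    using System.Linq;

    static class LanguageGuesser
    {
        static readonly string[] EnglishStopWords = { "the", "and", "of", "to", "in", "is", "that", "with" };
        static readonly string[] GermanStopWords  = { "der", "die", "das", "und", "ist", "nicht", "ein", "mit" };

        public static string Guess(string text)
        {
            var tokens = text.ToLowerInvariant()
                .Split(new[] { ' ', '\t', '\r', '\n', '.', ',', ';', ':', '!', '?' },
                       StringSplitOptions.RemoveEmptyEntries);

            int englishHits = tokens.Count(t => EnglishStopWords.Contains(t));
            int germanHits  = tokens.Count(t => GermanStopWords.Contains(t));

            return englishHits >= germanHits ? "en" : "de";
        }
    }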

    Read the article

  • Retrieve XPath of an XML element by using its value

    - by user299938
    My XML file looks like this:

    <?xml version="1.0"?>
    <document-inquiry>
      <publication-reference data-format="docdb" xmlns="http://www.epo.org/exchange">
        <document-id>
          <country>EP</country>
          <doc-number>2160088</doc-number>
          <kind>A1</kind>
        </document-id>
      </publication-reference>
    </document-inquiry>

    For the above XML I need to get the XPath of a specific element, say for example the country element.

    My output: "/document-inquiry/publication-reference/document-id/country"
    My input: the element's value, "EP"

    This is the code I tried:

    doc.SelectSingleNode("/document-inquiry/publication-reference/document-id[text()='EP']");

    It returns null. I have to do this in C# code. Can anyone please help me with this?
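
    A sketch of one way to make that lookup work (hedged - the ex prefix, the file name and the path-building loop are assumptions, not from the original post; note that publication-reference and its descendants live in the http://www.epo.org/exchange namespace, and the text()='EP' test belongs on the country element rather than on document-id):

    // Sketch: find the element whose text is "EP" and reconstruct its path.
    // Requires using System, System.Text and System.Xml.
    var doc = new XmlDocument();
    doc.Load("input.xml"); // Hypothetical file name.

    var nsm = new XmlNamespaceManager(doc.NameTable);
    nsm.AddNamespace("ex", "http://www.epo.org/exchange");

    XmlNode node = doc.SelectSingleNode(
        "/document-inquiry/ex:publication-reference/ex:document-id/*[text()='EP']", nsm);

    // Walk up the tree to build a simple /a/b/c style path.
    var path = new StringBuilder();
    for (XmlNode current = node; current != null && current.NodeType == XmlNodeType.Element; current = current.ParentNode)
    {
        path.Insert(0, "/" + current.LocalName);
    }
    Console.WriteLine(path); // e.g. /document-inquiry/publication-reference/document-id/country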

    Read the article

  • JavaScript function won't stop looping - it's on a NetSuite website

    - by Lauren
    I need to change the shipping carrier drop-down and shipping method radio button once via a JavaScript function, not forever. However, when I use this function, which executes on the Review and Submit page when the order is < $5, it goes into an endless loop:

    function setFreeSampShipping(){
        var options = document.forms['checkout'].shippingcarrierselect.getElementsByTagName('option');
        for (i=0;i<options.length;i++){
            if (options[i].value == 'nonups'){
                document.forms['checkout'].shippingcarrierselect.value='nonups';
                document.forms['checkout'].shippingcarrierselect.onchange();
                document.location.href='/app/site/backend/setshipmeth.nl?c=659197&n=1&sc=4&sShipMeth=2035';
            }
        }
    }

    It gets called from within this part of the function setSampPolicyElems(), which you can see I've commented out to stop the loop:

    if (carTotl < 5 && hasSampp == 1) {
        /*
        if (document.forms['checkout'].shippingcarrierselect.value =='ups'){
            setFreeSampShipping();
        }
        */
        document.getElementById('sampAdd').style.display="none";
        document.getElementById("custbody_ava_webshiptype").value = "7"; // no charge free freight
        if (document.getElementById("applycoupon")) {
            document.location.href='/app/site/backend/setpromocode.nl?c=659197&n=1&sc=4&kReferralCode=SAMPLE'
        }
        document.write("<div id='msg'><strong>Sample & Shipping:</strong></span> No Charge. (Your sample order is < $5.)</div>");
    }

    To see the issue, go to the order review and submit page here: https://checkout.netsuite.com/s.nl?c=659197&sc=4&n=1 (or go to http://www.avaline.com, hit "checkout", and log in). You can log in with these credentials:

    Email: [email protected]
    Pass: test03

    Read the article

  • EntityFramework repository template - how to write a GetByID lambda within a template class?

    - by FerretallicA
    I am trying to write a generic one-size-fits-most repository pattern template class for an Entity Framework-based project I'm currently working on. The (heavily simplified) interface is:

    internal interface IRepository<T> where T : class
    {
        T GetByID(int id);
        IEnumerable<T> GetAll();
        IEnumerable<T> Query(Func<T, bool> filter);
    }

    GetByID is proving to be the killer. In the implementation:

    public class Repository<T> : IRepository<T>, IUnitOfWork<T> where T : class
    {
        // etc...
        public T GetByID(int id)
        {
            return this.ObjectSet.Single<T>(t => t.ID == id);
        }
    }

    t => t.ID == id is the particular bit I'm struggling with. Is it even possible to write lambda functions like that within template classes where no class-specific information is going to be available?
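
    One common way around this (a hedged sketch, not from the original question - the IEntity interface name and its int ID property are assumptions) is to tighten the generic constraint so the compiler knows every T exposes an ID:

    // Sketch: constrain T to an interface that exposes the key, so the lambda compiles.
    public interface IEntity
    {
        int ID { get; }
    }

    internal interface IRepository<T> where T : class, IEntity
    {
        T GetByID(int id);
    }

    public class Repository<T> : IRepository<T> where T : class, IEntity
    {
        // ObjectSet would be created from the ObjectContext elsewhere in the class.
        private System.Data.Objects.ObjectSet<T> ObjectSet { get; set; }

        public T GetByID(int id)
        {
            // SingleOrDefault avoids an exception when the id is missing.
            return this.ObjectSet.SingleOrDefault(t => t.ID == id);
        }
    }

    This compiles; whether LINQ to Entities can translate the interface member at runtime depends on the EF version, so a shared base entity class is another option for the constraint.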

    Read the article

  • Write a compiler for a language that needs look-ahead and spans multiple files?

    - by acidzombie24
    In my language I can use a class variable in my method when the definition appears below the method. It can also call methods defined below my method, and so on. There are no 'headers'. Take this C# example:

    class A
    {
        public void callMethods()
        {
            print();
            B b;
            b.notYetSeen();
        }

        public void print()
        {
            Console.Write("v = {0}", v);
        }

        int v = 9;
    }

    class B
    {
        public void notYetSeen()
        {
            Console.Write("notYetSeen()\n");
        }
    }

    How should I compile that? What I was thinking is:

    pass 1: convert everything to an AST
    pass 2: go through all classes and build a list of defined classes/variables/etc.
    pass 3: go through the code, check for errors such as undefined variables or wrong usage, and create my output

    But it seems like for this to work I have to do passes 1 and 2 for ALL files before doing pass 3. Also it feels like a lot of work to do before I find an error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way. Note: I am using bison/flex to generate my compiler.
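
    As an illustration of the declare-then-check idea those passes describe (a hedged sketch in C# - the toy types and names are invented, and a real symbol table would also track scopes, members and types):

    // Sketch: collect every declared name from all "files" first, then resolve references.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class ToyClass
    {
        public string Name;
        public List<string> ReferencedNames = new List<string>();
    }

    static class TwoPassDemo
    {
        static void Main()
        {
            var allClasses = new List<ToyClass>
            {
                new ToyClass { Name = "A", ReferencedNames = { "B" } }, // A uses B before B is "seen"
                new ToyClass { Name = "B" }
            };

            // Pass 2: declarations only, across all classes/files.
            var declared = new HashSet<string>(allClasses.Select(c => c.Name));

            // Pass 3: forward references are now unproblematic.
            foreach (var cls in allClasses)
                foreach (var reference in cls.ReferencedNames)
                    Console.WriteLine("{0} -> {1}: {2}", cls.Name, reference,
                        declared.Contains(reference) ? "resolved" : "undefined");
        }
    }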

    Read the article

  • SharePoint 2007 / 2010 Content Indexing “The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled.”

    - by Stacy Vicknair
    If you have large files in a content source that is being indexed by SharePoint, you might run into the following error message: "The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled." This is usually caused by SharePoint's MaxDownloadSize setting being set lower than the size of the file you are attempting to index. You can increase this value, restart the service, then kick off a full crawl to fix the issue, but SharePoint 2007 and 2010 have different methods for accomplishing this task.

    SharePoint 2007

    Open up the Registry editor and increase the MaxDownloadSize value to a number (in MB) higher than the largest file being indexed. You can find this at:

    HKEY_LOCAL_MACHINE\Software\Microsoft\Search\1.0\Gathering Manager

    After you increase the size, cycle the search service and kick off a full crawl of the content source in question.

    SharePoint 2010

    With SharePoint 2010 you can use PowerShell via the SharePoint 2010 console to change the MaxDownloadSize. Execute the following commands to update the value:

    $ssa = Get-SPEnterpriseSearchServiceApplication
    $ssa.SetProperty("MaxDownloadSize", <new size in MB>)
    $ssa.Update()

    References:
    http://support.microsoft.com/kb/287231
    http://blogs.technet.com/b/brent/archive/2010/07/19/sharepoint-server-2010-maxdownloadsize-and-maxgrowfactor.aspx

    Read the article

  • How to open a document using an application launched via NSTask?

    - by zneak
    Hello world, I've grown tired of the built-in open Mac OS X command, mostly because it runs programs with your actual user ID instead of the effective user ID; this means that sudo open Foo opens Foo with its associated application under your account instead of the root account, which annoys me. So I decided to make some kind of replacement. So far I've been successful: I can open any program in the open -a or open -b fashion, and optionally wait for it. I'm using NSTask for that purpose. However, I'd like to be able to open documents too. As far as I can see, you need to use NSWorkspace for that, but using NSWorkspace to launch programs results in them being launched with your account's credentials instead of your command-line program's credentials, which is precisely what the default open tool does, and precisely what I don't want. So, how can I have a program request that another program open a document without using NSWorkspace? From the NSTask object, I can get the process ID, but that's about it.

    Read the article
