Search Results

Search found 37135 results on 1486 pages for 'html tables'.


  • Tools and ways to generate HTML help for built-in help system (QtHelp)?

    - by BastiBense
    Hello, I'm in the process of implementing a built-in help system based on QtHelp into my application. Since QtHelp is based on Qt's help collection files, I need to produce a set of HTML pages. Since I won't be writing the documentation alone (a few of my colleagues will write, too), I am looking for the best way to produce these files. We are internally using a Wiki, and I know that the documentation should be written in some kind of markup language instead of giving all authors a WYSIWYG HTML editor. So my question is: are there tools out there which help with the process of generating documentation that can be exported as a set of HTML files, and possibly as PDFs, too? Thanks in advance! Update: I'm already using Doxygen for C++ documentation generation. But I'm not exactly looking for an API documentation generator, but rather something like LaTeX, which allows you to format the documentation contents like a markup document (much like a Wiki).

    Read the article

  • Can I add some HTML in my SO profile 'About me' text box to show a button from LinkedIn?

    - by goldenmean
    Hello, in my SO profile I tried Edit Profile, and in the "About Me" text box I tried to paste the HTML code below, provided by linkedin.com, to display a button which would link to my profile on LinkedIn. But it does not seem to work on SO. This is the HTML code: <a href="http://in.linkedin.com/in/ajitsdeshpande" > <img src="http://www.linkedin.com/img/webpromo/btn_myprofile_160x33.png" width="160" height="33" border="0" alt="View Ajit Deshpande's profile on LinkedIn"> </a> 1] When I checked the HTML tags allowed by SO, I found that only whitelisted tags are allowed. So does the kind of thing I am trying to do work on SO? 2] If yes, how can I get it working? Thanks, -AD.

    Read the article

  • Should one use PHP to print all of a page's HTML?

    - by Mala
    Hi, so I've always developed PHP pages like this: <?php goes at the top, ?> goes at the bottom, and all the HTML gets either print()ed or echo()ed out. Is that slower than having non-dynamic HTML output outside of <?php ?> tags? I can't seem to find any info about this. Thanks! --Mala UPDATE: the consensus seems to be that doing it my old way is hard to read. This is not the case if you break your strings up line by line, as in: print("\n". "first line goes here\n". "second line goes here\n". "third line"); etc. It actually makes it a lot easier to read than having HTML outside of PHP structures, as this way everything is properly indented. That being said, it involves a lot of string concatenation.
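
    For comparison, here is a minimal sketch of the alternative style, with the static markup left outside the PHP tags and only the dynamic values echoed (the variable names are illustrative, not from the question):

        <?php $title = 'My page'; $dynamicContent = 'third line'; ?>
        <html>
          <head><title><?php echo htmlspecialchars($title); ?></title></head>
          <body>
            <p>first line goes here</p>
            <p>second line goes here</p>
            <p><?php echo htmlspecialchars($dynamicContent); ?></p>
          </body>
        </html>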

    Read the article

  • Scraped HTML is not written at the beginning of the text file.

    - by karikari
    Currently I'm scraping the HTML code of a page and writing it to a txt file. My problem is: why are there empty spaces or empty lines at the beginning? The HTML code written to the txt file does not seem to start at the beginning of the file; that is, the '<' is not located at position 0 of the txt file. After a few runs, my HTML is always written a few lines down inside the txt file. Can anyone tell me why? :)
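
    The question doesn't say which language does the scraping, but a minimal sketch of the usual fix, assuming Python 3 (the URL and file name are illustrative), is to strip leading whitespace from the markup before writing it out:

        import urllib.request

        html = urllib.request.urlopen("http://example.com/").read().decode("utf-8")

        # Drop any leading blank lines or spaces so that '<' lands at position 0.
        with open("page.txt", "w", encoding="utf-8") as f:
            f.write(html.lstrip())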

    Read the article

  • Why should I use a container div in HTML?

    - by lara.robertson
    I am currently learning HTML/CSS, and have noticed a common technique is to place a generic container div in the root of the body tag: <html> <head> ... </head> <body> <div id="container"> ... </div> </body> </html> Is there a valid reason for doing this? Why can't the CSS just reference the body tag?

    Read the article

  • Modular HTML in Adobe AIR? Is It Possible?

    - by Greg Bulmash
    When I write HTML with a PHP backend, I usually have a single header file and a single footer file with some PHP variables in them. The base skeleton for every page calls the header, contains the body content, and then calls the footer. That way, if I want to make changes to the header sitewide, I change one file. I'm working on developing the UI in my first Adobe AIR app and I'm wondering if there's some way to include such files in an HTML-based page template there. Obviously, with the file read/write abilities in AIR, I can write a JavaScript routine to pull data from a header file, parse it, and inject it into a placeholder. It just seems like such a kludge. I'm thinking there's got to be some simple way to import a block of HTML into a page without an iframe or a complex post-processor. Something like a PHP include statement, or perhaps the old Server Side Includes. Any methods you can recommend?
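
    For what it's worth, a rough sketch of the "fetch and inject" approach using plain XMLHttpRequest and DOM calls, which AIR's HTML engine supports (the file name and element id are illustrative):

        function includeFragment(url, placeholderId) {
          var xhr = new XMLHttpRequest();
          xhr.open("GET", url, false);   // synchronous, so the header exists before the rest of the page runs
          xhr.send(null);
          document.getElementById(placeholderId).innerHTML = xhr.responseText;
        }

        // assuming the page skeleton contains <div id="header"></div>
        includeFragment("header.html", "header");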

    Read the article

  • How to break up HTML documents into pages for ebook?

    - by radnoise
    For an iPhone ebook application I need to break arbitrarily long HTML documents up into pages which fit exactly on one screen. If I simply use UIWebView for this, the bottom-most lines tend to get displayed only partly: the rest disappears off the edge of the view. So I assume I would need to know how many complete lines (or characters) would be displayed by the UIWebView, given the source HTML, and then feed it exactly the right amount of data. This probably involves lots of calculation, and the user also needs to be able to change fonts and sizes. I have no idea if this is even possible, although apps like Stanza take HTML (epub) files and paginate them nicely. It's a long time since I looked at JavaScript, would that be an option worth looking at? Any suggestions very much appreciated!
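
    A rough sketch of the measuring approach in plain JavaScript (illustrative only; it paginates at element boundaries and does not split an element that is taller than one page): clone block elements into a hidden container of the page's width and cut a page just before the element that overflows the target height.

        function paginate(sourceNodes, pageHeight) {
          var pages = [];
          var page = document.createElement('div');
          page.style.cssText = 'position:absolute; visibility:hidden; width:100%;';
          document.body.appendChild(page);

          for (var i = 0; i < sourceNodes.length; i++) {
            page.appendChild(sourceNodes[i].cloneNode(true));
            if (page.offsetHeight > pageHeight && page.childNodes.length > 1) {
              page.removeChild(page.lastChild);      // this node starts the next page
              pages.push(page.innerHTML);
              page.innerHTML = '';
              page.appendChild(sourceNodes[i].cloneNode(true));
            }
          }
          pages.push(page.innerHTML);
          document.body.removeChild(page);
          return pages;
        }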

    Read the article

  • In Python, how do I remove the "root" tag in an HTML snippet?

    - by Chung Wu
    Suppose I have an HTML snippet like this: <div> Hello <strong>There</strong> <div>I think <em>I am</em> feeing better!</div> <div>Don't you?</div> Yup! </div> What's the best/most robust way to remove the surrounding root element, so it looks like this: Hello <strong>There</strong> <div>I think <em>I am</em> feeing better!</div> <div>Don't you?</div> Yup! I've tried using lxml.html like this: lxml.html.fromstring(fragment_string).drop_tag() But that only gives me "Hello", which I guess makes sense. Any better ideas?
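
    A minimal sketch of one way to do this with lxml (serialize the root's leading text and then each child; tostring() includes each child's tail text by default, so the trailing "Yup!" is preserved):

        import lxml.html
        from lxml import etree

        def drop_root(fragment_string):
            root = lxml.html.fromstring(fragment_string)
            parts = [root.text or '']
            for child in root:
                parts.append(etree.tostring(child, encoding='unicode'))
            return ''.join(parts)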

    Read the article

  • How do I get access to the <html> element in JavaScript?

    - by kwyjibo
    IE displays a default scrollbar on the page, which appears even if the content is too short to require a scrollbar. The typical way to remove this scrollbar (if not needed) is to add this to your CSS: HTML { height: 100%; overflow: auto; } I'm trying to do the same thing in JavaScript (without requiring that in my CSS), but I can't seem to find a way to get access to the html element. I know I can access the body element with document.body, but that doesn't seem to be sufficient; I need the wrapping html element. Any tips?
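
    A minimal sketch using the standard DOM: the wrapping <html> element is exposed as document.documentElement, so the CSS above can be applied from script like this:

        var htmlElement = document.documentElement;   // the <html> element
        htmlElement.style.height = '100%';
        htmlElement.style.overflow = 'auto';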

    Read the article

  • Should I use HtmlDocument even if I want to parse an HTML string using HtmlAgilityPack?

    - by skhan
    Hi everyone, I'm working in C#. I'm trying to extract the first instance of an img tag from an HTML string (which is actually post data). This is my code:

        private string GrabImage(string htmlContent)
        {
            String firstImage;
            HtmlAgilityPack.HtmlDocument htmlDoc = new HtmlAgilityPack.HtmlDocument();
            htmlDoc.LoadHtml(htmlContent);
            HtmlAgilityPack.HtmlNode imageNode = htmlDoc.DocumentNode.SelectSingleNode("//img");
            if (imageNode != null)
            {
                return firstImage = imageNode.ToString();
            }
            else
                return firstImage = " ";
        }

    But htmlDoc ends up null. Should I use the HtmlDocument type even if I'm trying to parse the HTML from a string? P.S. By the way, is this the correct way of grabbing the first instance of an image tag from my HTML string?

    Read the article

  • How does one validate HTML that's generated from JS running in the browser?

    - by Henry Rose
    The page in question has very skeletal HTML sent over the wire to facilitate the building of a complicated UI in JavaScript. I'm now encountering a strange browser compatibility issue that feels very much like I've got a markup problem somewhere on the page. I've validated the page as it comes across the wire using the W3C tool and ensured there are no issues in that HTML. I've also tried validating the output of running the following in the browser console: document.getElementsByTagName('html')[0].outerHTML I find that the output of the above introduces lots of new issues, such as removing the trailing '/' in self-closing tags. This added noise is distracting, but it also makes me uneasy about this method of validation. How do you validate markup that's rendered client side?
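
    One rough sketch for getting a validator-friendly snapshot (keep in mind the serialized output is the browser's parsed DOM, not the original markup, so some normalization is unavoidable): include the doctype, which outerHTML alone drops, then paste the result into the validator's direct-input mode. This assumes a browser that lets XMLSerializer serialize the doctype node.

        var doctype = document.doctype
          ? new XMLSerializer().serializeToString(document.doctype)
          : '';
        var snapshot = doctype + '\n' + document.documentElement.outerHTML;
        console.log(snapshot);   // copy this into the W3C validator's "direct input" field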

    Read the article

  • How can I send an HTML email with Perl?

    - by alexBrand
    I am trying to send an HTML email using perl.

        open(MAIL,"|/usr/sbin/sendmail -t");
        ## Mail Header
        print MAIL "To: $to\n";
        print MAIL "From: $from\n";
        print MAIL "Subject: $subject\n\n";
        ## Mail Body
        print MAIL "Content-Type: text/html; charset=ISO-8859-1\n\n<html><head></head><body>@emailBody";
        close(MAIL)

    Is that the correct way of doing it? It is not working for some reason. Thanks for your help.
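
    For what it's worth, the likely culprit is the extra blank line after the Subject header: the "\n\n" there ends the header block, so the Content-Type line lands in the body and the mail is treated as plain text. A sketch of one fix, with the headers kept together (variable names as in the question):

        open(MAIL, "|/usr/sbin/sendmail -t") or die "Cannot open sendmail: $!";
        print MAIL "To: $to\n";
        print MAIL "From: $from\n";
        print MAIL "Subject: $subject\n";
        print MAIL "MIME-Version: 1.0\n";
        print MAIL "Content-Type: text/html; charset=ISO-8859-1\n\n";   # the blank line ends the headers
        print MAIL "<html><head></head><body>@emailBody</body></html>\n";
        close(MAIL);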

    Read the article

  • What is the order of loading CSS files in an HTML page?

    - by Purushotham
    I want to know the order in which CSS files are loaded in an HTML page. My actual requirement is this: I have more than 10 CSS files in my application, and I import some 3 to 4 CSS files in each HTML page. The problem is that I have duplicate classes defined in some CSS files; that is, I override some of the CSS classes across the CSS files. On some pages it behaves correctly; on other pages it behaves wrongly. I also have inline styles defined for some of the DIVs in the HTML page, and I keep CSS classes on those DIVs as well. Does anyone know which one takes higher priority, or which one loads first?
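
    A minimal illustration of the usual cascade behaviour (a sketch, not taken from the question): when two stylesheets define the same selector with equal specificity, the one included later wins, and an inline style attribute beats both.

        <link rel="stylesheet" href="a.css">  <!-- a.css: .title { color: red; } -->
        <link rel="stylesheet" href="b.css">  <!-- b.css: .title { color: blue; } -->

        <div class="title">blue wins here, because b.css is included after a.css</div>
        <div class="title" style="color: green;">green wins here, because inline styles beat both files</div>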

    Read the article

  • Why exactly is server side HTML rendering faster than client side?

    - by mvbl fst
    I am working on a large web site, and we're moving a lot of functionality to the client side (Require.js, Backbone and Handlebars stack). There are even discussions about possibly moving all rendering to the client side. But reading some articles, especially ones about Twitter moving away from client-side rendering, which mention that server side is faster / more reliable, I begin to have questions. I don't understand how rendering fairly simple HTML widgets in JS from JSON and templates, in a contemporary browser on a dual-core CPU with 4-8 GB of RAM, is any slower than making dozens of includes in your server-side app. Are there any actual real-life benchmarking figures regarding this? Also, it seems like parsing HTML templates with server-side templating engines can't be any faster than rendering the same HTML from a Handlebars template, especially if the template has been precompiled to a JS function?

    Read the article

  • If you had to work with horrible HTML, what would you do?

    - by Doug
    I was looking over some of my friend's HTML and CSS, and I was speechless (in a bad way). If I had to work with that, such as putting AJAX into it, it would have been a lot of work. I would have to rebuild a lot of the HTML, otherwise it wouldn't work well with the AJAX. What would you do in a situation like this? Would you just edit the minimum? Would you do an overhaul and redo the whole HTML aspect? Would you go back to the client and ask for more money because it was a lot more work than expected? I'm interested in strategies and approaches and how it's done out in the field.

    Read the article

  • WebKit & Objective-C: how to parse an HTML string into a DOMDocument?

    - by Rinzwind
    How do you get a DOMDocument from a given HTML string using WebKit? In other words, what's the implementation for DOMDocumentFromHTML: for something like the following: NSString * htmlString = @"<html><body><p>Test</body></html>"; DOMDocument * document = [self DOMDocumentFromHTML: htmlString]; DOMNode * bodyNode = [[document getElementsByTagName: @"body"] item: 0]; // ... etc. This seems like it should be straightforward to do, yet I'm still having trouble figuring out how :( ...

    Read the article

  • How to add attributes to an HTML element in a valid way?

    - by Click Upvote
    I want to be able to add an attribute to an HTML element to be able to identify what it's referring to. E.g. if I have a list of names and a checkbox next to each name, like this: <div id="users"> Bob smith <input type=checkbox /> </div> And when a checkbox is clicked and the event handler function for it is called, I want to be able to identify which user was selected/unselected. Ideally I'm looking for something like this: <input type=checkbox data-userId = "xxx" /> Then when it's clicked: function handleClick() { var userId = $(this).attr('data-userId'); } However I'm looking to do this in a way that won't break my HTML validation, would still be valid HTML, and would work in all browsers. Any suggestions?

    Read the article

  • How can I disallow all HTML using markdown (PHP and JS)?

    - by peterjwest
    I'm using the PHP Markdown library: http://michelf.com/projects/php-markdown/ and the JavaScript Markdown library: http://attacklab.net/showdown/ I want to disallow all HTML; both versions of Markdown seem to allow it indiscriminately. My first attempt was simply to escape all HTML entities before feeding the text into Markdown. However, this also escapes the <hyperlink> and <email> syntax, which is very useful. I'd like to escape all HTML (not remove it) but preserve all Markdown syntax.
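
    A rough sketch of the PHP side, assuming the classic Markdown() function from markdown.php (the regex is only a crude approximation of the autolink syntax, and the JavaScript side would need the same treatment): escape everything, then put the angle brackets back around things that look like autolinks.

        $escaped = htmlspecialchars($text, ENT_QUOTES, 'UTF-8');
        // restore <http://...> and <user@host> autolinks that the escaping broke
        $escaped = preg_replace('/&lt;((?:https?:\/\/|[^&\s]+@)[^&\s]+)&gt;/', '<$1>', $escaped);
        $html = Markdown($escaped);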

    Read the article

  • How To Delete Top 100 Rows From SQL Server Tables

    - by Gopinath
    If you want to delete the top 100 (or top n) records from a SQL Server table, it is very easy with the following query:

        DELETE FROM MyTable
        WHERE PK_Column IN (
            SELECT TOP 100 PK_Column
            FROM MyTable
            ORDER BY creation
        )

    Why Would You Require To Delete the Top 100 Records? I often delete the top n records of a table when the number of rows in it is too huge. Let's say I have 1000000000 records in a table; deleting 10000 rows at a time in a loop is faster than trying to delete all 1000000000 at once. Whatever the reason may be, if you ever come across a requirement to delete a bunch of rows at a time, this query will be helpful to you.
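
    A sketch of the loop version described above (table and column names are illustrative; DELETE TOP (n) requires SQL Server 2005 or later):

        WHILE 1 = 1
        BEGIN
            DELETE TOP (10000) FROM MyTable
            WHERE creation < '2010-01-01';      -- whatever condition identifies the rows to purge

            IF @@ROWCOUNT = 0 BREAK;            -- stop once nothing is left to delete
        END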

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP helps us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory forever without losing even a single record. The most significant part is that it still supports the majority of Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking. In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control. It also substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember
    Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    Disk-based tables refer to the normal tables we have always created in SQL Server since its inception. These tables use fixed-size 8 KB pages that need to be read from and written to disk as a unit.
    Natively compiled stored procedures refer to a new object type, supported by the In-Memory OLTP engine, that is compiled to machine code, which can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they cannot reference any disk-based table.
    Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP
    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; hence they are not available with the 32-bit editions.

    Creating Databases
    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA.
    Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (NAME = [InMemoryDB_data], FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
            (NAME = [InMemoryDB_mod_dir], FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2], FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (NAME = [SampleDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and the in-memory storage, so if you would like to run this code kindly change the drive and folder locations as per your convenience. Also notice that a binary Windows (non-SQL) collation was specified; BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012
            ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
            ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod')
            TO FILEGROUP hekaton_mod;
        GO

    Creating Tables
    There is no major syntactic difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table should use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)
    A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means the data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing a Memory-Optimized Table
    A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have an index, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the query example above, we have used the clause MEMORY_OPTIMIZED = ON to make sure it is treated as a memory-optimized table and not just a normal table, and we have also used DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata. You can also notice that this table has a PRIMARY KEY declared up front, which is a mandatory clause for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus more on row and index storage in memory-optimized tables, so stay tuned for that as well. Now that we have covered the basics of memory-optimized tables and understood the key things to remember while using them, let's explore some examples to understand the performance gains of memory-optimized tables.
    I will be using the database I created earlier in this article, i.e. InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO

        -- Creating a disk-based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO

        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO

        -- Creating a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO

        -- Creating another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO

        -- Now insert 100000 records into dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable (Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

        -- Do the same inserts for the memory table dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

        -- Now finally do the same inserts for the memory table dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END
        GO

    The above three inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100000 records on my machine with 8 GB of RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is even faster, as it does not bother persisting its data to disk. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you; I hope you like this article and that it helped you understand the fundamentals of In-Memory OLTP. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Export MySQL database tables to PHP code to create the same tables in another database?

    - by chefnelone
    How do I export MySQL database tables to PHP code so that I can create and populate the same tables in another database? I have a local database which I exported to SQL syntax, and I get something like:

        CREATE TABLE `boletinSuscritos` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` varchar(120) NOT NULL,
          `email` varchar(120) NOT NULL,
          `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3 ;

        INSERT INTO `boletinSuscritos` VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12');
        INSERT INTO `boletinSuscritos` VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56');

    but I need it to be like this (is there any way to export the tables in this way?):

        $sql = "CREATE TABLE boletinSuscritos (
          id int(11) NOT NULL AUTO_INCREMENT,
          name varchar(120) NOT NULL,
          email varchar(120) NOT NULL,
          date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
          PRIMARY KEY ( id )
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3 )";
        mysql_query($sql, $conexion);
        mysql_query("INSERT INTO boletinSuscritos VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12')");
        mysql_query("INSERT INTO boletinSuscritos VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56')");
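
    If the goal is just to replay the dump on another database from PHP, one rough sketch (using the old mysql_* API that the question itself uses; it naively splits on ';', which breaks if a statement contains a semicolon inside a string literal):

        $dump = file_get_contents('boletinSuscritos.sql');   // the exported SQL file
        foreach (explode(';', $dump) as $statement) {
            $statement = trim($statement);
            if ($statement !== '') {
                mysql_query($statement, $conexion) or die(mysql_error($conexion));
            }
        }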

    Read the article

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement: a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on low-bandwidth networks, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, whereas if a browser throws an exception while executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single-page web apps like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML and into something like a JSON API. I cannot seem to gel these two design patterns: progressive enhancement vs. single-page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and am I missing something here with my mental model?

    Read the article

  • Part 4, Getting the conversion tables ready for CS to DNN

    - by Chris Hammond
    This is the fourth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, and I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing, which made it a bad deal for the small guy. This series of blog posts is about how to convert your CommunityServer-based sites to DotNetNuke. Previous posts: Part 1: An Introduction Part 2: DotNetNuke Installation...(read more)

    Read the article
