Search Results

Search found 24498 results on 980 pages for 'lock pages in memory'.


  • Which CSS combining technique?

    - by DotnetShadow
    Hi there, which of the following would you say is the best way to combine files for CSS? Say I have a master.css file that is used across all pages on my website (page1.aspx, page2.aspx). Page1.aspx is a specific page with some unique CSS that is only ever used on that page, so I create page1.css; the page also uses another stylesheet, grids.css. Page2.aspx is another specific page, different from all other pages on the site including page1.aspx, so I make page2.css for it; this page doesn't use grids.css. So would you combine the scripts as:

    Option 1: combine per page - csshandler.axd?d=master.css,page1.css,grids.css when visiting page1, and csshandler.axd?d=master.css,page2.css when visiting page2. Benefits: page-specific, so rendering is quicker since only selectors for that page need to be matched up, and there are no unused selectors. Drawback: multiple combinations of master.css + page-specific CSS, so master.css has to be downloaded again for each page.

    Option 2: combine all scripts whether a page needs them or not - csshandler.axd?d=master.css,page1.css,page2.css,grids.css (master, page1 and page2) - that way it gets cached as one file. The problem is that rendering may be slower, since the browser will have to try to match EVERY selector in the CSS against the page, even the unused ones; in the case of page2.aspx, which doesn't use grids.css, the selectors in grids.css still need to be parsed to see if they apply. Benefits: only one file is ever downloaded and cached, no matter what page you visit. Drawback: unused selectors need to be parsed by the browser, so rendering is slower.

    Option 3: leave the master file on its own and only combine the other scripts (the benefit being that, since master.css is used across all pages, there is a good chance it is already cached and doesn't need to be downloaded again): csshandler.axd?d=master.css plus csshandler.axd?d=page1.css,grids.css. Benefits: the master.css file can be cached no matter what page you visit, and there are not many unused selectors since the page-specific CSS is applied. Drawback: initially a minimum of 2 HTTP requests has to be made.

    What do you guys think? Cheers, DotnetShadow

  • Where are tables in Mnesia located?

    - by Sanoj
    I am trying to compare Mnesia with more traditional databases. As I understand it, tables in Mnesia can be stored as:

    ram_copies - tables are stored in RAM only, so there is no durability as in ACID.
    disc_copies - tables are stored on disc and a copy is kept in RAM, so a table cannot be bigger than the available memory?
    disc_only_copies - tables are stored on disc only, so no caching in memory and worse performance? And the size of the table is limited to the size of a dets file, unless the table is fragmented.

    So if I want the performance of reads from RAM and the durability of writes to disc, then the size of my tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL. I know that Mnesia isn't meant to replace traditional RDBMSs, but can it be used as a big RDBMS, or do I have to look for another database? The server I will use is a VPS with a limited amount of memory, around 512 MB, but I want good database performance. Are disc_copies and the other table types in Mnesia as limited as I have understood?
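
    One point the list above implies but doesn't show: the storage type is chosen per table (and per node) when the table is created. A minimal Erlang sketch, with illustrative record and table names:

        -record(user, {id, name}).

        %% disc_copies: durable, but the whole table is also held in RAM.
        mnesia:create_table(user,
            [{disc_copies, [node()]},
             {attributes, record_info(fields, user)}]).

        %% disc_only_copies: no RAM copy; bounded by dets (2 GB per file)
        %% unless the table is fragmented.
        mnesia:create_table(log,
            [{disc_only_copies, [node()]},
             {attributes, [id, payload]}]).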

  • What file format can represent an uncompressed raster image at 48 or 64 bits per pixel?

    - by finnw
    I am creating screenshots under Windows and using the LockBits function from GDI+ to extract the pixel data, which will then be written to a file. To maximise performance I am also:

    Using the same PixelFormat as the source bitmap, to avoid format conversion.
    Using the ImageLockModeUserInputBuf flag to extract the pixel data into a pre-allocated buffer.

    This pre-allocated buffer (pointed to by BitmapData::Scan0) is part of a memory-mapped file, to avoid copying the pixel data again. I will also be writing the code that reads the file, so I can use (or invent) any format I wish. However, I would prefer a well-known format that existing programs (ideally web browsers) are able to read, because that means I can visually confirm the images are correct before writing the code for the other program that reads them.

    I have implemented this successfully for the PixelFormat32bppRGB format, which matches the format of a 32bpp BMP file, so if I extract the pixel data directly into the memory-mapped BMP file and prefix it with a BMP header, I get a valid BMP image file that can be opened in Paint and most browsers. Unfortunately, one of the machines I am testing on returns pixels in PixelFormat64bppPARGB format (presumably influenced by the video adapter driver), and there is no corresponding BMP pixel format. Converting to a 16, 24 or 32bpp BMP format slows the program down considerably (as well as being lossy), so I am looking for a file format that can use this pixel format without conversion, so I can extract directly into the memory-mapped file as I have done with the 32bpp format. What raster image file formats support 48bpp and/or 64bpp?
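
    A minimal C# sketch of the user-buffer locking described above ('screen' is the captured bitmap and 'mappedPtr' a pointer into the mapped view - both are assumed to be set up elsewhere):

        using System.Drawing;
        using System.Drawing.Imaging;

        Rectangle bounds = new Rectangle(0, 0, screen.Width, screen.Height);
        BitmapData data = new BitmapData();
        data.Width = screen.Width;
        data.Height = screen.Height;
        data.PixelFormat = screen.PixelFormat;  // same format, no conversion
        data.Stride = screen.Width * 8;         // 8 bytes per pixel at 64bpp
        data.Scan0 = mappedPtr;                 // pre-allocated buffer in the mapped file

        // UserInputBuffer makes GDI+ copy into Scan0 instead of its own buffer.
        screen.LockBits(bounds,
            ImageLockMode.ReadOnly | ImageLockMode.UserInputBuffer,
            screen.PixelFormat, data);
        screen.UnlockBits(data);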

  • Perl XML SAX parser emulating XML::Simple record by record

    - by DVK
    Short question summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one by one, and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. More importantly, it is very slow and a memory hog: being a DOM parser, it needs to build and store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record by record. However, re-writing the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like too big a task to be worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can produce $record hashrefs one by one from XML like the above, such that each one can be passed to $obj->process_record($record) and is 100% identical to what XML::Simple's hashrefs would have been.
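
    One possibility worth sketching: XML::Twig streams like a SAX parser, and its simplify() method is documented to mimic XML::Simple's output (the options needed to match a given XML::Simple configuration would have to be verified case by case; $obj is the object from the question's code):

        use XML::Twig;

        my $twig = XML::Twig->new(
            twig_handlers => {
                'root/rec' => sub {
                    my ($t, $rec) = @_;
                    # simplify() mimics XML::Simple's structures
                    $obj->process_record($rec->simplify());
                    $t->purge;    # discard the parsed record, keep memory flat
                },
            },
        );
        $twig->parsefile('records.xml');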

  • WCF Certificates without Certificate Store

    - by Kane
    My team is developing a number of WPF plug-ins for a 3rd party thick client application. The WPF plug-ins use WCF to consume web services published by a number of TIBCO services. The thick client application maintains a separate central data store and uses a proprietary API to access it. The thick client and WPF plug-ins are due to be deployed onto 10,000 workstations.

    Our customer wants to keep the certificate used by the thick client in the central data store, so that they don't need to worry about re-issuing the certificate (the current re-issue cycle takes about 3 months) and also have the opportunity to authorise its use. The proposed architecture offers a form of shared secret / authentication between the central data store and the TIBCO services. Whilst I don't necessarily agree with the proposed architecture, our team is not able to change it and must work with what's been provided.

    Basically, our client wants us to build into our WPF plug-ins a mechanism which retrieves the certificate from the central data store (allowed or denied based on roles in that data store) into memory, then uses the certificate to create the SSL connection to the TIBCO services. No use of the local machine's certificate store is allowed, and the in-memory version is to be discarded at the end of each session.

    So the question is: does anyone know if it is possible to pass an in-memory certificate to a WCF (.NET 3.5) service for SSL transport-level encryption?

    Note: I had asked a similar question (here) but have since deleted it and re-asked it with more information.
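
    For the client side, a minimal sketch assuming the raw PKCS#12 bytes and password have already been fetched from the data store (rawPfx, pfxPassword and TibcoServiceClient are illustrative names). ClientCredentials accepts any X509Certificate2 instance, so nothing in WCF itself forces the certificate to come from a store:

        using System.Security.Cryptography.X509Certificates;

        // rawPfx / pfxPassword: PKCS#12 bytes and password retrieved from
        // the central data store (illustrative names).
        var cert = new X509Certificate2(rawPfx, pfxPassword);

        var client = new TibcoServiceClient();   // generated WCF proxy (illustrative)
        client.ClientCredentials.ClientCertificate.Certificate = cert;

    One caveat to verify: on .NET Framework this constructor may still persist the private key to a temporary key container on disk, which matters given the "never touches the store" requirement.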

  • Debugging NSOperation EXC_BAD_ACCESS within graphics context

    - by Joe
    I tried everything to debug this one but I can't get to the bottom of it. This code lives in a subclass of NSOperation which is processed from a queue (borders is an ivar NSArray containing 5 UIImage objects):

        NSMutableArray *images = [[NSMutableArray alloc] init];
        for (unsigned i = 0; i < 5; i++) {
            CGSize size = CGSizeMake(60, 60);
            UIGraphicsBeginImageContext(size);
            CGPoint thumbPoint = CGPointMake(6, 6);
            [controller.image drawAtPoint:thumbPoint];
            CGPoint borderPoint = CGPointMake(0, 0);
            [[borders objectAtIndex:i] drawAtPoint:borderPoint];
            [images addObject:UIGraphicsGetImageFromCurrentImageContext()];
            UIGraphicsEndImageContext();
        }
        [images release];

    The code works fine most of the time, but when I push the iPhone by accessing subviews and pressing lots of buttons in the UI, I either get this exception, which is trapped by the operation:

        Exception Load view: *** -[NSCFArray insertObject:atIndex:]: attempt to insert nil

    or I get this:

        Program received signal: "EXC_BAD_ACCESS".

    The exception is caused because UIGraphicsGetImageFromCurrentImageContext() returns nil. I don't know how to debug the EXC_BAD_ACCESS, but I'm guessing that this error (in fact both of these errors) is caused by low memory. The debugger stops at the line:

        [controller.image drawAtPoint:thumbPoint];

    As I mentioned, I've trapped the exception so I can live with that, but the EXC_BAD_ACCESS is more serious. IF this is memory related, how can I tell, and is it possible to increase the memory available to NSOperation?
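
    At minimum, a defensive guard avoids the trapped insert-nil exception (an Objective-C sketch; it only hedges against the nil image, not the underlying memory pressure):

        UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
        if (composite != nil) {
            [images addObject:composite];
        } else {
            // Drawing failed (commonly under low memory); skip or retry
            // rather than inserting nil into the array.
        }

    Separately, it is worth checking whether this code runs off the main thread: an NSOperation queue usually does, and UIKit drawing (UIGraphicsBeginImageContext and friends) was only documented as thread-safe from iOS 4 onward, which could by itself explain sporadic nil contexts and crashes.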

  • GWT forced height HTMLPanel

    - by Nils
    Hello, I'm developing a GWT project and I've encountered a problematic cross-browser issue: when using Firefox, there are problems with the display of all the pages. I found the reason why. In UiBinder, each of my pages is wrapped by a g:HTMLPanel, at the start and the end of the XML file, wrapping the content of the whole page. When doing this, the generated code of the panel looks like:

        <div style="width: 100%; height: 100%; ...">

    The problem is that "height: 100%". If I remove it with Firebug, the display is perfect. So my goal is to programmatically remove that generated 100% height - but there seems to be no way to do it! I tried everything: setHeight, setSize, working on the Element itself with getElement().methods()... I tried things like style.clear(), everything that could have a chance to work. But in the generated code that "height: 100%" will ALWAYS be there. If I set the height to "50%" or "50px" it has no effect at all. I even tried to give the element an ID and change its style with pure JavaScript, but no solution either. Note: I'm sure that I'm working on the right element - adding a styleName, for example, works well. Any idea? Your help would be really appreciated; I have no clue how to remove this bit of generated code, and I've been looking for hours already. Best regards, Nils
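
    If the generated style is being re-applied after your code runs, one thing to try (a sketch, assuming GWT 2's Scheduler and a UiBinder field named root) is deferring the override until the current attach/layout pass has finished:

        // Runs after the current event loop / layout pass completes.
        Scheduler.get().scheduleDeferred(new Scheduler.ScheduledCommand() {
            public void execute() {
                root.getElement().getStyle().setProperty("height", "auto");
            }
        });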

  • Suggestions on how to build an HTML diff tool?

    - by Danimal
    In this post I asked if there were any tools that compare the structure (not the actual content) of 2 HTML pages. I ask because I receive HTML templates from our designers, and frequently miss minor formatting changes in my implementation. I then waste a few hours of designer time sifting through my pages to find my mistakes. The thread offered some good suggestions, but there was nothing that fit the bill. "Fine, then", thought I, "I'll just crank one out myself. I'm a halfway-decent developer, right?". Well, once I started to think about it, I couldn't quite figure out how to go about it. I can crank out a data-driven website easily enough, do a CMS implementation, or throw documents in and out of BizTalk all day - but I can't begin to figure out how to compare HTML docs. Well, sure, I have to read the DOM and iterate through the nodes. I have to map the structure to some data structure (how??), and then compare them (how??). It's a development task like none I've ever attempted. So now that I've identified a weakness in my knowledge, I'm even more challenged to figure it out. Any suggestions on how to get started?

    Clarification: the actual content isn't what I want to compare - the creative guys fill their pages with lorem ipsum, and I use real content. Instead, I want to compare structure:

        <div class="foo">lorem ipsum</div>

    is different than:

        <div class="foo"><p>lorem ipsum</p></div>
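
    One way to get started, sketched in C# (an illustration, not a known tool: it assumes templates well-formed enough to load as XML - real-world HTML would need a parser such as HtmlAgilityPack in front). Flatten each document into a list of structural signatures - element name plus attributes, text ignored - then run any ordinary diff over the two lists:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        static IEnumerable<string> Signatures(XElement e, int depth)
        {
            // Element name + sorted attributes only; text content is ignored.
            string attrs = string.Join(" ", e.Attributes()
                .OrderBy(a => a.Name.ToString())
                .Select(a => a.Name + "=" + a.Value).ToArray());
            yield return new string(' ', depth * 2) + e.Name + " " + attrs;
            foreach (XElement child in e.Elements())
                foreach (string sig in Signatures(child, depth + 1))
                    yield return sig;
        }

        // Usage: diff these two lists to see where the structures diverge.
        // List<string> a = Signatures(XElement.Load("template.html"), 0).ToList();
        // List<string> b = Signatures(XElement.Load("mine.html"), 0).ToList();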

  • Using jQuery and Prototype in the same page

    - by Don
    Hi, several of my pages use both jQuery and Prototype. Since I upgraded to version 1.3 of jQuery, this appears to be causing problems, because both libraries define a function named '$'. jQuery provides a function, noConflict(), which relinquishes control of $ to other libraries that may be using it. So it seems like I need to go through all my pages that look like this:

        <head>
            <script type="text/javascript" src="/obp/js/prototype.js"></script>
            <script type="text/javascript" src="/obp/js/jquery.js"></script>
        </head>

    and change them to look like this:

        <head>
            <script type="text/javascript" src="/obp/js/prototype.js"></script>
            <script type="text/javascript" src="/obp/js/jquery.js"></script>
            <script type="text/javascript">
                jQuery.noConflict();
                var $j = jQuery;
            </script>
        </head>

    I should then be able to use '$' for Prototype and '$j' (or 'jQuery') for jQuery. I'm not entirely happy about duplicating these 2 lines of code in every relevant page, and expect that at some point somebody is likely to forget to add them to a new page. I'd prefer to be able to do the following:

    Create a file jquery-noconflict.js which "includes" jquery.js and the 2 lines of code shown above.
    Import jquery-noconflict.js (instead of jquery.js) in all my JSP/HTML pages.

    However, I'm not sure if it's possible to include one JS file in another in the manner I've described. Of course, an alternative solution is simply to add the 2 lines of code to jquery.js directly, but then I'd need to remember to do it every time I upgrade jQuery. Thanks in advance, Don
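
    Plain JavaScript has no native include, but for synchronously loaded scripts a wrapper file can emulate one. A sketch of what jquery-noconflict.js could contain (document.write only works while the page is still being parsed, which is the case for ordinary head scripts):

        // jquery-noconflict.js - reference this instead of jquery.js
        document.write('<script type="text/javascript" src="/obp/js/jquery.js"><\/script>');
        document.write('<script type="text/javascript">jQuery.noConflict(); var $j = jQuery;<\/script>');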

  • MVC view engine that can render classic ASP?

    - by David Lively
    I'm about to start integrating ASP.NET MVC into an existing 200k+ line classic ASP application. Rewriting the existing code (which runs an entire business, not just the public-facing website) is not an option. The existing ASP 3.0 pages are quite nicely structured (given what was available when this model was put into place nearly 15 years ago), like so:

        <!--#include virtual="/_lib/user.asp"-->
        <!--#include virtual="/_lib/db.asp"-->
        <!--#include virtual="/_lib/stats.asp"-->
        <!--#include virtual="/_template/topbar.asp"-->
        <!--#include virtual="/_template/lftbar.asp"-->
        <h1>page name</h1>
        <p>some content goes here</p>
        <p>some content goes here</p>
        <p>.....</p>
        <h2>Today's Top Forum Posts</h2>
        <%=forum_top_posts(1)%>
        <!--#include virtual="/_template/footer.asp"-->

    ... or somesuch. Some pages will include function and sub definitions, but for the most part these exist in include files. I'm wondering how difficult it would be to write an MVC view engine that could render classic ASP, preferably using the existing ASP.DLL. In that case, I could replace

        <!--#include virtual="/_lib/user.asp"-->

    with

        <%= Html.RenderPartial("lib/user"); %>

    as I gradually move existing pages from ASP to ASP.NET. Since the existing formatting and library includes are used very extensively, they must be maintained. I'd like to avoid having to maintain two separate versions. I know the ideal way to go about this would be to simply start a new app and say to hell with the ASP code; however, that simply ain't gonna happen. Ideas?

  • CakePHP find('list') problem

    - by jun
    Hi there. I have these tables in my DB:

        Group
        id - name
        1  - abc
        2  - def
        3  - ghi

        Pages
        id - group_id - name
        1  - 1        - home
        2  - 1        - about us

    Now I want to make a select box that groups the pages by 'group', using:

        function add() {
            $this->set('pages', $this->Page->find('list', array(
                'fields' => array('Page.id', 'Page.name', 'Page.group_id'))));
        }

    In my add.ctp:

        echo $form->input('group_id', array('options' => $pages));

    The output:

        <select name="data[Page][id]" id="PageId">
            <optgroup label="1">
                <option value="1">Home</option>
                <option value="2">About Us</option>
            </optgroup>
        </select>

    I want the optgroup to display the actual group name, not the group id, like this:

        <select name="data[Page][id]" id="PageId">
            <optgroup label="abc">
                <option value="1">Home</option>
                <option value="2">About Us</option>
            </optgroup>
        </select>

    I have tried this:

        $this->Page->find('list', array(
            'conditions' => 'Group.id = Page.id',
            'fields' => array('Page.id', 'Page.name', 'Group.name')));

    But 'Group.id' and 'Group.name' are unknown. Thanks!
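
    The Group fields only become known when the association is actually joined into the query. A sketch of what might work in CakePHP 1.2/1.3, assuming Page belongsTo Group (unverified - the association must be defined for this to do anything); a three-field find('list') uses the third field as the optgroup key:

        // In the Page model (assumption): var $belongsTo = array('Group');
        $pages = $this->Page->find('list', array(
            'fields'    => array('Page.id', 'Page.name', 'Group.name'),
            'recursive' => 0,   // joins the belongsTo Group table
        ));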

  • WPF compile error "IDictionary must have a Key attribute"

    - by the empirical programmer
    I've created control styles I want to use among multiple XAML pages in my WPF app. To do this I created a Resources.xaml and added the styles there. Then in my pages I add this code:

        <Grid.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="pack://application:,,,/SampleEventTask;component/Resources.xaml" />
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Grid.Resources>

    On two pages this works fine, but on the 3rd page I get a compile error that says:

        All objects added to an IDictionary must have a Key attribute or some other type of key associated with them.

    If I add a key to this, as in ResourceDictionary x:Key="x", the compile error goes away, but on running the app it fails to find the style. I can make the compile error go away and have the app run by moving the original (no key specified) ResourceDictionary XAML from the top-level Grid into a contained Grid on that page. But I don't understand what is going on here. Any suggestions as to what the problem is? Am I just missing something or doing something incorrectly? Is there a better way to share styles? Thanks
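
    On the last question: a common alternative is to merge the dictionary once at application scope, so individual pages don't have to repeat it. A sketch (resource keys resolve application-wide from here):

        <!-- App.xaml -->
        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="pack://application:,,,/SampleEventTask;component/Resources.xaml" />
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>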

  • Windows Phone - OnNavigatingFrom - problem

    - by Martin Anderson
    I believe this is just a problem for me, due to my lack of programming ability. I am currently exploring transitions between pages in Windows Phone apps. I was originally using storyboards and Completed event handlers to have my pages animate on and off screen, but this leads to a problem when you want to navigate to many pages from one page using the same transition. So I have started looking at the OnNavigatedTo and OnNavigatingFrom events, and while it works nicely for OnNavigatedTo, the latter just won't work. It seems the Microsoft.Phone.Navigation assembly does not contain OnNavigatingFrom; referencing System.Windows.Navigation compiles OK, but I can't get pages to animate off upon navigation. I have a button on my Page2 which I want to go back to my MainPage (after I overrode the back key with a message box for testing). I have transitions made on the page, and this is the event handler code:

        private void btnP2_BackToP1Clicked(object sender, System.Windows.RoutedEventArgs e)
        {
            NavigationService.Navigate(new Uri("/MainPage.xaml", UriKind.Relative));
        }

    With this code for the OnNavigatedTo and OnNavigatingFrom events:

        protected override void OnNavigatedTo(PhoneNavigationEventArgs e)
        {
            PageTransition_In.Begin();
        }

        protected override void OnNavigatingFrom(NavigatingCancelEventArgs e)
        {
            PageTransition_Out.Begin();
            base.OnNavigatingFrom(e);
        }

    I have the feeling that OnNavigatingFrom may not (yet) be supported for Windows Phone apps. OnNavigatedFrom is part of Microsoft.Phone.Navigation, but it only fires once the page is no longer active, which is too late to perform any animation effects.
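
    One pattern that sidesteps OnNavigatingFrom entirely is to start the exit storyboard in the click handler and navigate only when it completes. A sketch using the names from the code above (in real code the Completed handler should be attached once, not on every click):

        private void btnP2_BackToP1Clicked(object sender, System.Windows.RoutedEventArgs e)
        {
            // Navigate only after the exit animation has finished.
            PageTransition_Out.Completed += (s, args) =>
                NavigationService.Navigate(new Uri("/MainPage.xaml", UriKind.Relative));
            PageTransition_Out.Begin();
        }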

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal: pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7.

    Detailed description: The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images in order to display them. The first process uses a third-party API to paint the images from the device to an HDC, which has to be provided by me. Note: there is already a connection open between the two processes; they communicate via anonymous pipes and share memory-mapped file views.

    Thoughts: How would I achieve this goal with as little work as possible - both for me and for the computer? I am using Delphi, so maybe there is a component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side and paint it there to the destination HDC. I have also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!
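
    One Win32-level idea worth sketching in Delphi: CreateDIBSection accepts a file-mapping handle, so the third-party API could paint straight into the shared view, with no intermediate copy at all. hMapping is assumed to be the existing mapping handle, and the sizes are illustrative:

        uses Windows;

        procedure CreateSharedCanvas(hMapping: THandle);
        var
          bmi: TBitmapInfo;
          bits: Pointer;
          bmp: HBITMAP;
          dc: HDC;
        begin
          FillChar(bmi, SizeOf(bmi), 0);
          bmi.bmiHeader.biSize := SizeOf(TBitmapInfoHeader);
          bmi.bmiHeader.biWidth := 500;
          bmi.bmiHeader.biHeight := -300;   // negative height = top-down DIB
          bmi.bmiHeader.biPlanes := 1;
          bmi.bmiHeader.biBitCount := 32;
          bmi.bmiHeader.biCompression := BI_RGB;

          // The DIB's pixel bits live inside the shared mapping, so the
          // receiving process sees each frame without any extra copy.
          bmp := CreateDIBSection(0, bmi, DIB_RGB_COLORS, bits, hMapping, 0);
          dc := CreateCompatibleDC(0);
          SelectObject(dc, bmp);            // hand this DC to the painting API
        end;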

  • Ruby page loading very very slowly - how should I speed it up?

    - by Elliot
    Hey guys, I'm going to try to describe the code in my view without actually posting all the garbage. It has a standard shell (header, footer etc. in the layout); this is also where the sub-navigation lives, which is based on a loop (to find the number of options) - on this page, we have 6 sub-nav links. Then in the index view we have a third-level nav with 3 links that use JavaScript to show and hide divs on the page. This means each of the original 6 options has its own third-level nav, each with its own 3 div "pages". These three divs contain the input form for creating a record in Rails, while the other 2 show the records in different assortments. ALL of this code lives on one page (aside from the shell). The original sub-nav uses a JavaScript tab solution to browse through all of it... (this means it's about 6 divs, which each contain 4 divs of function - so about 24 heavy divs). Loading it seems to take forever, although once loaded it's extremely fast (obviously). My big question is: how should I attack this? I don't know AJAX, although I imagine it'd be a good solution for loading the tabs when clicked. Thanks! Elliot

  • Android: Playing a bigger WAV audio file produces a crash

    - by user187532
    Hi Android experts, I am trying to play a bigger audio WAV file (about 20 MB) using the following AudioTrack code on my Android 1.6 HTC device, which basically has little memory. The device crashes as soon as it executes the read, write and play, yet the same code works fine and plays smaller WAV files (10 KB, 20 KB etc.) very well. P.S.: I have to play a PCM (.wav) buffer, which is the reason I use AudioTrack here. Given the limited memory, how can I read the bigger audio file's bytes piece by piece and play them, to avoid crashing due to memory constraints?

        private void AudioTrackPlayPCM() throws IOException {
            String filePath = "/sdcard/myWav.wav"; // 20 MB file
            byte[] byteData = null;
            File file = new File(filePath);
            byteData = new byte[(int) file.length()];

            FileInputStream in = null;
            try {
                in = new FileInputStream(file);
                in.read(byteData);
                in.close();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }

            int intSize = android.media.AudioTrack.getMinBufferSize(8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_8BIT);
            AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_8BIT, intSize,
                    AudioTrack.MODE_STREAM);
            at.play();
            at.write(byteData, 0, byteData.length);
            at.stop();
            at.release();
        }

    Could someone guide me on how to adapt this AudioTrack code for bigger WAV files?
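
    Since MODE_STREAM accepts incremental writes, a chunked loop along these lines never holds the whole file in memory (a sketch: the WAV header handling is simplified - a real .wav starts with a 44-byte header that should be skipped, and the sample rate/format should be read from it rather than hard-coded):

        private void playWavStreaming(String filePath) throws IOException {
            int minSize = AudioTrack.getMinBufferSize(8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_8BIT);
            AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_8BIT, minSize,
                    AudioTrack.MODE_STREAM);
            byte[] chunk = new byte[minSize];
            FileInputStream in = new FileInputStream(filePath);
            try {
                at.play();
                int read;
                while ((read = in.read(chunk)) > 0) {
                    at.write(chunk, 0, read);  // blocks until the track consumes it
                }
            } finally {
                in.close();
                at.stop();
                at.release();
            }
        }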

  • Algorithm to measure how identical two texts are, as a percentage

    - by qster
    What algorithm would you suggest to identify how identical two texts are, from 0 to 1 (as a float)? Note that I don't mean similar (i.e., saying the same thing in a different way); I mean the exact same words, where one of the two texts could have extra words, slightly different words, or extra new lines and stuff like that. A good example of the algorithm I want is the one Google uses to identify duplicate content on websites ("X search results very similar to the ones shown have been omitted, click here to see them"). The reason I need it is that my website lets users post comments; similar but different pages currently have their own comments, so many users ended up copying and pasting their comments onto all the similar pages. Now I want to merge them (all similar pages will "share" the comments, so a comment posted on page A will appear on similar page B), and I would like to programmatically erase all those copy&pasted comments from the same user. I have quite a few million comments, but speed shouldn't be an issue since this is a one-time job that will run in the background. The programming language doesn't really matter (as long as it can interface with a MySQL database), but I was thinking of doing it in C++.
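
    One concrete starting point, sketched in C++ (an illustration, not Google's actual method): tokenize into words and score with the longest common subsequence, so extra or changed words reduce the score proportionally. 2*LCS/(len_a+len_b) gives 1.0 for identical texts and 0.0 for completely disjoint ones:

        #include <algorithm>
        #include <sstream>
        #include <string>
        #include <vector>

        // 0.0..1.0: word-level similarity via longest common subsequence.
        double similarity(const std::string& a, const std::string& b) {
            std::vector<std::string> wa, wb;
            std::istringstream sa(a), sb(b);
            std::string w;
            while (sa >> w) wa.push_back(w);   // whitespace tokenization
            while (sb >> w) wb.push_back(w);
            if (wa.empty() && wb.empty()) return 1.0;

            // Classic O(n*m) LCS table over words, not characters.
            std::vector<std::vector<int> > lcs(wa.size() + 1,
                std::vector<int>(wb.size() + 1, 0));
            for (size_t i = 1; i <= wa.size(); ++i)
                for (size_t j = 1; j <= wb.size(); ++j)
                    lcs[i][j] = (wa[i-1] == wb[j-1])
                        ? lcs[i-1][j-1] + 1
                        : std::max(lcs[i-1][j], lcs[i][j-1]);

            return 2.0 * lcs[wa.size()][wb.size()] / (wa.size() + wb.size());
        }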

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one process can be running simultaneously; what I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO, which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps each core has its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does a dual-core processor handle an opcode memory collision? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or new value at the time of the collision? I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test whether ZTable uses fetch techniques or not. In the future we may migrate our lesser system to PostgreSQL; it currently uses "Table" components (as in the BDE, but against an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast: the Locate/Lookup is started on the server and only those N records are refreshed, no matter how many records are in the lookup table. PostgreSQL uses fetch techniques as far as I know, and I tested this with a table (id int, name varchar(100)) holding 1 million records (I also tried this with MySQL). The adapter is Zeos. The columns below are: ID (the record to find), seconds to find it, and allocated memory in bytes on the client.

        MySQL
        500000    2,761   113 196 344
        1000000   3,214   225 471 232
        313800    0,437   225 471 232
        328066    0,468   225 471 232
        276374    0,390   225 471 232
        905984    1,264   225 471 232
        260253    0,359   225 471 232

        PGSQL
        500000    3,042   113 188 184
        1000000   3,744   225 463 064
        313800    0,436   225 463 064
        328066    0,452   225 463 064
        276374    0,375   225 463 064
        905984    1,295   225 463 064
        260253    0,359   225 463 064
        142023    0,203   225 463 064

    As you can see, the records are fetched locally - this causes the 225 MB usage - and searches are a little slow, depending on where the record we must find is located. I want to ask a few more things:

    a.) Does PGDAC have some technique that lets us use lookups without paying for the fetch in memory and seconds?
    b.) Or can the PostgreSQL ODBC driver help with this problem via ADO (as I know ADO can use server-side cursors)?
    c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not (client memory usage included)?
    d.) If there is no chance of avoiding fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup field changes without a real lookup?

    Thanks for your help: dd
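
    For comparison, PostgreSQL does expose server-side cursors at the SQL level, which is the mechanism a windowed "table" component needs underneath (a sketch in plain SQL; whether PGDAC or Zeos surfaces this for lookups is exactly the open question):

        BEGIN;
        DECLARE lookup_cur SCROLL CURSOR FOR
            SELECT id, name FROM test_table ORDER BY id;
        FETCH 50 FROM lookup_cur;             -- one "window" of records
        MOVE ABSOLUTE 500000 IN lookup_cur;
        FETCH 50 FROM lookup_cur;             -- window around row 500000
        COMMIT;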

  • WordPress... is it a CMS?

    - by madcolor
    I was recently contacted by a client who simply wanted to increase their organic rankings in Google. My approach was to do the following:

    A) Take them off their overpriced host and move them to a nicer, cheaper, more feature-rich hosting solution which included a simple WordPress install.
    B) Apply a theme to WordPress which followed the look and feel of their existing website.
    C) Train my client on how to log in to their copy of WordPress and create/manage pages and posts.

    This took me very little time - most of the work was in converting ASP forms into PHP and tweaking a theme to fit their design. Now my client is able to create and manage as many pages or posts as they desire. I believe, for this purpose, WordPress was the easiest solution. Would you categorize WordPress as a CMS?

    Similar:
    http://stackoverflow.com/questions/250880/is-there-any-cms-better-than-wordpress-or-should-i-roll-my-own
    http://stackoverflow.com/questions/1101818/wordpress-as-a-cms-option
    http://stackoverflow.com/questions/490899/which-cms-to-choose
    http://stackoverflow.com/questions/629203/is-it-advisable-to-use-wordpress-as-cms-closed
    http://stackoverflow.com/questions/1451326/is-wordpress-good-for-building-a-large-cms-site-with-many-pages-about-over-100-00

    External:
    http://codex.wordpress.org/User%3ALastnode/Wordpress%5FCMS
    http://wordpress.org/development/2009/11/wordpress-wins-cms-award/

  • How to efficiently serve massive sitemaps in django

    - by mlissner
    I have a site with about 150K pages in its sitemap. I'm using the sitemap index generator to make the sitemaps, but really I need a way of caching them, because building the 150 sitemaps of 1,000 links each is brutal on my server. [1]

    I COULD cache each of these sitemap pages with memcached, which is what I'm using elsewhere on the site... however, there are so many sitemaps that they would completely fill memcached, so that doesn't work.

    What I think I need is a way to use the database as the cache for these, and to only generate them when there are changes (which, as a result of the sitemap index, means changing only the latest couple of sitemap pages, since the rest are always the same). [2] But, as near as I can tell, I can only use one cache backend with Django.

    How can I have these sitemaps ready for when Google comes a-crawlin' without killing my database or memcached? Any thoughts?

    [1] I've limited it to 1,000 links per sitemap page because generating the max, 50,000 links, just wasn't happening.

    [2] For example, if I have sitemap.xml?page=1, page=2... sitemap.xml?page=50, I only really need to regenerate sitemap.xml?page=50 until it is full with 1,000 links; then I can cache it pretty much forever, focus on page 51 until it's full, cache that forever, etc.
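
    A database-backed cache can live entirely outside Django's cache framework, leaving memcached free for the rest of the site. A minimal sketch of the idea (model, view and helper names are illustrative - build_sitemap_xml and count_links stand in for the existing generator - and the invalidation rule follows footnote [2]: only a not-yet-full page is ever rebuilt):

        from django.db import models
        from django.http import HttpResponse

        class SitemapPage(models.Model):
            page = models.PositiveIntegerField(unique=True)
            xml = models.TextField()
            full = models.BooleanField(default=False)  # 1,000 links reached?

        def cached_sitemap(request, page):
            page = int(page)
            try:
                cached = SitemapPage.objects.get(page=page)
            except SitemapPage.DoesNotExist:
                cached = None
            if cached is None or not cached.full:
                xml = build_sitemap_xml(page)              # existing generator (hypothetical)
                cached, _ = SitemapPage.objects.get_or_create(page=page)
                cached.xml = xml
                cached.full = count_links(page) >= 1000    # hypothetical helper
                cached.save()
            return HttpResponse(cached.xml, mimetype='application/xml')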

  • Automatic multi-page multi-column flowing text with QtWebkit (HTML/CSS/JS -> PDF)

    - by Peter Boughton
    I have some HTML documents that are converted to PDF, using software that renders with QtWebkit (not sure which version). Currently, the documents contain specific tags to split the text into columns and pages - so whenever the wording changes, there is a manual, time-consuming process of moving these tags so that the columns and pages fit. Can anyone provide a way to have text auto-wrapped into the next column/page (as appropriate) when it reaches the bottom of the current container? Any HTML, CSS or JS supported by QtWebkit is OK (assuming it works in the PDF converter). (I have tested the CSS3 webkit-column-* properties and it appears QtWebkit does not support them.) To make things more exciting, the solution also needs to:

    - put a header at the top of each page, with "page X of Y" numbering;
    - if there is an odd number of pages, add a blank page at the end (with no header);
    - support saying "don't break inside this block" or "don't break after this header".

    I have put together some quick example initial markup and target markup to help explain what I'm trying to do. (The actual documents are far more complicated, but I need a simple proof of concept before I attack the real ones.) Any suggestions?

    Update: I've got a partially working solution using Aaron's "filling up" suggestion - I'll post more details in a bit.
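
    The "filling up" approach mentioned in the update can be sketched in plain DOM JavaScript (names are illustrative, and the keep-together rules would need to be layered on top): move nodes one at a time into a fixed-height column until it overflows, then open the next column.

        // source: element holding the flowed content, in document order.
        // makeColumn(): creates and returns the next fixed-height column div.
        function fillColumns(source, makeColumn) {
            var column = makeColumn();
            while (source.firstChild) {
                var node = source.firstChild;
                column.appendChild(node);
                // Overflow? Push the node back out and open a new column.
                if (column.scrollHeight > column.clientHeight) {
                    column.removeChild(node);
                    column = makeColumn();
                    column.appendChild(node);
                }
            }
        }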

  • How to build a ~500 page Flash site

    - by philwilks
    I am about to embark on building a Flash site with approximately 500 pages. The site is an interactive learning system, with about 10 "chapters" each containing around 50 "pages". Each page has some sort of animation and interactivity; for example, the user might have to decide whether a statement is true or false by clicking on one of two buttons, after which an appropriate response is displayed. The user can jump backwards and forwards between pages as they wish. As far as I know, these are some of my options:

    A) Build the entire site as a single Flash file with no external content.
    B) Build each of the 10 chapters as a separate Flash file, and then have a master Flash file which loads in the chapters. Each page would then be a separate movie clip within the chapter file.
    C) Build each individual page as a separate Flash file, and then have a master Flash file which loads these in.

    At the moment I'm thinking that option B would be best, and I'd be very grateful for your thoughts on this! Of course, there are probably other options that I haven't thought of.
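
    For a sense of what option B's master file involves, a minimal ActionScript 3 sketch (the file naming is illustrative; each loaded chapter SWF would manage its own 50 page clips):

        var chapterLoader:Loader = new Loader();

        function loadChapter(n:int):void {
            chapterLoader.contentLoaderInfo.addEventListener(
                Event.COMPLETE, onChapterLoaded);
            chapterLoader.load(new URLRequest("chapter" + n + ".swf"));
        }

        function onChapterLoaded(e:Event):void {
            addChild(chapterLoader.content); // swap the chapter onto the stage
        }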

  • Boost::Mutex & Malloc

    - by M. Tibbits
    Hi all, I'm trying to use a faster memory allocator in C++. I can't use Hoard due to licensing / cost. I was using nedmalloc in a single-threaded setting and got excellent performance, but I'm wondering if I should switch to something else: as I understand things, nedmalloc is just a replacement for the C-based malloc() and free(), not the C++-based new and delete operators (which I use extensively). The problem is that I now need to be thread-safe, so I'm trying to malloc an object which is reference counted (to prevent excess copying), but which also contains a mutex pointer. That way, if you're about to delete the last copy, you first lock the mutex, then free the object, and lastly unlock and free the mutex. However, using malloc to create a boost::mutex appears impossible, because I can't initialize the private object - calling the constructor directly is verboten. So I'm left with this odd situation where I'm using new to allocate the lock and nedmalloc to allocate everything else. But when I allocate a large amount of memory, I run into allocation errors (which disappear when I switch to malloc instead of nedmalloc - but then the performance is terrible). My guess is that this is due to fragmentation in the memory and an inability of nedmalloc and new to play nice side by side. There has to be a better solution. What would you suggest?
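
    On the "calling the constructor directly is verboten" point: C++ can construct an object in storage obtained from any allocator via placement new, so the mutex does not have to come from the global operator new at all. A sketch (the nedmalloc function and namespace names are assumptions based on its malloc-compatible API):

        #include <new>                      // placement new
        #include <boost/thread/mutex.hpp>

        // Construct a boost::mutex inside nedmalloc'd storage.
        void* raw = nedalloc::nedmalloc(sizeof(boost::mutex));
        boost::mutex* m = new (raw) boost::mutex();

        // ... use *m ...

        // Destruction is manual and must mirror the construction.
        m->~mutex();
        nedalloc::nedfree(raw);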

  • How do I create self-relationships in polymorphic inheritance in Elixir and Pylons?

    - by Turukawa
    I am new to programming and am following the example in the Pylons documentation on creating a wiki. The database I want to link to the wiki was created with Elixir, so I rewrote the wiki database schema and have continued from there. In the wiki there is a requirement for a Navigation table, which is inherited by Pages and Sections. A section can have many pages, while a page can have only one section. In addition, sibling nodes are chain-referenced to each other. So:

    Nav has "section" (OneToMany) and "before" (OneToOne - to reference the preceding node)
    Page has "section" (ManyToOne - many pages in one section) and inherits "before"
    Section inherits everything from Nav

    The code I've written looks like this:

        class Nav(Entity):
            using_options(inheritance='multi')
            name = Field(Unicode(30), default=u'Untitled Node')
            path = Field(Unicode(255), default=u'')
            section = OneToMany('Page', inverse='section')
            after = OneToOne('Nav', inverse='before')
            before = OneToMany('Nav', inverse='after')

        class Page(Nav):
            using_options(inheritance='multi')
            content = Field(UnicodeText, nullable=False)
            posted = Field(DateTime, default=now())
            title = Field(Unicode(255), default=u'Untitled Page')
            heading = Field(Unicode(255))
            tags = ManyToMany('Tag')
            comments = OneToMany('Comment')
            section = ManyToOne('Nav', inverse='section')

        class Section(Nav):
            using_options(inheritance='multi')

    Errors received on this:

        sqlalchemy.exc.OperationalError: (OperationalError) table nav has no column named aftr_id
        u'INSERT INTO nav (name, path, aftr_id, row_type) VALUES (?, ?, ?, ?)'

    I've also tried before = ManyToMany('Nav', inverse='before') on Nav in the hope that this might crack the problem, but no luck. The original SQLAlchemy code from the tutorial for these declarations is as follows:

        nav_table = schema.Table('nav', meta.metadata,
            schema.Column('id', types.Integer(),
                schema.Sequence('nav_id_seq', optional=True), primary_key=True),
            schema.Column('name', types.Unicode(255), default=u'Untitled Node'),
            schema.Column('path', types.Unicode(255), default=u''),
            schema.Column('section', types.Integer(), schema.ForeignKey('nav.id')),
            schema.Column('before', types.Integer(), default=None),
            schema.Column('type', types.String(30), nullable=False)
        )

        page_table = schema.Table('page', meta.metadata,
            schema.Column('id', types.Integer, schema.ForeignKey('nav.id'),
                primary_key=True),
            schema.Column('content', types.Text(), nullable=False),
            schema.Column('posted', types.DateTime(), default=now),
            schema.Column('title', types.Unicode(255), default=u'Untitled Page'),
            schema.Column('heading', types.Unicode(255)),
        )

        section_table = sa.Table('section', meta.metadata,
            schema.Column('id', types.Integer, schema.ForeignKey('nav.id'),
                primary_key=True),
        )

        orm.mapper(Nav, nav_table, polymorphic_on=nav_table.c.type,
            polymorphic_identity='nav')
        orm.mapper(Section, section_table, inherits=Nav,
            polymorphic_identity='section')
        orm.mapper(Page, page_table, inherits=Nav, polymorphic_identity='page',
            properties={
                'comments': orm.relation(Comment, backref='page', cascade='all'),
                'tags': orm.relation(Tag, secondary=pagetag_table)
            })

    Any help is much appreciated.
