Search Results

Search found 23236 results on 930 pages for 'content strategy'.


  • Strategy for unsubscribing event handlers

    - by stiank81
    In my WPF application I have a View that is given a ViewModel, and when it is given this ViewModel it adds event handlers to the ViewModel's PropertyChanged event. When some action occurs in the GUI I remove the View and add another View to the holding container, where this new one is bound to the same ViewModel. After this has happened, the old View still keeps handling PropertyChanged events on the ViewModel. I'm assuming this happens because the View hasn't been collected by the Garbage Collector yet, and is therefore still alive? Well, I need it to stop. My assumption is that I need to manually detach the event handler from the ViewModel? Is there a best practice for handling this?
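
    A minimal sketch of one common fix, assuming a hypothetical ChildView that subscribes in its constructor and exposes a Detach method the container calls before swapping views (the names are illustrative, not from the question):

        using System.ComponentModel;

        public class ChildView
        {
            private readonly INotifyPropertyChanged _viewModel;

            public ChildView(INotifyPropertyChanged viewModel)
            {
                _viewModel = viewModel;
                // Subscribe once; the matching -= in Detach is what stops the
                // replaced view from continuing to handle PropertyChanged.
                _viewModel.PropertyChanged += OnViewModelPropertyChanged;
            }

            public void Detach()
            {
                // Called by the holding container just before the view is swapped out.
                _viewModel.PropertyChanged -= OnViewModelPropertyChanged;
            }

            private void OnViewModelPropertyChanged(object sender, PropertyChangedEventArgs e)
            {
                // React to ViewModel changes here.
            }
        }

    An alternative is WPF's weak-event pattern (WeakEventManager), which avoids the explicit Detach call at the cost of a less obvious subscription.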

    Read the article

  • VS2010 and CSS: What is the best strategy to individually position form controls

    - by George
    OK, I have a ton of controls on my page that I need to place individually. I need to set a margin here, a padding there, etc. None of these particular styles will be applied to more than one control. What is the best practice for deciding at which level the style is placed? My choices are:
    1) External CSS file.
    1A) Using ClientIDMode = AutoID (the default), I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.
    1B) Using ClientIDMode = Predictable, I could determine what the ID will be for the controls of interest and create an ID selector in the external CSS file (#ControlID { style }). However, I fear maintenance issues, because adding or removing parent containers would cause the ID to change.
    1C) Using ClientIDMode = Static, I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.
    2) I could place the style right on the control. The only disadvantage here, as I see it, is that the style info is downloaded each time instead of being cached, which is what I'd get with an external CSS file.
    If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "it's nice to have all the CSS in one place"?

    Read the article

  • Drupal: Template Files, Modules and Content Types for Advanced Theme

    - by theandym
    Intro: I am in the process of trying to convert my first HTML/CSS design into a theme for Drupal. I have used ModX for quite a few designs and appreciate the ability to create different page templates and custom variables assigned to those templates. However, I seem to be having some issues making the transition. The site I am theming in Drupal is for a real estate agent. Each page/section will have a different set of content associated with it and will need to display only that content. For example, there will be a page for current listings, each of which will be formatted by a custom content type. However, when I call the content on the home page (or on other pages) I do not want to see this listing data.
    Layout: The layout of the site and the regions associated with each page/section is as follows:
    - Home: Spotlight, Featured 1, Featured 2
    - About: Spotlight, Bios (profiles of each agent, each a node with name, contact info, pic, etc.; multiple nodes listed on the page), Sidebar
    - Listings: Spotlight, Listings (profiles of properties, each a node with location, basic info, pic, etc.; multiple nodes listed on the page), Sidebar
    - Services: Spotlight, Content (a general paragraph text area), Sidebar
    - News/Blog: News/Blog items (a list of stories with summaries and links to the full articles), Sidebar
    Each page/section will use the same header and footer.
    Issue: I have done some reading on Drupal, custom content types (and CCK), Views, and Pathauto. However, I have not been able to get a clear picture of how to put it all together to accomplish what I am attempting. What I really would like to know is which modules to use, how best to use them, which elements I need to use where, and what template files I should use to theme those elements. Any help or reference to useful resources would be much appreciated.

    Read the article

  • Calling stored procedures and performance strategy

    - by Costa
    Hi, I find myself in a situation where I have to choose between either creating a new stored procedure in the database and writing the middle-layer code for it, losing some precious development time (the procedure is also likely to contain some joins), or using two existing stored procedures. The problem with the second approach is that I am making two round trips to the database, which can mean poor performance, especially if the database is on another server. Which approach would you go with, and why? Thanks
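
    If the two-round-trip cost is the main worry, one option is to batch both existing procedure calls into a single command and read the result sets in sequence. A rough ADO.NET sketch, assuming two hypothetical procedures GetOrderHeader and GetOrderLines (not from the question):

        using System.Data;
        using System.Data.SqlClient;

        public static class OrderLoader
        {
            public static void LoadOrder(string connectionString, int orderId)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = conn.CreateCommand())
                {
                    // Both procedures execute in one batch, i.e. one round trip to the server.
                    cmd.CommandType = CommandType.Text;
                    cmd.CommandText = "EXEC dbo.GetOrderHeader @id; EXEC dbo.GetOrderLines @id;";
                    cmd.Parameters.AddWithValue("@id", orderId);

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read()) { /* consume header rows */ }
                        reader.NextResult();   // move to the second procedure's result set
                        while (reader.Read()) { /* consume line rows */ }
                    }
                }
            }
        }

    This keeps the existing procedures untouched while avoiding the extra network hop; a new joined procedure is still worth it if the two result sets should really be one.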

    Read the article

  • Using HTTP Vary header to decide on a strategy to process a request

    - by Jacques René Mesrine
    I have a specific REST endpoint that creates a topic in a forum, but I want to apply different strategies when processing the request. E.g. if client A makes the call, perform moderation; if client B makes the call, do something else. The easiest would be to add a query param for differentiation: POST /resource?from=xyz. Another brilliant idea is to use the Vary HTTP header: POST /resource with Vary: xyz. Any problems with this approach?
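
    Worth noting: Vary is defined as a response header (it tells caches which request headers a response depends on), so for selecting server-side behaviour a custom request header or the query param is the more conventional route. A minimal sketch of dispatching on a hypothetical X-Client-Id request header (all names are assumptions, not from the question):

        using System;
        using System.Collections.Generic;

        public interface ITopicCreationStrategy
        {
            void CreateTopic(string title, string body);
        }

        public class ModeratedStrategy : ITopicCreationStrategy
        {
            public void CreateTopic(string title, string body) { /* queue for moderation */ }
        }

        public class DirectStrategy : ITopicCreationStrategy
        {
            public void CreateTopic(string title, string body) { /* publish immediately */ }
        }

        public static class TopicEndpoint
        {
            // Strategy chosen per calling client, keyed on a custom request header value.
            private static readonly Dictionary<string, ITopicCreationStrategy> Strategies =
                new Dictionary<string, ITopicCreationStrategy>(StringComparer.OrdinalIgnoreCase)
                {
                    { "clientA", new ModeratedStrategy() },
                    { "clientB", new DirectStrategy() },
                };

            public static void HandlePost(IDictionary<string, string> requestHeaders, string title, string body)
            {
                string clientId;
                ITopicCreationStrategy strategy;
                if (!requestHeaders.TryGetValue("X-Client-Id", out clientId) ||
                    !Strategies.TryGetValue(clientId, out strategy))
                {
                    strategy = new DirectStrategy();   // fall back to the default behaviour
                }
                strategy.CreateTopic(title, body);
            }
        }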

    Read the article

  • Compressing xls content with apache deflate module

    - by Clinton Bosch
    I am trying to compress an Excel spreadsheet being sent from my application using the Apache deflate module. I have added the following line to my sites-enabled file: AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/excel. But it seems to make the response data bigger. Using Firebug, without the module I downloaded the xls spreadsheet from the application and it downloaded 100 KB of data; the file size once on the filesystem was also 100 KB, as expected. Once I enabled the deflate module as described above and repeated the process, the amount of data downloaded was 295 KB, but the file was still only 100 KB once saved on the filesystem. As an experiment I manually gzipped the saved xls file and it compressed to 20 KB. What am I doing wrong here?
    With deflate (Firebug output): 200 OK, xxxxxxx.co.za, 293 KB, 4.43s. Response headers:
    Date: Tue, 03 Nov 2009 13:01:43 GMT
    Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
    Content-Disposition: attachment; filename="Employee List.xls"
    Vary: Accept-Encoding
    Content-Encoding: gzip
    Content-Type: application/excel
    Without deflate (Firebug output): 200 OK, xxxxxxxx.co.za, 100 KB, 3.46s. Response headers:
    Date: Tue, 03 Nov 2009 13:06:00 GMT
    Server: Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
    Content-Disposition: attachment; filename="Employee List.xls"
    Content-Length: 102912
    Content-Type: application/excel

    Read the article

  • XSL-FO: Static content AND Flow content in Region-Body: Possible?

    - by Peterdk
    I have the following problem: I need to use XSL-FO to generate a two-column multipage document. Problem is: I need a vertical line between the two columns. Since XSL-FO does not seem to specify an option for creating such a divider, I need to put it there manually. I was thinking of using a static rotated block-container with a leader in it. However, it looks like it's not possible to use static-content on the same region that the flow content goes to.
    <fo:layout-master-set>
      <fo:simple-page-master page-width="170mm" page-height="222mm" master-name="page">
        <fo:region-body region-name="xsl-region-body" margin-top="2mm" margin-bottom="2mm"
                        margin-left="10mm" margin-right="10mm" column-count="2" column-gap="5mm"/>
      </fo:simple-page-master>
    </fo:layout-master-set>
    <fo:page-sequence master-reference="page">
      <fo:static-content flow-name="xsl-region-body"><!-- This gives an error -->
        <fo:block>test</fo:block>
      </fo:static-content>
      <fo:flow flow-name="xsl-region-body">
        <xsl:apply-templates/>
      </fo:flow>
    </fo:page-sequence>
    Results in (XEP): [error] Duplicate identifier: flow-name="xsl-region-body". Property 'flow-name' should be unique within 'fo:page-sequence'.
    Are there any methods to place static content on the main region when flow content is also placed there? Or: is there a way to define the divider that divides a two-column layout?

    Read the article

  • WPF deployment strategy dilemma: ClickOnce (limited customization) + auto-update vs. installer (unlimited customization)

    - by black sensei
    Hello Experts!!! I've been facing a deployment problem. I've built a WPF application with Visual Studio 2008 and created an installer (MSI), which works fine, but it's a pain to add automatic updates to it. I've seen an article at windowsclient.net which seems to be pretty old, but could have been the perfect thing for me. Then I looked at the .NET Application Updater Block v2.0, which uses Enterprise Library June 2005, and for some reason it's not installing on my machine. I thought I would need a more recent Enterprise Library, so I installed and compiled Enterprise Library 4.1 (October 2008), but nothing better happened. So I decided to give ClickOnce deployment a try. After struggling with it, it was almost perfect. I realized that when I was testing the updates provided by ClickOnce on my machine, which is XP, I didn't notice the need to have the SQLite DLL in the GAC; surely it was already there. I noticed the problem when I moved to Vista. After checking the net, I know it's impossible to add a DLL to the Global Assembly Cache this way. Now I'm stuck; I think I've hit a wall. Can anyone share some of their experience? I'm willing to try the Updater Block if I can get help. Thanks for reading this!

    Read the article

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central Dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes out the hash to a new "side car" file for later verification on "queue2". I would like to open multiple files at the same time, but based on some logic that doesn't "choke" the OS by having hundreds of files open and exceeding the hard drive's sustainable output. Photo browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA

    Read the article

  • SSIS Script Component Testing Strategy

    - by Paul Kohler
    This question is about the Script Component specifically; I am aware of ssisUnit etc. With simple SSIS Script Components, it's sufficient to let basic testing flesh out issues; however, I am working with a script that has grown in complexity over time. To better test the functionality I am considering abstracting the script logic into a DLL that gets deployed with the package, and then using that custom assembly from the script. The advantage is that the functionality will be more testable, but it's one more deployment artefact that needs to be managed. My question is: does anyone know of a better way to test such an SSIS script in a more isolated manner than running the whole package and examining the output?
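
    One lighter-weight variation on the DLL idea is to keep the logic in a plain class that the script's row handler simply delegates to; the class can then be unit-tested on its own without running the package. A rough sketch, with all names (FareCalculator, the surcharge rule) purely illustrative:

        using System;

        // Pure logic, no SSIS types: easy to cover with ordinary unit tests.
        public class FareCalculator
        {
            public decimal Calculate(decimal baseFare, bool isPeak)
            {
                if (baseFare < 0) throw new ArgumentOutOfRangeException("baseFare");
                return isPeak ? baseFare * 1.25m : baseFare;
            }
        }

        // Inside the SSIS script component the row handler would just forward values,
        // something like:
        //
        //   public override void Input0_ProcessInputRow(Input0Buffer Row)
        //   {
        //       Row.Fare = new FareCalculator().Calculate(Row.BaseFare, Row.IsPeak);
        //   }

        public static class FareCalculatorTests
        {
            public static void Run()
            {
                var calc = new FareCalculator();
                if (calc.Calculate(100m, false) != 100m) throw new Exception("off-peak fare changed");
                if (calc.Calculate(100m, true) != 125m) throw new Exception("peak surcharge not applied");
                Console.WriteLine("FareCalculator tests passed");
            }
        }

    Whether the class lives in a separate deployed DLL or is pasted into the script project, the testable surface is the same; the DLL only adds the deployment artefact you mention.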

    Read the article

  • curl not returning Content-Length header

    - by Michael P. Shipley
    Trying to get image file size using curl but content length header is not returned:
    $url = "http://www.collegefashion.net/wp-content/plugins/feed-comments-number/image.php?1263";
    $fp = curl_init();
    curl_setopt($fp, CURLOPT_NOBODY, true);
    curl_setopt($fp, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($fp, CURLOPT_FAILONERROR, 1);
    curl_setopt($fp, CURLOPT_REFERER, '');
    curl_setopt($fp, CURLOPT_URL, $url);
    curl_setopt($fp, CURLOPT_HEADER, 1);
    curl_setopt($fp, CURLOPT_USERAGENT, 'Mozilla/5.0');
    $body = curl_exec($fp);
    var_dump($body):
    HTTP/1.1 200 OK
    Date: Sun, 02 May 2010 02:50:20 GMT
    Server: Apache/2.0.63 (CentOS)
    X-Powered-By: W3 Total Cache/0.8.5.2
    X-Pingback: http://www.collegefashion.net/xmlrpc.php
    Cache-Control: no-cache, must-revalidate
    Expires: Sat, 26 Jul 1997 05:00:00 GMT
    Content-Type: image/png
    It works via ssh though:
    curl -i http://www.collegefashion.net/wp-content/plugins/feed-comments-number/image.php?1263
    HTTP/1.1 200 OK
    Date: Sun, 02 May 2010 03:38:43 GMT
    Server: Apache/2.0.63 (CentOS)
    X-Powered-By: W3 Total Cache/0.8.5.2
    X-Pingback: http://www.collegefashion.net/xmlrpc.php
    Cache-Control: no-cache, must-revalidate
    Expires: Sat, 26 Jul 1997 05:00:00 GMT
    Content-Length: 347
    Content-Type: image/png

    Read the article

  • Wordpress page content not showing up

    - by Arlen Beiler
    I am using the latest nightly build, WordPress 3.0-beta2-14729, and the Twenty Ten theme. The content is not showing up where indicated below, but when I go to the edit window it is all there. Does anyone know what the problem is? The content in the edit box is ~400KB of text.
    <div id="post-125" class="post-125 page type-page hentry">
      <h1 class="entry-title">Post title</h1>
      <div class="entry-content">
        // There should be something here
        <span class="edit-link"><a class="post-edit-link" href="http://abp.bhc.com/wp-books-beta/aig/wp-admin/post.php?post=125&amp;action=edit" title="Edit Post">Edit</a></span>
      </div><!-- .entry-content -->
    </div><!-- #post-125 -->
    The PHP code is:
    <div class="entry-content">
      <?php the_content( __( 'Continue reading <span class="meta-nav">&rarr;</span>', 'twentyten' ) ); ?>
      <?php wp_link_pages( array( 'before' => '<div class="page-link">' . __( 'Pages:', 'twentyten' ), 'after' => '</div>' ) ); ?>
    </div><!-- .entry-content -->

    Read the article

  • Strategy for developing namespaced and non-namespaced versions of same PHP code

    - by porneL
    I'm maintaining a library written for PHP 5.2 and I'd like to create a PHP 5.3 namespaced version of it. However, I'd also like to keep the non-namespaced version up to date until PHP 5.3 becomes so old that even Debian stable ships it ;) I've got rather clean code: about 80 classes following a Project_Directory_Filename naming scheme (which I'd change to \Project\Directory\Filename, of course) and only a few functions and constants (also prefixed with the project name). The question is: what's the best way to develop the namespaced and non-namespaced versions in parallel? Should I just create a fork in the repository and keep merging changes between branches? Are there cases where backslash-sprinkled code becomes hard to merge? Should I write a script that converts the 5.2 version to 5.3, or vice versa? Should I use the PHP tokenizer? sed? The C preprocessor? Is there a better way to use namespaces where available and keep backwards compatibility with older PHP?

    Read the article

  • HFT strategy coding on hardware

    - by bsobaid
    Hardware acceleration and embedded programming have mostly been used so far to parse data feeds and/or to route orders to exchanges. Have there been attempts to implement simpler HFT strategies, such as equity market-making, in hardware? Have they been successful? Which companies are doing this, and what kind of programming model is used?

    Read the article

  • Using protocol buffers for a comprehensive data strategy for Windows Mobile devices

    - by Steve
    I have started reading some of the posts related to protocol buffers. The serialization method seems very appropriate for the transfer of data to and from web servers. Has anyone considered using a method like this to save and retrieve data on the mobile device itself (i.e. as a replacement for a traditional database/ORM layer)? Where would the data be persisted? How would the data be queried? Would it make sense to store the data in a traditional database (SQL CE or SQLite) with a few "searchable" columns and then one column for the serialized data? Thoughts? Am I out on a limb here? Thank you!
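
    A minimal sketch of the hybrid you describe, using protobuf-net to serialize a record into a blob while a couple of searchable columns sit alongside it in the table (the Customer type and column layout are assumptions for illustration; protobuf-net has historically shipped Compact Framework builds, but that is worth verifying for the target device):

        using System.IO;
        using ProtoBuf;

        [ProtoContract]
        public class Customer
        {
            [ProtoMember(1)] public int Id { get; set; }
            [ProtoMember(2)] public string Name { get; set; }
            [ProtoMember(3)] public string Notes { get; set; }
        }

        public static class CustomerBlob
        {
            // Serialize to a byte[] suitable for a varbinary/image column; the table
            // would also carry plain Id and Name columns so common lookups stay queryable.
            public static byte[] ToBytes(Customer c)
            {
                using (var ms = new MemoryStream())
                {
                    Serializer.Serialize(ms, c);
                    return ms.ToArray();
                }
            }

            public static Customer FromBytes(byte[] data)
            {
                using (var ms = new MemoryStream(data))
                {
                    return Serializer.Deserialize<Customer>(ms);
                }
            }
        }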

    Read the article

  • Strategy in storing ad-hoc numbers/constants?

    - by Jiho Han
    I need to store a number of ad-hoc figures and constants for calculation. These numbers change periodically, but they are different types of values: one might be a balance or a money amount, another might be an interest rate, and yet another might be a ratio of some kind. These numbers are then used in a calculation that involves other, more structured figures. I'm not certain what the best way to store these in a relational DB is (that's the choice of storage for the app). One way, which I've done before, is to create a very generic table that stores the values as text. I might store the data type along with it, but the consumer knows what type it is, so in some situations I didn't even need to store the data type. This kind of works, but I am not very fond of the solution. Should I break the numbers down into specific categories and create tables that way? For example, create a Rates table, a Balances table, etc.?
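
    If you stay with the generic table, one way to keep it tolerable is to push all the casting behind a small typed accessor, so the text storage never leaks into the calculation code. A sketch, assuming a hypothetical Constants table with Name, Value and DataType columns (here the values are loaded into memory for illustration):

        using System;
        using System.Collections.Generic;
        using System.Globalization;

        public class ConstantStore
        {
            // In the real app this dictionary would be loaded from the Constants table.
            private readonly Dictionary<string, string> _raw =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

            public void Set(string name, string value) { _raw[name] = value; }

            public decimal GetMoney(string name) { return decimal.Parse(_raw[name], CultureInfo.InvariantCulture); }
            public decimal GetRate(string name)  { return decimal.Parse(_raw[name], CultureInfo.InvariantCulture); }
            public int     GetCount(string name) { return int.Parse(_raw[name], CultureInfo.InvariantCulture); }
        }

        public static class Example
        {
            public static void Main()
            {
                var store = new ConstantStore();
                store.Set("BaseRate", "0.0425");
                store.Set("OpeningBalance", "1500.00");

                // Calculation code only ever sees typed values.
                decimal interest = store.GetMoney("OpeningBalance") * store.GetRate("BaseRate");
                Console.WriteLine(interest);   // 63.75
            }
        }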

    Read the article

  • Partial Git deployment strategy?

    - by MatW
    I need to set up a Kohana dev environment that allows me to make full use of shared module/system classes across separate applications, each application typically belonging to a different client. I use Git for source control, but am struggling to come up with a clean deployment method that will allow me to pull only those parts of the dev environment specific to a client/app down into that client's production environment (assuming that the client's production environment will have Git installed).
    Dev environment:
    - kohana
      - applications
        - clientapp1
        - clientapp2
      - modules
      - public_html
        - clientapp1
        - clientapp2
      - system
        - 3.0.1
        - 3.0.5
    Client 1's production environment:
    - /
      - applications
        - clientapp1
      - modules
      - public_html
        - client_app1
      - system
        - 3.0.5
    Naturally, I want to have total control over each client "sub repo" as if it were an independent repo (in terms of gitignore, etc.). I have seen topics that cover Git's sparse checkout feature, but it seems like it may cause a few problems down the line from a maintenance point of view, and I don't like the idea of the entire repo's metadata existing in the client's production environment repo. As you can probably tell, I'm not exactly a Git power user, so any suggestions/wisdom are very welcome!

    Read the article

  • Browser loading strategy, <head>...<body>

    - by Bin Chen
    I am inspecting some interesting behaviors of browsers; I don't know whether this is in a standard or not. If I put everything inside <head></head>, the browser will only begin to render the page after all the resources in the head have been retrieved. So I am thinking that putting as little as possible into the head is one of the important website optimization techniques. Is that right? My question is: if I put a script/CSS in the body or other parts of the HTML, how can I know that the script has been loaded successfully, so that I will not be calling an undefined function?

    Read the article

  • Generic version control strategy for select table data within a heavily normalized database

    - by leppie
    Hi. Sorry for the long-winded title, but the requirement/problem is rather specific. With reference to the following sample (but very simplified) structure in pseudo SQL, I hope to explain it a bit better.
    TABLE StructureName {
      Id GUID PK,
      Name varchar(50) NOT NULL
    }
    TABLE Structure {
      Id GUID PK,
      ParentId GUID (FK to Structure),
      NameId GUID (FK to StructureName) NOT NULL
    }
    TABLE Something {
      Id GUID PK,
      RootStructureId GUID (FK to Structure) NOT NULL
    }
    As one can see, Structure is a simple tree structure (I'm not worried about the ordering of children for this problem). StructureName is a simplification of a translation system. Finally, Something is simply something referencing the tree's root structure. This is just one of many tables that need to be versioned, but it serves as a good example for most cases. There is a requirement to version any changes to the name and/or the tree 'layout' of the Structure table. Previous versions should always be available. There seem to be a few possibilities for tackling this issue, like copying the entire structure, but most approaches cause one to 'lose' referential integrity. For example, if one followed this approach, one would have to make a duplicate of the 'Something' record, given that the root structure will be a new record and have a new ID. Other possible avenues are looking into how wikis handle this, or going a lot further and looking at how proper version control systems work. Currently, I feel a bit clueless about how to proceed in a generic way. Any ideas will be greatly appreciated. Thanks, leppie

    Read the article

  • Strategy for developing a multi function asp.net web application

    - by user247023
    I'm about to start a new project and want some advice on how to implement it. I need a web application which contains a booking module for reserving timeslots, and a time management module which will enable employees to clock in/clock out. If I am writing an update to the time management module, I don't want to disrupt the booking engine's availability by releasing a new solution containing both modules. To make things more difficult, there is some shared functionality like common users, roles and security. Here's a suggestion I've gotten, which sounds a bit cruddy but may be functional: write a 'container' web application which consists of basically a frame plus authentication/security features, and which has links that load the two independently built and released web applications into the frame. I can see that if, say, I wanted to update the time management module, I would only need to build and release it separately, and the rest of the solution would be 'untouched'. Any better alternatives?

    Read the article

  • Mercurial merge strategy per file type

    - by dls
    All: I want to use kdiff3 to merge all files with a certain suffix (say *.c, *.h), and I want to do two things (turn off premerge and use internal:other) for all files with another suffix (say *.mdl). The purpose of this is to allow me to employ a type of 'clobber merge' for a specific file type (i.e. unmergeable files like configurations, auto-generated C, models, etc.). In my .hgrc I've tried:
    [merge-tools]
    kdiff3 =
    clobbermerge = internal:other
    clobbermerge.premerge = False
    [merge-patterns]
    **.c = kdiff3
    **.h = kdiff3
    **.mdl = clobbermerge
    but it still triggers kdiff3 for all files. Thoughts? An extension of this would be to perform a 'clobber merge' on a directory, but once the syntax is clear for a file suffix, the directory case should be easy.

    Read the article

  • What database strategy to choose for a large web application

    - by Snoopy
    I have to rewrite a large database application running on 32 servers. The hardware is up to date; each machine has two quad-core Xeons and 32 GB of RAM. The database is multi-tenant; each customer has his own file, around 5 to 10 GB each, and I run around 50 databases on this hardware. The app is open to the web, so I have no control over the load. There are no really complex queries, so SQL is not required if there is a better solution. The databases get updated via FTP every day at midnight, and the database is read-only. C# is my favourite language and I want to use ASP.NET MVC. I thought about the following options:
    - Use two big SQL servers running SQL Server 2012 to serve the 32 servers with data, with the 32 servers running IIS and providing REST services.
    - Denormalize the database and use Redis on each web server, with Booksleeve as the Redis client.
    - Use a combination of SQL Server and Redis.
    - Use SQL Server 2012 together with Hadoop.
    - Use Hadoop without SQL Server.
    What is the best way, for a read-only database, to get the best performance without losing maintainability? Does map-reduce make sense at all in such a scenario? The reason for the rewrite is that the old app, written in C++ with ISAM technology, is too slow, and its interfaces are old-fashioned and not nice to use from a website, especially when using Ajax. The app uses a relational data model with many tables, but it is possible to write one accelerator table on which all queries can be performed, with all other information from the other tables reachable by a simple key lookup.

    Read the article

  • evaluation strategy examples

    - by Boontz
    Assuming the language supports these evaluation strategies, what would be the result for call by reference, call by name, and call by value?
    void swap(int a, int b)
    {
        int temp;
        temp = a;
        a = b;
        b = temp;
    }
    int i = 3;
    int A[5];
    A[3] = 4;
    swap(i, A[3]);

    Read the article

  • Dynamically resizing navigation div to main content

    - by theZod
    Greetings and hello. I am trying to put together a WordPress site. Because the content in the main div is going to be a different height on every page, I need the navigation sidebar to stretch to the same height. So with a little JavaScript tom-foolery I can get the sidebar to be the same height with the following code:
    function adjust() {
        hgt = document.getElementById('content').offsetHeight;
        document.getElementById('sidebar').style.height = hgt + 'px';
    }
    window.onload = adjust;
    window.onresize = adjust;
    Now that's all good for a long page, but if the content is smaller than the sidebar, stuff gets all messy. So I have tried an if statement like so:
    function adjust() {
        if (document.getElementById('content').style.height < document.getElementById('sidebar').style.height){
            hgt = document.getElementById('content').offsetHeight;
            document.getElementById('sidebar').style.height = hgt + 'px';
        else
            hgt = document.getElementById('sidebar').offsetHeight;
            document.getElementById('content').style.height = hgt + 'px';
        }
    }
    window.onload = adjust;
    window.onresize = adjust;
    But that just doesn't do anything, so any ideas what's going on?

    Read the article
