Search Results

Search found 249 results on 10 pages for 'tpl'.


  • Include HTML file into Smarty .tpl file.

    - by Leonarth
    {if $loggedin}
      {literal} {include file="allhead.html"} {/literal}
    {else}
      {literal} {include file="allhead1.html"} {/literal}
    {/if}
    How do I include the code contained in an HTML file in a Smarty .tpl file? I've tried different solutions from various forums, but none work.

    Read the article

  • How to handle all unhandled exceptions when using Task Parallel Library?

    - by Buu Nguyen
    I'm using the TPL (Task Parallel Library) in .NET 4.0. I want to be able to centralize the handling logic for all unhandled exceptions by using the Thread.GetDomain().UnhandledException event. However, in my application, the event is never fired for threads started with TPL code, e.g. Task.Factory.StartNew(...). The event is indeed fired if I use something like new Thread(threadStart).Start(). This MSDN article suggests using Task.Wait() to catch the AggregateException when working with the TPL, but that is not what I want because it is not a "centralized" enough mechanism. Does anyone else experience the same problem, or is it just me? Do you have any solution for this?
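
    For context, a possible more centralized hook (a minimal sketch, not from the original post; the logging helper is hypothetical) is TaskScheduler.UnobservedTaskException, which is raised when a faulted Task is finalized without its exception ever having been observed; note that it fires on finalizer timing, not immediately:
      using System;
      using System.Threading.Tasks;

      static class TaskExceptionLogging
      {
          public static void Install()
          {
              // Raised (during finalization) for task exceptions that were never
              // observed via Wait(), Result, or an exception-handling continuation.
              TaskScheduler.UnobservedTaskException += (sender, e) =>
              {
                  LogAggregateException(e.Exception);   // hypothetical logging helper
                  e.SetObserved();                      // in .NET 4.0 an unobserved exception would otherwise escalate and kill the process
              };
          }

          static void LogAggregateException(AggregateException ex)
          {
              foreach (var inner in ex.Flatten().InnerExceptions)
                  Console.Error.WriteLine(inner);
          }
      }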

    Read the article

  • Making a clean page via page.tpl.php

    - by user360051
    I have a Drupal module creating a page via hook_menu(). I am trying to make it so the page has no extraneous HTML output, only what I want. You can view the page here: http://www.thomashansen.me/chat/thomas. If you look at the source, you can see a strange script tag at the end. My page-chat.tpl.php looks like this:
      <?php // $Id$ ?>
      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="<?php print $language->language ?>" lang="<?php print $language->language ?>" dir="<?php print $language->dir ?>">
      <head>
      </head>
      <body>
        <?php print $content; ?>
      </body>
      </html>
    Where is that script tag coming from, and how do I get rid of it? If you need more information, just ask.

    Read the article

  • Does my TPL partitioner cause a deadlock?

    - by Scott Chamberlain
    I am starting to write my first parallel applications. This partitioner will enumerate over an IDataReader, pulling chunkSize records at a time from the data source.
      protected class DataSourcePartitioner<object[]> : System.Collections.Concurrent.Partitioner<object[]>
      {
          private readonly System.Data.IDataReader _Input;
          private readonly int _ChunkSize;

          public DataSourcePartitioner(System.Data.IDataReader input, int chunkSize = 10000) : base()
          {
              if (chunkSize < 1)
                  throw new ArgumentOutOfRangeException("chunkSize");
              _Input = input;
              _ChunkSize = chunkSize;
          }

          public override bool SupportsDynamicPartitions { get { return true; } }

          public override IList<IEnumerator<object[]>> GetPartitions(int partitionCount)
          {
              var dynamicPartitions = GetDynamicPartitions();
              var partitions = new IEnumerator<object[]>[partitionCount];
              for (int i = 0; i < partitionCount; i++)
              {
                  partitions[i] = dynamicPartitions.GetEnumerator();
              }
              return partitions;
          }

          public override IEnumerable<object[]> GetDynamicPartitions()
          {
              return new ListDynamicPartitions(_Input, _ChunkSize);
          }

          private class ListDynamicPartitions : IEnumerable<object[]>
          {
              private System.Data.IDataReader _Input;
              int _ChunkSize;
              private object _ChunkLock = new object();

              public ListDynamicPartitions(System.Data.IDataReader input, int chunkSize)
              {
                  _Input = input;
                  _ChunkSize = chunkSize;
              }

              public IEnumerator<object[]> GetEnumerator()
              {
                  while (true)
                  {
                      List<object[]> chunk = new List<object[]>(_ChunkSize);
                      lock(_Input)
                      {
                          for (int i = 0; i < _ChunkSize; ++i)
                          {
                              if (!_Input.Read())
                                  break;
                              var values = new object[_Input.FieldCount];
                              _Input.GetValues(values);
                              chunk.Add(values);
                          }
                          if (chunk.Count == 0)
                              yield break;
                      }
                      var chunkEnumerator = chunk.GetEnumerator();
                      lock(_ChunkLock) //Will this cause a deadlock?
                      {
                          while (chunkEnumerator.MoveNext())
                          {
                              yield return chunkEnumerator.Current;
                          }
                      }
                  }
              }

              IEnumerator IEnumerable.GetEnumerator()
              {
                  return ((IEnumerable<object[]>)this).GetEnumerator();
              }
          }
      }
    I wanted the IEnumerable object it passes back to be thread safe (the .NET example was, so I am assuming PLINQ and the TPL could need it). Will the lock on _ChunkLock near the bottom help provide thread safety, or will it cause a deadlock? From the documentation I could not tell whether the lock would be released on the yield return. Also, if there is built-in functionality in .NET that will do what I am trying to do, I would much rather use that. And if you find any other problems with the code, I would appreciate it.
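
    On the "built-in functionality" part of the question, a minimal sketch of one possible alternative (assuming per-row reads from the shared reader are acceptable, which trades away the 10,000-row chunking above) is to wrap the reader in a plain iterator and let Partitioner.Create supply the chunking and load balancing for PLINQ or Parallel.ForEach:
      using System.Collections.Concurrent;
      using System.Collections.Generic;
      using System.Data;

      static class DataReaderPartitioning
      {
          // Stream rows out of a single IDataReader; the reader is only touched
          // inside the lock, one row at a time, and the yield happens outside it.
          static IEnumerable<object[]> EnumerateRows(IDataReader reader, object readerLock)
          {
              while (true)
              {
                  object[] values;
                  lock (readerLock)
                  {
                      if (!reader.Read()) yield break;
                      values = new object[reader.FieldCount];
                      reader.GetValues(values);
                  }
                  yield return values;
              }
          }

          public static Partitioner<object[]> Create(IDataReader reader)
          {
              // Partitioner.Create over an IEnumerable produces a dynamic partitioner
              // that PLINQ and Parallel.ForEach can consume directly.
              return Partitioner.Create(EnumerateRows(reader, new object()));
          }
      }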

    Read the article

  • Overflow exception while performing parallel factorization using the .NET Task Parallel Library (TPL)

    - by Aviad P.
    Hello, I'm trying to write a not so smart factorization program and trying to do it in parallel using TPL. However, after about 15 minutes of running on a core 2 duo machine, I am getting an aggregate exception with an overflow exception inside it. All the entries in the stack trace are part of the .NET framework, the overflow does not come from my code. Any help would be appreciated in figuring out why this happens. Here's the commented code, hopefully it's simple enough to understand: class Program { static List<Tuple<BigInteger, int>> factors = new List<Tuple<BigInteger, int>>(); static void Main(string[] args) { BigInteger theNumber = BigInteger.Parse( "653872562986528347561038675107510176501827650178351386656875178" + "568165317809518359617865178659815012571026531984659218451608845" + "719856107834513527"); Stopwatch sw = new Stopwatch(); bool isComposite = false; sw.Start(); do { /* Print out the number we are currently working on. */ Console.WriteLine(theNumber); /* Find a factor, stop when at least one is found (using the Any operator). */ isComposite = Range(theNumber) .AsParallel() .Any(x => CheckAndStoreFactor(theNumber, x)); /* Of the factors found, take the one with the lowest base. */ var factor = factors.OrderBy(x => x.Item1).First(); Console.WriteLine(factor); /* Divide the number by the factor. */ theNumber = BigInteger.Divide( theNumber, BigInteger.Pow(factor.Item1, factor.Item2)); /* Clear the discovered factors cache, and keep looking. */ factors.Clear(); } while (isComposite); sw.Stop(); Console.WriteLine(isComposite + " " + sw.Elapsed); } static IEnumerable<BigInteger> Range(BigInteger squareOfTarget) { BigInteger two = BigInteger.Parse("2"); BigInteger element = BigInteger.Parse("3"); while (element * element < squareOfTarget) { yield return element; element = BigInteger.Add(element, two); } } static bool CheckAndStoreFactor(BigInteger candidate, BigInteger factor) { BigInteger remainder, dividend = candidate; int exponent = 0; do { dividend = BigInteger.DivRem(dividend, factor, out remainder); if (remainder.IsZero) { exponent++; } } while (remainder.IsZero); if (exponent > 0) { lock (factors) { factors.Add(Tuple.Create(factor, exponent)); } } return exponent > 0; } } Here's the exception thrown: Unhandled Exception: System.AggregateException: One or more errors occurred. --- > System.OverflowException: Arithmetic operation resulted in an overflow. 
at System.Linq.Parallel.PartitionedDataSource`1.ContiguousChunkLazyEnumerator.MoveNext(T& currentElement, Int32& currentKey) at System.Linq.Parallel.AnyAllSearchOperator`1.AnyAllSearchOperatorEnumerator`1.MoveNext(Boolean& currentElement, Int32& currentKey) at System.Linq.Parallel.StopAndGoSpoolingTask`2.SpoolingWork() at System.Linq.Parallel.SpoolingTaskBase.Work() at System.Linq.Parallel.QueryTask.BaseWork(Object unused) at System.Linq.Parallel.QueryTask.<.cctor>b__0(Object o) at System.Threading.Tasks.Task.InnerInvoke() at System.Threading.Tasks.Task.Execute() --- End of inner exception stack trace --- at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose) at System.Linq.Parallel.SpoolingTask.SpoolStopAndGo[TInputOutput,TIgnoreKey](QueryTaskGroupState groupState, PartitionedStream`2 partitions, SynchronousChannel`1[] channels, TaskScheduler taskScheduler) at System.Linq.Parallel.DefaultMergeHelper`2.System.Linq.Parallel.IMergeHelper<TInputOutput>.Execute() at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId) at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream) at System.Linq.Parallel.AnyAllSearchOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, BooleanpreferStriping, QuerySettings settings) at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream) at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient) at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient) at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings) at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery() at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext() at System.Linq.Parallel.AnyAllSearchOperator`1.Aggregate() at System.Linq.ParallelEnumerable.Any[TSource](ParallelQuery`1 source, Func`2 predicate) at PFact.Program.Main(String[] args) in d:\myprojects\PFact\PFact\Program.cs:line 34 Any help would be appreciated. Thanks!
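
    For context, the Int32& currentKey in the trace suggests PLINQ tracks element indices as 32-bit integers, and the Range() sequence here can yield far more than int.MaxValue candidates before element * element reaches the target, which would overflow that index. A hedged workaround sketch (not the original author's code; it assumes System.Linq and System.Numerics, and CheckAndStoreFactor is the method from the post) is to feed PLINQ bounded batches of candidates:
      // Hypothetical helper: no single PLINQ query ever enumerates more than
      // batchSize elements, so its internal Int32 element index cannot overflow.
      static bool AnyFactorInBatches(BigInteger theNumber)
      {
          const int batchSize = 1 << 20;                 // ~1M odd candidates per query
          BigInteger start = 3;
          while (start * start < theNumber)
          {
              BigInteger s = start;                      // local copy for the lambda
              bool found = Enumerable.Range(0, batchSize)
                  .AsParallel()
                  .Any(i => CheckAndStoreFactor(theNumber, s + 2 * i));
              if (found) return true;
              start += 2 * batchSize;
          }
          return false;                                  // the tail batch may test a few candidates past sqrt, which is harmless
      }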

    Read the article

  • Suggestions for doing async I/O with Task Parallel Library

    - by anelson
    I have some high performance file transfer code which I wrote in C# using the Async Programming Model (APM) idiom (eg, BeginRead/EndRead). This code reads a file from a local disk and writes it to a socket. For best performance on modern hardware, it's important to keep more than one outstanding I/O operation in flight whenever possible. Thus, I post several BeginRead operations on the file, then when one completes, I call a BeginSend on the socket, and when that completes I do another BeginRead on the file. The details are a bit more complicated than that but at the high level that's the idea. I've got the APM-based code working, but it's very hard to follow and probably has subtle concurrency bugs. I'd love to use TPL for this instead. I figured Task.Factory.FromAsync would just about do it, but there's a catch. All of the I/O samples I've seen (most particularly the StreamExtensions class in the Parallel Extensions Extras) assume one read followed by one write. This won't perform the way I need. I can't use something simple like Parallel.ForEach or the Extras extension Task.Factory.Iterate because the async I/O tasks don't spend much time on a worker thread, so Parallel just starts up another task, resulting in potentially dozens or hundreds of pending I/O operations; way too much! You can work around that by Waiting on your tasks, but that causes creation of an event handle (a kernel object), and a blocking wait on a task wait handle, which ties up a worker thread. My APM-based implementation avoids both of those things. I've been playing around with different ways to keep multiple read/write operations in flight, and I've managed to do so using continuations that call a method that creates another task, but it feels awkward, and definitely doesn't feel like idiomatic TPL. Has anyone else grappled with an issue like this with the TPL? Any suggestions?
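
    For what it's worth, here is a rough sketch (not the original code; the names are made up and the usual System.IO, System.Net.Sockets, and System.Threading.Tasks usings are assumed) of mapping the APM pairs onto tasks with Task.Factory.FromAsync and chaining continuations so that no worker thread blocks. It keeps one read and one send in flight per chain; running a few chains over separate buffers and file regions would approximate the multiple outstanding operations described above, with the same bookkeeping burden:
      static Task CopyFileToSocketAsync(FileStream file, NetworkStream net, byte[] buffer)
      {
          var tcs = new TaskCompletionSource<object>();
          Action step = null;
          step = () =>
          {
              // Wrap BeginRead/EndRead as a Task<int>.
              Task<int> read = Task<int>.Factory.FromAsync(
                  file.BeginRead, file.EndRead, buffer, 0, buffer.Length, null);
              read.ContinueWith(r =>
              {
                  if (r.IsFaulted) { tcs.TrySetException(r.Exception.InnerExceptions); return; }
                  if (r.Result == 0) { tcs.TrySetResult(null); return; }   // end of file

                  // Wrap BeginWrite/EndWrite, then loop by issuing the next read.
                  Task send = Task.Factory.FromAsync(
                      net.BeginWrite, net.EndWrite, buffer, 0, r.Result, null);
                  send.ContinueWith(w =>
                  {
                      if (w.IsFaulted) tcs.TrySetException(w.Exception.InnerExceptions);
                      else step();
                  });
              });
          };
          step();
          return tcs.Task;
      }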

    Read the article

  • How does Task Parallel Library scale on a terminal server or in a web application?

    - by Lasse V. Karlsen
    I understand that the TPL uses work-stealing queues for its tasks when I execute things like Parallel.For and similar constructs. If I understand this correctly, the construct will spin up a number of tasks, where each will start processing items. If one of the tasks completes its allotted items, it will start stealing items from the other tasks which haven't yet completed theirs. This solves the problem where items 1-100 are cheap to process and items 101-200 are costly, and one of the two tasks would just sit idle until the other completed. (I know this is a simplified explanation.) However, how will this scale on a terminal server or in a web application (assuming we use the TPL in code that would run in the web app)? Can we risk saturating the CPUs with tasks just because there are N instances of our application running side by side? Is there any information on this topic that I should read? I've yet to find anything in particular, but that doesn't mean there is none.
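
    Not an answer to the scheduling internals, but for the saturation concern the one knob worth knowing about is MaxDegreeOfParallelism, which caps how many cores any single Parallel.For/ForEach call will use, so N side-by-side instances cannot each fan out to every CPU; PLINQ has the equivalent WithDegreeOfParallelism operator. A small hedged sketch (items and Process are hypothetical placeholders):
      var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
      Parallel.For(0, items.Count, options, i =>
      {
          Process(items[i]);   // hypothetical per-item work
      });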

    Read the article

  • How can I assign a name to a task in TPL

    - by mehrandvd
    I'm going to use lots of tasks running in my application. Each bunch of tasks is running for some reason. I would like to name these tasks so that when I watch the Parallel Tasks window, I can recognize them easily. From another point of view, consider that I'm using tasks at the framework level to populate a list. A developer that uses my framework is also using tasks for her job. If she looks at the Parallel Tasks window she will find some tasks she has no idea about. I want to name tasks so she can distinguish the framework tasks from her tasks. It would be very convenient if there were such an API:
      var task = new Task(action, "Growth calculation task")
    or maybe:
      var task = Task.Factory.StartNew(action, "Populating the datagrid")
    or even while working with Parallel.ForEach:
      Parallel.ForEach(list, action, "Salary Calculation Task")
    Is it possible to name a task? Is it possible to give Parallel.ForEach a naming structure (maybe using a lambda) so it creates tasks with that naming? Is there such an API somewhere that I'm missing? I've also tried to use an inherited task to override its ToString(). But unfortunately the Parallel Tasks window doesn't use ToString()!
      class NamedTask : Task
      {
          private string TaskName { get; set; }
          public NamedTask(Action action, string taskName) : base(action)
          {
              TaskName = taskName;
          }
          public override string ToString()
          {
              return TaskName;
          }
      }
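
    One possible workaround (a sketch with a hypothetical helper and a hypothetical CalculateGrowth method; whether the Parallel Tasks window surfaces it depends on the debugger version) is to smuggle the name in as the task's AsyncState, which the debugger can at least inspect even though ToString() is ignored:
      static Task StartNamed(string name, Action action)
      {
          // The state object passed here is exposed afterwards as Task.AsyncState.
          return Task.Factory.StartNew(_ => action(), name);
      }

      var task = StartNamed("Growth calculation task", () => CalculateGrowth());
      Console.WriteLine(task.AsyncState);   // "Growth calculation task"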

    Read the article

  • TPL - Using static method vs struct method

    - by Sunit
    I have about 1500 files on a share for which I need to collect a FileVersionInfo string. So I created a static method in my gateway like this:
      private static string GetVersionInfo(string filepath)
      {
          FileVersionInfo verInfo = FileVersionInfo.GetVersionInfo(filepath);
          return string.Format("{0}.{1}.{2}.{3}",
              verInfo.ProductMajorPart, verInfo.ProductMinorPart,
              verInfo.ProductBuildPart, verInfo.ProductPrivatePart).Trim();
      }
    And then used the FilenameAndVersion struct in a PLINQ call with WithDegreeOfParallelism, as this is I/O related:
      resultList = dllFilesRows.AsParallel().WithDegreeOfParallelism(20)
          .Select(r =>
          {
              var symbolPath = r.Filename;
              return new FilenameAndVersion { Filename = symbolPath, Version = GetVersionInfo(symbolPath) };
          })
          .ToArray();
    Later I modified the struct, FilenameAndVersion, as:
      private struct FilenameAndVersion
      {
          private string _version, _filename;
          public string Version { get { return _version; } }
          public string Filename { get { return _filename; } }

          public void SetVersion()
          {
              FileVersionInfo verInfo = FileVersionInfo.GetVersionInfo(this.Filename);
              this._version = string.Format("{0}.{1}.{2}.{3}",
                  verInfo.ProductMajorPart, verInfo.ProductMinorPart,
                  verInfo.ProductBuildPart, verInfo.ProductPrivatePart).Trim();
          }

          public FilenameAndVersion(string filename, string version)
          {
              this._filename = filename;
              this._version = string.Empty;
              SetVersion();
          }
      }
    And used it:
      resultList = dllFilesRows.AsParallel().WithDegreeOfParallelism(20)
          .Select(r =>
          {
              var symbolPath = r.Filename;
              return new FilenameAndVersion(symbolPath, String.Empty);
          })
          .ToArray();
    The question is, is this going to help me in any way, and is it a good pattern to use? Sunit

    Read the article

  • How (and if) to write a single-consumer queue using the task parallel library?

    - by Eric
    I've heard a bunch of podcasts recently about the TPL in .NET 4.0. Most of them describe background activities like downloading images or doing a computation, using tasks so that the work doesn't interfere with a GUI thread. Most of the code I work on has more of a multiple-producer / single-consumer flavor, where work items from multiple sources must be queued and then processed in order. One example would be logging, where log lines from multiple threads are sequentialized into a single queue for eventual writing to a file or database. All the records from any single source must remain in order, and records from the same moment in time should be "close" to each other in the eventual output. So multiple threads or tasks or whatever are all invoking a queuer:
      lock( _queue ) // or use a lock-free queue!
      {
          _queue.enqueue( some_work );
          _queueSemaphore.Release();
      }
    And a dedicated worker thread processes the queue:
      while( _queueSemaphore.WaitOne() )
      {
          lock( _queue )
          {
              some_work = _queue.dequeue();
          }
          deal_with( some_work );
      }
    It's always seemed reasonable to dedicate a worker thread for the consumer side of these tasks. Should I write future programs using some construct from the TPL instead? Which one? Why?
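
    For reference, one common shape for this in .NET 4.0 (a minimal sketch, not from the post; the WriteLine sink stands in for the real file or database writer) is BlockingCollection<T> as the multiple-producer/single-consumer queue plus one LongRunning task as the dedicated consumer, replacing the hand-rolled lock/semaphore pair:
      using System;
      using System.Collections.Concurrent;
      using System.Threading.Tasks;

      class LogWriter
      {
          readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
          readonly Task _consumer;

          public LogWriter()
          {
              // One dedicated, long-running consumer task drains the queue in order.
              _consumer = Task.Factory.StartNew(
                  () =>
                  {
                      foreach (var line in _queue.GetConsumingEnumerable())
                          WriteLine(line);                        // hypothetical sink
                  },
                  TaskCreationOptions.LongRunning);
          }

          public void Enqueue(string line) { _queue.Add(line); }  // safe from any producer thread

          public void Shutdown()
          {
              _queue.CompleteAdding();   // lets GetConsumingEnumerable finish
              _consumer.Wait();
          }

          static void WriteLine(string line) { Console.WriteLine(line); }
      }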

    Read the article

  • Drupal: How to set separate editing template files for each node?

    - by Thomas John
    Hello guys, I have a 3-page Drupal (6.20) site and each page has its own template, like page-node-1.tpl.php, page-node-2.tpl.php, page-node-3.tpl.php. I would like to set separate templates for editing each node. I tried page-node-1-edit.tpl.php but it's not working; page-node-edit.tpl is working, but it's common to all nodes. I need separate editing templates for each node, like page-node-1-edit.tpl.php and page-node-2-edit.tpl.php. Thanks a lot for your time.

    Read the article

  • Does the number of busy worker threads in the CLR ThreadPool affect performance of I/O threads?

    - by andrej351
    We have a Windows Service which hosts a number of WCF services and, in an unrelated part of the app, makes extensive use of the TPL Task class to asynchronously do relatively short bits of work. It is my understanding that WCF uses managed I/O threads from the ThreadPool to execute requests. I noticed that after deploying a feature which significantly raised the application's use of Tasks, and as such the use of ThreadPool worker threads as well, the performance of a couple of web services has become very slow. We're talking minutes instead of less than a second. The number of Tasks actually trying to run at any one time can range between 20 and 1000, which makes me think that any new (last in) work needing some CPU time could be forced to wait for quite some time. Does the (in my case extremely large) number of busy ThreadPool worker threads affect the ThreadPool's managed I/O threads? Or could these two be connected in any way? Thanks!
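
    Worker threads and I/O completion-port threads are tracked separately by the pool, but a quick hedged diagnostic (a sketch only, not a fix) is to sample both counts while the slowdown is happening and see whether either side is exhausted relative to its minimum:
      int workers, ioThreads;
      ThreadPool.GetAvailableThreads(out workers, out ioThreads);
      Console.WriteLine("available workers={0}, completion-port threads={1}", workers, ioThreads);

      int minWorkers, minIo;
      ThreadPool.GetMinThreads(out minWorkers, out minIo);
      Console.WriteLine("min workers={0}, min completion-port threads={1}", minWorkers, minIo);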

    Read the article

  • Asynchronously returning hierarchical data using .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, and want to search for a word in each file... what should my API look like? In this scenario I know that each sub directory will have 0..10000 files or 0...10000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found (better means closer to the root of c:) I may also be interested in finding all matches in relation to its position in the hierarchy. (akin to a report) Question: How should I return data to my caller? My first guess is that I think I need a shared object that will maintain the current "status" of the traversal (started | notstarted | complete ) , and might base it on the System.Collections.Concurrent. Another idea that I'm considering is the consumer/producer pattern (which ConcurrentCollections can handle) however I'm not sure what the objects "look" like. Optional Logical Constraint: The API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for.  If someone were to literally do a DIR /S as described above then they would need to account for more than one matching file per subdirectory. More information : I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node. It is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found) but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want to get a report of ALL the nodes that contain this RowKey.
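
    Purely as a strawman for the "what should the return object look like" part, here is a hedged sketch (every name is hypothetical, and System.Collections.Concurrent is assumed) of a handle exposing traversal status plus a consumable stream of hits, with a Depth field so the caller can re-rank toward the root as results arrive:
      enum TraversalStatus { NotStarted, Running, Complete }

      class SearchHit
      {
          public string Path  { get; set; }   // where in the hierarchy the match lives
          public int    Depth { get; set; }   // distance from the starting node; smaller means closer to the root
      }

      class SearchHandle
      {
          public TraversalStatus Status { get; internal set; }

          // Producer tasks Add() hits as they find them and call CompleteAdding()
          // when the whole tree has been visited; the caller consumes as they arrive.
          public BlockingCollection<SearchHit> Hits { get; private set; }

          public SearchHandle() { Hits = new BlockingCollection<SearchHit>(); }
      }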

    Read the article

  • Built in background-scheduling system in .NET?

    - by Lasse V. Karlsen
    I ask though I doubt there is any such system. Basically I need to schedule tasks to execute at some point in the future (usually no more than a few seconds or possibly minutes from now), and have some way of cancelling that request unless too late. Ie. code that would look like this: var x = Scheduler.Schedule(() => SomethingSomething(), TimeSpan.FromSeconds(5)); ... x.Dispose(); // cancels the request Is there any such system in .NET? Is there anything in TPL that can help me? I need to run such future-actions from various instances in a system here, and would rather avoid each such class instance to have its own thread and deal with this. Also note that I don't want this (or similar, for instance through Tasks): new Thread(new ThreadStart(() => { Thread.Sleep(5000); SomethingSomething(); })).Start(); There will potentially be a few such tasks to execute, they don't need to be executed in any particular order, except for close to their deadline, and it isn't vital that they have anything like a realtime performance concept. I just want to avoid spinning up a separate thread for each such action.
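
    In .NET 4.0 there is no Task-based delay built in (Task.Delay arrives in a later framework version), but the requested shape is a thin wrapper over System.Threading.Timer, which does not dedicate a thread per scheduled action. A minimal sketch (the Scheduler class is hypothetical):
      static class Scheduler
      {
          public static IDisposable Schedule(Action action, TimeSpan delay)
          {
              // One-shot timer: a period of -1 ms means "do not repeat".
              // Disposing the returned handle before the due time cancels the callback.
              return new System.Threading.Timer(_ => action(), null, delay, TimeSpan.FromMilliseconds(-1));
          }
      }

      // usage, mirroring the question:
      var x = Scheduler.Schedule(() => SomethingSomething(), TimeSpan.FromSeconds(5));
      x.Dispose();   // cancels the request if it has not fired yet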

    Read the article

  • Dataflow Pipeline holding on to memory

    - by Jesse Carter
    I've created a Dataflow pipeline consisting of 4 blocks (including one optional block) which is responsible for receiving a query object from my application across HTTP, retrieving information from a database, doing an optional transform on that data, and then writing the information back in the HTTP response. In some testing I've done, I've been pulling down a significant amount of data from the database (570 thousand rows) which are stored in a List object and passed between the different blocks, and it seems like even after the final block has completed, the memory isn't being released. RAM usage in Task Manager will spike up to over 2 GB and I can observe several large spikes as the List hits each block. The signatures for my blocks look like this:
      private TransformBlock<HttpListenerContext, Tuple<HttpListenerContext, QueryObject>> m_ParseHttpRequest;
      private TransformBlock<Tuple<HttpListenerContext, QueryObject>, Tuple<HttpListenerContext, QueryObject, List<string>>> m_RetrieveDatabaseResults;
      private TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>, Tuple<HttpListenerContext, QueryObject, List<string>>> m_ConvertResults;
      private ActionBlock<Tuple<HttpListenerContext, QueryObject, List<string>>> m_ReturnHttpResponse;
    They are linked as follows:
      m_ParseHttpRequest.LinkTo(m_RetrieveDatabaseResults);
      m_RetrieveDatabaseResults.LinkTo(m_ConvertResults, tuple => tuple.Item2 is QueryObjectA);
      m_RetrieveDatabaseResults.LinkTo(m_ReturnHttpResponse, tuple => tuple.Item2 is QueryObjectB);
      m_ConvertResults.LinkTo(m_ReturnHttpResponse);
    Is it possible to set up the pipeline such that once each block is done with the list it no longer holds on to it, and so that once the entire pipeline is completed the memory is released?
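
    One hedged observation (a sketch against the shipped TPL Dataflow options, not a guaranteed fix for the post-completion working set): giving the blocks a BoundedCapacity caps how many of those 570-thousand-row lists can sit buffered inside any one block at a time, which limits what the pipeline holds on to while it runs. ConvertTuple here is a hypothetical stand-in for the real transform:
      var bounded = new ExecutionDataflowBlockOptions { BoundedCapacity = 1 };

      m_ConvertResults = new TransformBlock<Tuple<HttpListenerContext, QueryObject, List<string>>,
                                            Tuple<HttpListenerContext, QueryObject, List<string>>>(
          tuple => ConvertTuple(tuple),
          bounded);
      // Note: when a bounded block is full, Post returns false, so feed it via
      // SendAsync (or check Post's return value); linked sources handle this by postponing.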

    Read the article

  • Can I override a theme function with a .tpl file?

    - by Nick Lowman
    Hi everyone, how would I go about overriding a theme function with a .tpl file? I know how to override a .tpl file with a theme function, but not the other way round. I can't seem to find anywhere that tells me how, so maybe it's not possible or not good practise. For example, if there was a theme function defined in a module called super_results and registered with the theme registry, like the example below, how would I go about overriding it with super_results.tpl.php?
      'super_results' => array(
          'arguments' => array('title' => NULL, 'results' => NULL, 'votes' => NULL),
      ),
      function modulename_super_results($title, $results, $votes) {
          output HTML
      }

    Read the article

  • Tracking Protection List in IE9

    - by Emanuele Bartolesi
    To protect my privacy when I surf the internet, I use the AdBlockPlus add-in for Firefox. But when I use Internet Explorer 9, this add-in doesn't work. Internet Explorer 9 (and I hope Internet Explorer 10) has a built-in feature to add a TPL. There is a JavaScript function to call named msAddTrackingProtectionList. This function has two parameters: the first one is the link to the TPL and the second one is the title of the TPL. Doing this is very easy. Add this simple JavaScript call to your website or to a blank HTML page:
      <a href="javascript:window.external.msAddTrackingProtectionList('http://easylist-msie.adblockplus.org/easyprivacy.tpl', 'EasyList Privacy')">EasyPrivacy TPL</a>
    The effect is below: EasyPrivacy TPL. After clicking, a confirmation prompt appears. For security reasons this JavaScript function can only be called from a user interaction: buttons, links, forms. For more information about the msAddTrackingProtectionList function, go to the MSDN Library. For more information about EasyList, go to Easy List TPL.

    Read the article

  • Basic PHP question for Drupal Views theming

    - by oalo
    My PHP and programming knowledge is extremely basic, so this is probably a dumb question: I am theming a View in Drupal 6, and I want to add an id with a consecutive number to each item in the view (the first item would have the id #item1, the second #item2, etc.). I am customizing the style output (views-view-unformatted--MYVIEWNAME.tpl.php) and the row style output (views-view-fields--MYVIEWNAME.tpl.php). I want to add a counter variable in the foreach loop in the style output tpl and then use that variable in the row style output tpl, but the latter is not recognizing the variable. It does not give me any errors, but it doesn't print the number. I understand this probably has something to do with variable visibility; how can I declare the counter variable in the style .tpl so I can then use it in the row style .tpl? Thank you

    Read the article

  • Extra white space, on HTML output, on PHP MVC

    - by user316841
    Hi, I'm getting extra white space in the view output (the HTML) that is not from CSS or anything like it. I've checked for stray closing ?> tags (removed them where I could) and saved the files as UTF-8 without BOM. I also checked for existing white space at the beginning and end of each file. This is the structure: index.php (the entry point), MODEL/, CONTROLLER/, VIEW/. Let's say that through the GET method, the var TPL is sent with some value. Let's call it LIST, so it pulls the LIST model with all the data and then shows the right template to the user, with the right data. I tested with require_once, include_once, include, and even with readfile (just to test). The LIST template opens header.tpl and footer.tpl; I also tried removing both includes from the LIST template, but the extra white space remained. This is the controller code where the extra white space is coming from:
      $model_works->getRows();
      $rows = $model_works->rows;
      if ( !require_once('views/list_works.tpl.php') ) {
          echo "Error.";
      } // end if clause
    The list_works.tpl.php is basically HTML with tags; I've tested by changing the extension to something else, like html. Also, note that at the top of this file we use require_once to open header.tpl, and at the bottom footer.tpl. I've tested by removing both, and the extra white space was still generated. The extra white space is being generated here: EXTRA WHITE SPACE HERE. Thanks a lot for looking, ;D

    Read the article

  • Javascript and jQuery (Fancybox) question

    - by songdogtech
    I'm using the JavaScript function below for Twitter sharing (as well as other services; the function code is simplified to just Twitter for this question). It grabs the to-be-shared page URL and title and is invoked in the link with onclick. That results in the Twitter share page loading in a pop-up browser window, i.e. <img src="/images/twitter_16.png" onclick="share.tw()" />. In order to be consistent with other design aspects of the site, what I'd like to be able to do is have the Twitter share page open not in a standard browser window but in a Fancybox (jQuery) window. Fancybox can load an external page in an iframe when the img or href link contains a class (in this case class="iframe") in the link and in the document ready function in the header. Right now, of course, when I give the iframe class to the link that also has the onclick share.tw(), I get two popups: one browser window popup with the correct Twitter share page loaded, and a Fancybox jQuery popup that shows a site 404. How can I change the function to use Fancybox to present the Twitter share page? Is that a correct way to approach it? Or is there a better way, such as implementing the share function in jQuery, too? Thanks... The JavaScript share function:
      var share = {
          tw:function(title,url) {
              this.share('http://twitter.com/home?status=##URL##+##TITLE##',title,url);
          },
          share:function(tpl,title,url) {
              if(!url) url = encodeURIComponent(window.location);
              if(!title) title = encodeURIComponent(document.title);
              tpl = tpl.replace("##URL##",url);
              tpl = tpl.replace("##TITLE##",title);
              window.open(tpl,"sharewindow"+tpl.substr(6,15),"width=640,height=480");
          }
      };
    It is invoked, i.e.:
      <img src="/images/twitter_16.png" onclick="share.tw()" />
    The Fancybox function, invoked by adding class="iframe" to the img or href link:
      $(".iframe").fancybox({
          'width' : '100%',
          'height' : '100%',
          'autoScale' : false,
          'transitionIn' : 'none',
          'transitionOut' : 'none',
          'type' : 'iframe'
      });

    Read the article

  • How to extend the Smarty class right

    - by Evolutio
    I have a problem with Smarty. I have made this class:
      <?php
      require_once(INCLUDE_PATH.'smarty/Smarty.class.php');

      class IndexPage extends Smarty
      {
          public $templateName = 'index.tpl';

          public function __construct()
          {
              parent::__construct();
              $this->showTemplate();
          }

          public function showTemplate()
          {
              self::display('index.tpl');
          }
      }
      ?>
    But why doesn't it work? Here is the error:
      Fatal error: Uncaught exception 'SmartyException' with message 'Unable to load template file 'index.tpl'' in D:\xampp\htdocs\aiondb\lib\smarty\sysplugins\smarty_internal_templatebase.php:127
      Stack trace:
      #0 D:\xampp\htdocs\aiondb\lib\smarty\sysplugins\smarty_internal_templatebase.php(374): Smarty_Internal_TemplateBase->fetch('index.tpl', NULL, NULL, NULL, true)
      #1 D:\xampp\htdocs\aiondb\lib\classes\IndexPage.class.php(14): Smarty_Internal_TemplateBase->display('index.tpl')
      #2 D:\xampp\htdocs\aiondb\lib\classes\IndexPage.class.php(10): IndexPage->showTemplate()
      #3 D:\xampp\htdocs\aiondb\index.php(3): IndexPage->__construct()
      #4 {main}
      thrown in D:\xampp\htdocs\aiondb\lib\smarty\sysplugins\smarty_internal_templatebase.php on line 127

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series of asynchronous features included in the new Async CTP preview for next versions of C# and VB. Check out Part I for more information. So, let’s continue with TPL Dataflow: Asynchronous functions TPL Dataflow Task based asynchronous Pattern Part II: TPL Dataflow Definition (by quote of Async CTP doc): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? how do you avoid concurrency issues? how do you notify when data is available? how do you control how much data is waiting to be consumed? etc.  Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides an storage for some kind of data (instances of <T>). So, let’s review data processing blocks available. Blocks a categorized into three groups: Buffering Blocks Executor Blocks Joining Blocks Think of them as electronic circuitry components :).. 1. BufferBlock<T>: it is a FIFO (First in First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process it). 2. BroadcastBlock<T>: same FIFO queue for messages (instances of <T>) but link the receiving event to all consumers (it makes the data available for consumption to N number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value and once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each data in the queue. You could also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input, that is way it defines an output parameter. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. 
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
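
    As a quick taste before those examples, here is a minimal sketch of wiring two of the blocks above together (written against the shipped TPL Dataflow surface, so names may differ slightly from the CTP): a TransformBlock feeding an ActionBlock.
      var lengths = new TransformBlock<string, int>(text => text.Length);
      var printer = new ActionBlock<int>(n => Console.WriteLine("length = {0}", n));

      lengths.LinkTo(printer);          // messages flow Transform -> Action as they complete

      lengths.Post("hello dataflow");
      lengths.Post("tpl");
      lengths.Complete();               // signals that no more input is coming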

    Read the article

  • ExtJS: how to make a nested child using XTemplate when we don't know how deep it is?

    - by Ebo the gordon
    First, sorry if my English is bad. In my script, the variable tplData below is dynamic (let's say it is generated from a database), so every child can have another child, and so on. Now I'm stuck on how to iterate over it.
      var tplData = [{
          name : 'Naomi White'
      },{
          name : 'Yoko Ono'
      },{
          name : 'John Smith',
          child : [{
              name:'Michael (John\'s son)',
              child: [{
                  name : 'Brad (Michael\'s son,John\'s grand son)'
              },{
                  name : 'Brid (Michael\'s son,John\'s grand son)',
                  child: [{
                      name:'Buddy (Brid\'s son,Michael\'s grand son)'
                  }]
              },{
                  name : 'Brud (Michael\'s son,John\'s grand son)'
              }]
          }]
      }];

      var myTpl = new Ext.XTemplate(
          '<tpl for=".">',
              '<div style="background-color: {color}; margin: 10px;">',
              '<b> Name :</b> {name}<br />',
              // how to make this repeat over and over for every child (while it has one)
              '<tpl if="typeof child !=\'undefined\'">',
                  '<b> Child : </b>',
                  '<tpl for="child">',
                      '{name} <br />',
                  '</tpl>',
              '</tpl>',
              ///////////////////////////////////////
              '</div>',
          '</tpl>'
      );
      myTpl.compile();
      myTpl.overwrite(document.body, tplData);

    Read the article

  • Call a function in an ExtJS XTemplate

    - by Snowright
    I'm familiar with using a function to evaluate a specific condition in an XTemplate, but I'm not sure how to directly call a function without the conditional if statement. My code, for example, wants to append some characters to a string that I am using within my XTemplate. I think the best way to do it is to append the characters when the XTemplate is rendered.
      var myTpl = new Ext.XTemplate(
          '<tpl for=".">',
              '<tpl if="this.isThumbnailed(thumbnailed) == true">',
              '<img src=this.getThumbUrl(rawThumbUrl)/>', // this call to the function does not work; also tried variations of this
              '</tpl>',
          '</tpl>',
          {
              isThumbnailed : function(thumbnailed) {
                  return ...;
              },
              getThumbUrl : function(rawThumbUrl) {
                  // ...
                  // this function does not get called
                  return ...;
              }
          }
      )

    Read the article
